Re: Virtual Memory

  • From: "Mark Brinsmead" <pythianbrinsmead@xxxxxxxxx>
  • To: kevinc@xxxxxxxxxxxxx
  • Date: Fri, 25 Aug 2006 20:19:54 -0600

Just to build on what Kevin has said: there have historically been two major
methods for allocating swap space under various flavours of UNIX.  With
exceptions, of course -- as I recall AIX3 implemented a rather odd
combination of the two.

In the first "school", swap space is allocated proactively.  That is,
whenever a process is started (or requests more memory), swap space is
allocated immediately, even if nothing is (perhaps ever!) written to it.
Under this system, the OS is always guaranteed to have enough swap space on
hand to page out any process should physical RAM run short.  Under this
model, (usable) virtual memory is exactly equal to the size of the total swap
space; if you have more RAM than swap, you will be unable to use it all.
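
If you want to see that from userland, here is a rough (and untested) C
sketch.  It is only an illustration, and the exact behaviour depends on the
OS and its settings (on Linux, vm.overcommit_memory=2 is roughly this first
school).  The program reserves memory without ever touching it; under eager
reservation, malloc() starts failing once swap (plus whatever slice of RAM
the policy allows) has been spoken for, even though no page has been
written:

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    const size_t chunk = 64 * 1024 * 1024;   /* reserve 64 MB per request */
    size_t total = 0;
    void *p;

    /* Keep reserving address space but never write to it.  Under eager
     * reservation every request is charged against swap immediately, so
     * malloc() eventually returns NULL even though the pages are untouched. */
    while ((p = malloc(chunk)) != NULL)
        total += chunk;

    printf("malloc() first failed after reserving %zu MB (untouched)\n",
           total >> 20);
    return 0;
}

Under the lazy policy described next, the same loop can keep "succeeding"
far past anything the machine could actually back, because nothing is
charged until the pages are touched.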

In the second "school", swap space is allocated only as needed.  This was
popular back in the good old days when 1GB of RAM may have cost $100,000 or
more, but 1GB of disk also cost as much as $5,000 or $10,000 -- even sites
that could afford 2 or 3 GB of RAM would complain bitterly about the cost of
an additional 2 or 3 GB of disk space for swap that they figured they would
never use.  Now that disk storage costs closer to $1 per gigabyte, people
care a little less...

Anyway, with the "swap on demand" school of thought, swap space is only
allocated when needed, that is, when physical RAM is (almost) exhausted.
When physical memory runs short, the OS looks for a process to page
out.  If there is available swap space, the process is paged (or
swapped) out [[paging and swapping are not the same thing, but there is no
need to go into that right now]] and life goes on.  If there is not
sufficient swap space available, the OS has two choices:

  1. Panic (crash), or
  2. Choose a process to kill, kill it, and carry on.

Under this method, available "virtual" memory is (almost) the SUM of
physical RAM + configured swap space.
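
To make the contrast concrete, here is the mirror-image sketch (again
untested, again assuming something Linux-ish; vm.overcommit_memory=1 is the
purest form of this second school, and please don't run it on a box you care
about).  The reservation itself sails through even when it exceeds
RAM + swap; the trouble only starts when the pages are actually written,
which is exactly the point at which the OS faces the panic-or-kill choice
above:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    size_t want = (size_t)64 << 30;   /* 64 GB; assumed to exceed RAM + swap */
    char *p = malloc(want);

    if (p == NULL) {
        /* An eager-reservation policy says "no" right here, at allocation time. */
        fprintf(stderr, "commit refused up front\n");
        return 1;
    }

    printf("reservation of %zu GB succeeded; now touching the pages...\n",
           want >> 30);

    /* Faulting the pages in is what forces the OS to find real RAM or swap
     * for them; under a lazy policy, this is where a victim may get killed. */
    memset(p, 0xA5, want);

    puts("survived; apparently there really was enough RAM + swap after all");
    free(p);
    return 0;
}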

Interestingly enough, in the "allocate-swap-on-demand" camp, the most common
"algorithm" I recall for deciding which process to kill was quite simple.
In order to minimise the number of processes to be killed, the OS would
always kill the one with the largest memory footprint first.  On a database
server, you can probably guess what this will be...
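
For what it's worth, that heuristic fits in a few lines.  This is purely an
illustration, not any particular kernel's code (the proc_t type and the
sample table below are invented, and modern Linux weighs an oom_score rather
than raw footprint alone), but it shows why the database instance is the one
that gets shot:

#include <stddef.h>
#include <stdio.h>

typedef struct {
    int    pid;
    size_t resident_kb;   /* memory footprint, e.g. resident set size in kB */
} proc_t;

/* Return the index of the process with the largest footprint, i.e. the one
 * whose death frees the most memory with a single kill. */
static size_t pick_victim(const proc_t *procs, size_t n)
{
    size_t victim = 0;
    for (size_t i = 1; i < n; i++)
        if (procs[i].resident_kb > procs[victim].resident_kb)
            victim = i;
    return victim;
}

int main(void)
{
    proc_t table[] = {
        { 101,   40 * 1024 },   /* 40 MB shell            */
        { 202,  512 * 1024 },   /* 512 MB app server      */
        { 303, 8192 * 1024 },   /* 8 GB database instance */
    };
    size_t v = pick_victim(table, sizeof table / sizeof table[0]);
    printf("victim: pid %d (%zu kB resident)\n",
           table[v].pid, table[v].resident_kb);
    return 0;
}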

Anyway, this is based on memories that are now rather hazy.  I'm sure you'll
still find representatives of both schools.  And maybe still hybrids, too.
Although with the cost of disk storage being what it is, I would expect few
OS implementors to choose to sacrifice reliability/stability for the sake of
saving customers $5 or $10 worth of storage...


On 8/25/06, Kevin Closson <kevinc@xxxxxxxxxxxxx> wrote:

>>> A decent OS shouldn't start paging much before it starts
>>> running out of physical memory,


...well, there is a difference between paging and allocating swap in
the event paging is required.  The jury is out on OSes that allocate
swap only at the point when some major page faults or process swaps
(same mechanisms, usually) need to occur.  If there doesn't happen to
be enough swap space when such an on-demand allocation takes place, the
only resort for the OS is to start killing "stuff"... what "stuff" is
best to kill?

Way back when, there were Unix derivatives that didn't
like this idea, so they allocated swap within the page
allocation code so there would never be a desperation swap
failure... if a process has a page, there is a page in
swap behind it.
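
On a current Linux box (an assumption on my part; other systems expose this
differently, if at all) you can actually watch that accounting:
/proc/meminfo reports CommitLimit, the ceiling on reservations under strict
accounting (vm.overcommit_memory=2), and Committed_AS, how much has been
promised so far.  A quick sketch that prints just those two lines:

#include <stdio.h>
#include <string.h>

int main(void)
{
    FILE *f = fopen("/proc/meminfo", "r");
    if (f == NULL) {
        perror("/proc/meminfo");
        return 1;
    }

    char line[256];
    while (fgets(line, sizeof line, f) != NULL) {
        /* CommitLimit: the reservation ceiling; Committed_AS: promised so far. */
        if (strncmp(line, "CommitLimit:", 12) == 0 ||
            strncmp(line, "Committed_AS:", 13) == 0)
            fputs(line, stdout);
    }
    fclose(f);
    return 0;
}

With strict accounting, Committed_AS is never allowed past CommitLimit,
which is the spirit of the "a page in swap behind every page" rule; with
overcommit enabled, Committed_AS can sail right past it.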

I never could grasp the idea of just picking a process and killing
it because the system is low on swap... especially since the
best candidate to kill is one that has a significant amount
of virtual memory...


-- //www.freelists.org/webpage/oracle-l





--
Cheers,
-- Mark Brinsmead
  Staff DBA,
  The Pythian Group
  http://www.pythian.com/blogs
