Rethinking Swap Space

A long time ago, in a galaxy far, far away, somebody decided on a general rule for handling virtual memory in a computer that went something like this:

“Thou shalt have one and a half to two times the amount of swap space on a hard disk as thou hast RAM in thy computer.”

I think it’s time to revisit this rule from On High with an eye towards real-life numbers.

OK, first things first. RAM is the part of your computer that stores temporary, working data. When you see the splash screen while Windows is booting up, what it’s doing is loading data from your hard disk drive into the RAM of your computer.

As most people know, the Windows startup can take a long time. Even on modern, uber-fast machines, you are looking at a 15-30 second startup time, minimum, from the moment you see the Windows splash screen. The vast majority of the time, the computer is just waiting for data to load from the hard drive so that it can execute it and continue on its merry way with booting up.

Once that data is loaded, your computer is pretty speedy at handling it. As a matter of fact, one of the most common upgrades on computers is to add more RAM. Simply adding more of this temporary storage space makes an enormous difference in your computer’s performance (up to a point), and it’s one of the cheapest upgrades you can perform. So much so that investing in a faster graphics card, faster hard drive, or faster CPU generally takes a back seat to having more RAM in your machine.

The throughput on modern RAM is pretty amazing, too. Some of the more recent technology is staggering! The memory runs at 400 million cycles per second, and usually follows a timing schedule that’s something like 4-3-3-3 (if we’re conservative). That means that out of every 13 memory cycles, your computer can read or write data for 9 of them. What’s this mean in real life?

9/13 * 400 million cycles per second * 64 bits per transfer = something like 2 gigabytes per second.

So a computer’s RAM can read and write somewhere around 2 gigabytes every single second (2 billion bytes).
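To make the back-of-the-envelope math concrete, here’s the same calculation as a quick Python sketch (the 400 MHz clock, 4-3-3-3 timings, and 64-bit bus are the numbers assumed above):

```python
# Rough RAM throughput from the numbers above.
clock_hz = 400 * 10**6        # 400 million memory cycles per second
useful_cycles = 9             # data moves on 9 out of every...
total_cycles = 13             # ...13 cycles (4-3-3-3 timings: 4+3+3+3)
bytes_per_transfer = 64 // 8  # 64-bit bus = 8 bytes at a time

ram_bytes_per_sec = clock_hz * useful_cycles // total_cycles * bytes_per_transfer
print(round(ram_bytes_per_sec / 10**9, 1), "GB/s")  # ~2.2 GB/s
```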

In contrast, the hard disk of a system can deliver maybe 50 megabytes (50 million bytes) per second, and if you have to write to it? Try more like ten, or maybe fifteen, megabytes per second.

In other words, DOG SLOW.

Now, back in the day, memory was extremely expensive. Per megabyte, hard disk drives were much cheaper. Conventional wisdom was that, if you had 4 megabytes of memory in your machine, you should have about 8 megabytes of “swap space” (pretend memory, really) on your hard disk drive. Keeping in mind that hard disk drives have only been increasing in speed rather linearly, while memory access speeds have increased geometrically (an ever-steepening curve), this makes sense. Small amount of memory, pretty small swap space.

Fact was, performance was acceptable.

These days, your average new PC ships with 1GB to 2GB of RAM. Realistically, 512MB is a practical MINIMUM for effective operation, and there’s a substantial speed boost going to 1GB from 512MB. Many workstations are shipping with 4GB, and the only reason we don’t commonly have workstations with a lot more RAM than that is because it’s pretty rare (and expensive) to run a 64-bit version of Windows on a 64-bit workstation. 32-bit machines can’t address more than 4GB of RAM efficiently.
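That 4GB ceiling is just pointer arithmetic: a 32-bit address can name at most 2^32 distinct bytes. A one-liner to confirm:

```python
# A 32-bit pointer can distinguish at most 2**32 byte addresses.
print(2**32, "bytes =", 2**32 / 2**30, "GiB")  # 4294967296 bytes = 4.0 GiB
```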

So where I work, and elsewhere, I still see people following the “twice as much swap as memory” rule. This means that I see machines every day with 2GB to as much as 8GB of swap space.

Do the math on that, folks. If you are just trying to READ 4GB of swap, it’s going to take eighty seconds to read it, minimum! Considering that generally the machine is also trying to write to the “virtual memory” on a swap partition or file at the same time as it’s reading, multiply that by at least 2, and more like 3 or 4 if the disk is a standard slow-writing consumer disk.
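For the skeptical, here’s that math as a sketch, using the 50 megabytes-per-second read figure from earlier (the 2x-4x penalty for simultaneous writes is the same assumption made above):

```python
# Time just to stream a 4GB swap file off a 50 MB/s disk.
swap_bytes = 4 * 10**9   # 4GB of swap
read_rate = 50 * 10**6   # ~50 MB/s sequential reads

read_time = swap_bytes / read_rate
print(round(read_time), "seconds, pure read")  # 80 seconds
# Reading and writing at the same time multiplies that by 2-4x:
print(round(2 * read_time), "-", round(4 * read_time), "seconds, mixed load")
```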

Basically, if the computer is actually using more than a few hundred megabytes of swap space, it has slowed to a crawl, and will be almost completely unusable.

I hereby propose a new rule:

“Thy swap space shall equal the maximum amount of data thy storage subsystem can deliver within 10 seconds or less.”

Now, think about the practical ramifications. If your hard disk can read 50 megabytes a second (like most modern 7200 RPM IDE drives), OK. Get yourself a 500 megabyte swap file. If you have a nifty RAID array which can deliver 100 megabytes a second, fine. Go for a 1GB swap file.
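In code, the whole rule is one multiplication. This sketch (the helper name and the 10-second budget parameter are just my restatement of the rule, not anything standard) sizes swap from measured disk throughput:

```python
def swap_size_bytes(read_bytes_per_sec, budget_secs=10):
    """Swap space = whatever the disk can deliver in ~10 seconds."""
    return read_bytes_per_sec * budget_secs

# 7200 RPM IDE drive at ~50 MB/s -> 500 MB swap file
print(swap_size_bytes(50 * 10**6) / 10**6, "MB")   # 500.0 MB
# RAID array at ~100 MB/s -> 1 GB swap file
print(swap_size_bytes(100 * 10**6) / 10**9, "GB")  # 1.0 GB
```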

BTW: I’m aware of special cases like Solaris, where /tmp is actually your swap, so it has a direct effect on performance to have this filesystem be quite large. I don’t agree with that particular architectural decision, though… /tmp should be its own space.

The age of large swap files or swap partitions is dead. Let it rest in peace.

One thought on “Rethinking Swap Space”

  1. Reason for large swap file

    After some private mail with an individual who is a proponent of large swap sizes, I will finally grant one reason — and one reason only! — for running with a swap size of at least 1.5X the amount of RAM on your system:

    If you are a developer, and expect to possibly create kernel panics due to your code, you should devote at least 1.5X the amount of physical RAM on your system to your swap file for dumping the stack to swap when the kernel panics. I would say that most programmers don’t know how to read a kernel stack dump, so if you’re one of the few who need this option, you already know why you need it. If you don’t allocate more hard disk space than you have RAM available, you will end up with only a partial dump, which will probably be useless to you.

    The reason it’s still advocated by certain software companies (like the very large one I work for) is, ultimately, to make our jobs easier. Yep, the reason we list a minimum of 2X your RAM size in swap as a requirement to install our software is to cover our A$$ET$. If you are running our software and your kernel panics, we want to be able to get the stack dump and analyze it in order to figure out what caused the panic.

    I just got some personal experience with this today. Digging through a gigabyte+ of stack dump to figure out why a machine keeps dumping is not anybody’s idea of a good time.


    Matthew P. Barnson
