Thrashing, Paging, Swap File


Upload: khaira-al-hafi

Post on 29-Nov-2014


TRANSCRIPT

Page 1: Thrashing, Paging, Swap File

6.7 Thrashing

Thrashing is a degenerate case that occurs when there is insufficient memory at one level in the memory hierarchy to properly contain the working set required by the upper levels of the memory hierarchy. This can result in the overall performance of the system dropping to the speed of a lower level in the memory hierarchy. Therefore, thrashing can quickly reduce the performance of the system to the speed of main memory or, worse yet, the speed of the disk drive.

There are two primary causes of thrashing: (1) insufficient memory at a given level in the memory hierarchy, and (2) a program that does not exhibit locality of reference. If there is insufficient memory to hold a working set of pages or cache lines, then the memory system is constantly replacing one block (cache line or page) with another. As a result, the system winds up operating at the speed of the slower memory in the hierarchy. A common example occurs with virtual memory. A user may have several applications running at the same time, and the sum total of these programs' working sets is greater than the physical memory available to them. As a result, as the operating system switches between the applications it has to copy each application's data to and from disk, and it may also have to copy the code from disk to memory. Because a context switch between programs is normally much faster than retrieving data from disk, thrashing slows the context switch down to the speed of swapping the applications to and from disk, which slows the programs down by a tremendous factor.

If the program does not exhibit locality of reference and the lower memory subsystems are not fully associative, then thrashing can occur even if there is free memory at the current level in the memory hierarchy. For example, suppose an eight kilobyte L1 caching system uses a direct-mapped cache with 16-byte cache lines (i.e., 512 cache lines). If a program references data objects 8K apart on each access then the system will have to replace the same line in the cache over and over again with each access. This occurs even though the other 511 cache lines are currently unused.

If insufficient memory is the cause of thrashing, an easy solution is to add more memory (if possible, it is rather hard to add more L1 cache when the cache is on the same chip as the processor). Another alternative is to run fewer processes concurrently or modify the program so that it references less memory over a given time period. If lack of locality of reference is causing the problem, then you should restructure your program and its data structures to make references local to one another.
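The direct-mapped collision described above is easy to check with a toy model of the 8K, 16-byte-line cache from the example (the function and constant names here are illustrative, not from any real hardware interface):

```python
# Toy model of a direct-mapped cache: 8 KB capacity, 16-byte lines -> 512 lines.
# The line index is taken from address bits [4..12], so any two addresses
# exactly 8 KB apart always map to the same line.

CACHE_SIZE = 8 * 1024                    # bytes
LINE_SIZE = 16                           # bytes
NUM_LINES = CACHE_SIZE // LINE_SIZE      # 512 lines

def cache_line_index(address):
    """Which of the 512 lines a byte address maps to."""
    return (address // LINE_SIZE) % NUM_LINES

# Two data objects 8 KB apart: every access evicts the other one's line,
# even though the remaining 511 lines stay unused.
a, b = 0x0000, 0x2000
print(cache_line_index(a), cache_line_index(b))  # both map to line 0
```

Accesses that alternate between `a` and `b` therefore miss every time, while an access pattern confined to one 8K window would hit after the first load.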

1Actually, virtual memory is really only supported by the 80386 and later processors. We'll ignore this issue here since most people have an 80386 or later processor.

2Strictly speaking, you actually get a 36-bit address space on Pentium Pro and later processors, but Windows and Linux limit you to 32 bits, so we'll use that limitation here.

http://webster.cs.ucr.edu/AoA/Windows/HTML/MemoryArchitecturea3.html

Page 2: Thrashing, Paging, Swap File

Thrashing occurs when a processor spends so much of its time transferring pages between virtual memory (swap file on disk) and physical memory (RAM) that it has little time to do anything else. It is often a sign that a user is trying to do more things at the same time than the physical memory of his/her computer can realistically sustain.

http://www.blurtit.com/q1928693.html

Page 3: Thrashing, Paging, Swap File

8.8 THRASHING

Suppose some process does not have enough frames. Even though the number of frames allocated to it can technically be reduced to a minimum, there is some set of pages that is actively in use. If a process does not have enough frames, page faults occur frequently, so some pages must be replaced. But because all of its pages are in active use, the page that is replaced will be needed again shortly. Consequently, page faults happen again and again: the process keeps faulting, replacing a page to satisfy the fault, faulting on the replaced page, and so on.

This high level of paging activity is called thrashing. A process is thrashing if it spends more time paging than executing. The effects of thrashing can be limited by using a local (priority) replacement algorithm. A graph of thrashing in a multiprogramming system can be seen in Figure 8-13.

http://lecturer.eepis-its.edu/~arna/Diktat_SO/8.Virtual%20Memory.pdf

Page 4: Thrashing, Paging, Swap File

THRASHING

If a process does not have enough frames, then even though we could reduce the number of frames allocated to it to a minimum, a large number of its pages remain in active use. This leads to page faults. In that case, we must replace some pages with the pages that are needed, even though the replaced pages will be needed again in the near future. The result is continual faulting.

This high level of paging activity is called thrashing. A process is said to be thrashing if it spends more time paging than executing (the process is busy swapping pages in and out).

Causes of Thrashing

Thrashing is triggered by low CPU utilization: if CPU utilization is too low, the operating system increases the degree of multiprogramming by adding new processes to the system.

As the degree of multiprogramming rises, CPU utilization also rises, though ever more slowly, until it reaches its maximum. If the degree of multiprogramming keeps increasing beyond that point, CPU utilization drops drastically and thrashing sets in. To raise CPU utilization and stop the thrashing, we must reduce the degree of multiprogramming.

mohiqbal.staff.gunadarma.ac.id/Downloads/.../08.Memori+Virtual.ppt 

Page 5: Thrashing, Paging, Swap File

9.7 Thrashing

9.7.1 Introduction

When a process does not have enough frames to hold the pages it is about to use, page faults occur frequently and pages must be replaced over and over. Thrashing is the condition in which a process is kept busy continually replacing the pages it needs, as in the illustration below.

The figure shows CPU utilization rising as the degree of multiprogramming increases, until at some point CPU utilization drops drastically; at that point thrashing can be stopped by lowering the degree of multiprogramming. When CPU utilization is too low, the operating system raises the degree of multiprogramming by spawning new processes, and in this situation a global replacement algorithm is typically in use. When a process needs more frames, page faults occur and CPU utilization falls further. When the operating system detects the low utilization, it increases the degree of multiprogramming still more, which drives CPU utilization down even more drastically; this is what causes thrashing. The effects of thrashing can be limited with a local replacement algorithm: a process that begins to thrash cannot take frames from another process, so it does not drag the other processes into thrashing as well. One way to avoid thrashing is to give each process exactly as many frames as it needs. One strategy for determining how many frames a process needs is the working-set strategy.
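The working-set strategy can be sketched in a few lines: Denning's working set W(t, delta) is the set of distinct pages referenced in the last delta memory accesses. A minimal illustration over an invented reference string:

```python
# Working set W(t, delta): the distinct pages referenced in the window
# (t - delta, t] of a recorded reference string. The reference string
# below is made up for illustration.

def working_set(refs, t, delta):
    """Pages referenced in the last `delta` accesses ending at time t."""
    start = max(0, t - delta)
    return set(refs[start:t])

refs = [1, 2, 1, 3, 1, 2, 4, 4, 4, 4, 5, 5]
print(working_set(refs, t=6, delta=4))   # {1, 2, 3}: needs 3 frames here
print(working_set(refs, t=12, delta=4))  # {4, 5}: needs only 2 frames now
```

Allocating each process at least as many frames as its current working set size keeps its fault rate low; when the sum of all working sets exceeds physical memory, the system is headed for thrashing.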

http://outofthebox.students-blog.undip.ac.id/2010/09/27/so-chapter-9-virtual-memory/

Page 6: Thrashing, Paging, Swap File

Modern operating systems generally use what is called a swap file or page file. A page file is a portion of external storage (outside the motherboard) that is treated like RAM. In such an operating system, every program has its own memory space that can be used only by that program and not by any other. This memory space is part of RAM, but portions of it may also reside in the swap file; this is managed by the memory-management component of the operating system. The memory space is divided into several further components:

o Global namespace: the part of the memory space that our program can access freely.

o Code space: holds the program code to be executed (machine code), produced by compiling the program's source code. This part is read-only, since we can only execute a program, not modify its code while it is running.

o Stack: the part of memory that stores temporary data while a function is being called and processed.

o Registers: the parts of our microprocessor that can hold data temporarily while a program is running. These registers maintain the state of our system so that it keeps running as it should. Registers are explained further in assembly programming. What we need to know for now is that one register holds the address of the program code currently being executed (called the instruction pointer); another register points to the stack location currently in use (called the stack pointer); and yet another register temporarily holds an address taken from the stack pointer (called here the stack frame, more commonly known as the frame pointer).

o Free store: the part of memory outside the areas described above.

http://bebas.vlsm.org/v06/Kuliah/MTI-PSOKS/2005/PSOSK-03-Representasi_Data.pdf

Page 8: Thrashing, Paging, Swap File

Swap File

A swap file or page file is a file created by the operating system on the hard disk as a temporary holding area for information from RAM. A swap file is very helpful on systems with minimal RAM capacity.

http://perpus.kampoengti.com/kamus-ti/r_487/swap-file

Page 9: Thrashing, Paging, Swap File

Definition of 'paging'

English to English, noun

1. calling out the name of a person (especially by a loudspeaker system): "the public address system in the hospital was used for paging" (source: wordnet30)

2. the system of numbering pages (source: wordnet30)

3. the marking or numbering of the pages of a book

http://www.artikata.com/arti-131640-paging.php

Page 10: Thrashing, Paging, Swap File

Paging

From Wikipedia, the free encyclopedia

This article is about computer virtual memory. For the wireless communication devices, see "Pager". Bank switching is also called paging. Page flipping is also called paging.

In computer operating systems, paging is one of the memory-management schemes by which a computer can store and retrieve data from secondary storage for use in main memory. In the paging memory-management scheme, the operating system retrieves data from secondary storage in same-size blocks called pages. The main advantage of paging is that it allows the physical address space of a process to be noncontiguous. Before paging came into use, systems had to fit whole programs into storage contiguously, which caused various storage and fragmentation problems.[1]

Paging is an important part of virtual memory implementation in most contemporary general-purpose operating systems, allowing them to use disk storage for data that does not fit into physical random-access memory (RAM).


Overview

The main functions of paging are performed when a program tries to access pages that are not currently mapped to physical memory (RAM). This situation is known as a page fault. The operating system must then take control and handle the page fault, in a manner invisible to the program. Therefore, the operating system must:

1. Determine the location of the data in auxiliary storage.
2. Obtain an empty page frame in RAM to use as a container for the data.
3. Load the requested data into the available page frame.
4. Update the page table to show the new data.
5. Return control to the program, transparently retrying the instruction that caused the page fault.
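These steps can be sketched in miniature, with a dict standing in for the page table and plain containers standing in for RAM frames and disk blocks; all names are illustrative, not any real kernel's API:

```python
# Toy page-fault handler following the five steps above. Real handlers
# run in the kernel with MMU support; this only mirrors the bookkeeping.

def handle_page_fault(page, page_table, frames, free_frames, disk):
    frame = free_frames.pop()        # step 2: obtain an empty page frame
    frames[frame] = disk[page]       # steps 1+3: locate on disk, load it
    page_table[page] = frame         # step 4: update the page table
    return frame                     # step 5: caller retries the access

disk = {7: "contents of page 7"}
page_table, frames, free = {}, {}, [0, 1, 2]
handle_page_fault(7, page_table, frames, free, disk)
print(page_table[7], frames[page_table[7]])  # frame 2 now holds page 7
```

A real handler must also cope with an empty free list, which is where the replacement algorithms discussed next come in.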

Because RAM is faster than auxiliary storage, paging is avoided until there is not enough RAM to store all the data needed. When this occurs, a page in RAM is moved to auxiliary storage, freeing up space in RAM. Thereafter, whenever the page in secondary storage is needed, a page in RAM is saved to auxiliary storage so that the requested page can be loaded into the space left behind by the old page. Efficient paging systems must determine which page to swap out by choosing one that is least likely to be needed within a short time; various page replacement algorithms try to do this.

Most operating systems use some approximation of the least recently used (LRU) page replacement algorithm (exact LRU cannot be implemented efficiently on current hardware) or a working-set-based algorithm.
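One widely taught LRU approximation is the second-chance, or clock, algorithm. A sketch, under the assumption of a per-frame reference bit that hardware sets on each access:

```python
# Second-chance (clock) replacement: sweep a "hand" over the frames,
# clearing reference bits; the first frame found with its bit already
# clear is the victim. A sketch, not any particular kernel's code.

def clock_evict(frames, ref_bits, hand):
    """Return (victim_frame_index, new_hand_position)."""
    while True:
        if ref_bits[hand]:
            ref_bits[hand] = 0              # give this page a second chance
            hand = (hand + 1) % len(frames)
        else:
            return hand, (hand + 1) % len(frames)

frames = ["A", "B", "C"]
ref_bits = [1, 0, 1]                        # B has not been touched recently
victim, hand = clock_evict(frames, ref_bits, hand=0)
print(frames[victim])  # B
```

The sweep costs at most one full revolution of the hand, which is why this approximation is practical where true LRU timestamps are not.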

If a page in RAM has been modified (i.e., the page is dirty) and is then chosen to be swapped out, it must be written back to auxiliary storage; an unmodified page can simply be discarded.

Page 12: Thrashing, Paging, Swap File

To further increase responsiveness, paging systems may employ various strategies to predict which pages will be needed soon, so that they can be preemptively loaded.

Demand paging

Main article: Demand paging

When demand paging is used, no preemptive loading takes place. Paging occurs only at the time of the data request, and not before. In particular, when a demand pager is used, a program usually begins execution with none of its pages pre-loaded in RAM. Pages are copied from the executable file into RAM the first time the executing code references them, usually in response to page faults. As such, much of the executable file might never be loaded into memory if its pages are never executed during that run.
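Demand paging in miniature: nothing is loaded up front, and a page is read from the backing store only on the first fault for it. The class and names below are hypothetical, for illustration only:

```python
# Toy demand pager: pages come in from the "executable file" lazily,
# on the first reference, never before.

class DemandPager:
    def __init__(self, backing_store):
        self.backing = backing_store     # page number -> page contents
        self.resident = {}               # pages loaded so far
        self.faults = 0

    def read(self, page):
        if page not in self.resident:    # page fault on first touch
            self.faults += 1
            self.resident[page] = self.backing[page]
        return self.resident[page]

pager = DemandPager({0: "init code", 1: "main loop", 2: "error handler"})
pager.read(0); pager.read(1); pager.read(1)
print(pager.faults, len(pager.resident))  # 2 faults; page 2 never loaded
```

Note that the error-handler page is never brought into memory because this run never touched it, which is exactly the saving the paragraph above describes.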

Anticipatory paging

This technique preloads a process's non-resident pages that are likely to be referenced in the near future (taking advantage of locality of reference). Such strategies attempt to reduce the number of page faults a process experiences.

Free page queue

The free page queue is a list of page frames that are available for assignment after a page fault. Some operating systems[NB 1] support page reclamation: if a page fault occurs for a page that had been stolen and the page frame was never reassigned, then the operating system avoids the necessity of reading the page back in by reassigning the unmodified page frame.

Page stealing

Some operating systems periodically look for pages that have not been recently referenced and add them to the free page queue, after paging them out if they have been modified.

Swap prefetch

A few operating systems use anticipatory paging, also called swap prefetch. These operating systems periodically attempt to guess which pages will soon be needed and start loading them into RAM. Various heuristics are in use, such as "if a program references one virtual address which causes a page fault, perhaps the next few pages' worth of virtual address space will soon be used" and "if one big program just finished execution, leaving lots of free RAM, perhaps the user will return to using some of the programs that were recently paged out".

Pre-cleaning

Unix operating systems periodically use sync to pre-clean all dirty pages, that is, to save all modified pages to hard disk. Windows operating systems do the same thing via "modified page writer" threads.

Page 13: Thrashing, Paging, Swap File

Pre-cleaning makes starting a new program or opening a new data file much faster. The hard drive can immediately seek to that file and consecutively read the whole file into pre-cleaned page frames. Without pre-cleaning, the hard drive is forced to seek back and forth between writing a dirty page frame to disk and then reading the next page of the file into that frame.

Thrashing

Main article: Thrashing (computer science)

Most programs reach a steady state in their demand for memory locality, both in terms of instructions fetched and data being accessed. This steady state is usually much smaller than the total memory required by the program. It is sometimes referred to as the working set: the set of memory pages that are most frequently accessed.

Virtual memory systems work most efficiently when the ratio of the working set to the total number of pages that can be stored in RAM is low enough that the time spent resolving page faults is not a dominant factor in the workload's performance. A program that works with huge data structures will sometimes require a working set that is too large to be managed efficiently by the paging system, resulting in constant page faults that drastically slow down the system. This condition is referred to as thrashing: pages are swapped out and then accessed again, causing frequent faults.

An interesting characteristic of thrashing is that as the working set grows, there is very little increase in the number of faults until a critical point is reached, at which faults go up dramatically and the majority of the system's processing power is spent handling them.
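That critical point can be reproduced with a toy LRU simulation; the reference string and frame counts below are invented purely for illustration:

```python
# Count page faults under LRU replacement for a given frame allocation.
# An OrderedDict keeps frames in recency order: oldest entry = LRU victim.
from collections import OrderedDict

def lru_faults(refs, num_frames):
    frames, faults = OrderedDict(), 0
    for page in refs:
        if page in frames:
            frames.move_to_end(page)        # mark as most recently used
        else:
            faults += 1
            if len(frames) >= num_frames:
                frames.popitem(last=False)  # evict least recently used
            frames[page] = True
    return faults

refs = [0, 1, 2, 3] * 25          # working set of 4 pages, 100 references
print(lru_faults(refs, 4))   # 4   (only the compulsory first-touch faults)
print(lru_faults(refs, 3))   # 100 (every reference faults: thrashing)
```

With one frame fewer than the working set, a cyclic access pattern makes LRU evict exactly the page that will be needed next, so the fault count jumps from the bare minimum to one fault per reference.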

An extreme example of this sort of situation occurred on the IBM System/360 Model 67 and IBM System/370 series mainframe computers, in which a particular instruction could consist of an execute instruction that crosses a page boundary, pointing to a move instruction that itself also crosses a page boundary, moving data from a source that crosses a page boundary to a target that also crosses a page boundary. The total number of pages used by this particular instruction is thus eight, and all eight pages must be present in memory at the same time. If the operating system allocates fewer than eight pages of actual memory in this example, then when it attempts to swap out some part of the instruction or data to bring in the remainder, the instruction will page fault again, and it will thrash on every attempt to restart the failing instruction.

To decrease excessive paging, and thus possibly resolve the thrashing problem, a user can do any of the following:

Increase the amount of RAM in the computer (generally the best long-term solution).
Decrease the number of programs being concurrently run on the computer.

Page 14: Thrashing, Paging, Swap File

The term thrashing is also used in contexts other than virtual memory systems, for example to describe cache issues in computing or silly window syndrome in networking.

Terminology

Historically, paging sometimes referred to a memory allocation scheme that used fixed-length pages as opposed to variable-length segments, without any implicit suggestion that virtual memory techniques were employed or that those pages were transferred to disk.[2][3] Such usage is rare today.

Some modern systems use the term swapping along with paging. Historically, swapping referred to moving a whole program at a time to or from secondary storage, in a scheme known as roll-in/roll-out.[4][5] In the 1960s, after the concept of virtual memory was introduced (in two variants, using either segments or pages), the term swapping was applied to moving, respectively, either segments or pages between disk and memory. Today, with virtual memory mostly based on pages rather than segments, swapping has become a fairly close synonym of paging, although with one difference.

In many popular systems there is a concept known as the page cache: the use of the same single mechanism for both virtual memory and disk caching. A page may then be transferred to or from any ordinary disk file, not necessarily a dedicated space. Page in means transferring a page from the disk to RAM; page out means transferring a page from RAM to the disk. Swap in and swap out refer only to transferring pages between RAM and dedicated swap space or a swap file, not to any other place on disk.

On Windows NT based systems, the dedicated swap space is known as a page file, and paging/swapping are often used interchangeably.

http://en.wikipedia.org/wiki/Paging

Page 15: Thrashing, Paging, Swap File

Paging

Introduction

Paging is another memory management technique, one that makes wide use of the virtual memory concept. When paging is used, the processor divides the linear address space into fixed-size pages (of 4 KBytes, 2 MBytes, or 4 MBytes in length) that can be mapped into physical memory and/or disk storage. When a program (or task) references a logical address in memory, the processor translates the address into a linear address and then uses its paging mechanism to translate the linear address into a corresponding physical address.

Linear Page Translation during Paging

If the page containing the linear address is not currently in physical memory, the processor generates a page-fault exception (#14). The exception handler for the page-fault exception typically directs the operating system to load the page from disk storage into physical memory. When the page has been loaded in physical memory, a return from the exception handler causes the instruction that generated the exception to be restarted. The information that the processor uses to map linear addresses into the physical address space and to generate page-fault exceptions (when necessary) is contained in page directories and page tables stored in memory. 
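For the common 32-bit, 4-KByte-page case, the way a linear address indexes the page directory and page table can be sketched as follows (this illustrates the standard IA-32 two-level bit layout, not code from the cited tutorial):

```python
# Split a 32-bit linear address for two-level IA-32 paging with 4 KB pages:
# bits 31-22 index the page directory, bits 21-12 index the page table,
# and bits 11-0 are the byte offset within the page.

def split_linear_address(addr):
    pde_index = (addr >> 22) & 0x3FF   # top 10 bits
    pte_index = (addr >> 12) & 0x3FF   # next 10 bits
    offset = addr & 0xFFF              # low 12 bits
    return pde_index, pte_index, offset

print(split_linear_address(0x00403025))  # (1, 3, 37), i.e. offset 0x025
```

The processor walks these two levels in hardware: the page-directory entry yields the page table's base address, the page-table entry yields the page frame, and the offset selects the byte within that frame.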

Page-Directory & Page-Table Entries

Page 16: Thrashing, Paging, Swap File

 

Advantages of paging

Address translation: each task has the same virtual address space.
Address translation: turns fragmented physical addresses into contiguous virtual addresses.
Memory protection (buggy or malicious tasks can't harm each other or the kernel).
Shared memory between tasks (a fast type of IPC; also conserves memory when used for DLLs).
Demand loading (prevents a big load on the CPU when a task first starts running; conserves memory).
Memory mapped files.
Virtual memory swapping (lets the system degrade gracefully when the memory required exceeds RAM size).

http://viralpatel.net/taj/tutorial/paging.php

Page 17: Thrashing, Paging, Swap File

Virtual memory

From Wikipedia, the free encyclopedia

This article is about the computational technique. For the TBN game show, see Virtual Memory (game show).


The program thinks it has a large range of contiguous addresses, but in reality the parts it is currently using are scattered around RAM, and the inactive parts are saved in a disk file.

In computing, virtual memory is a memory management technique developed for multitasking kernels. This technique virtualizes a computer architecture's various hardware memory devices (such as RAM modules and disk storage drives), allowing a program to be designed as though:

there is only one hardware memory device, and this "virtual" device acts like a RAM module.
the program has, by default, sole access to this virtual RAM module as the basis for a contiguous working memory (an address space).

Systems that employ virtual memory:

use hardware memory more efficiently than systems without virtual memory.
make the programming of applications easier by:
hiding fragmentation.
delegating to the kernel the burden of managing the memory hierarchy (there is no need for the program to handle overlays explicitly).
obviating the need to relocate program code or to access memory with relative addressing.

Memory virtualization is a generalization of the concept of virtual memory.

Virtual memory is an integral part of a computer architecture; all implementations (excluding emulators and virtual machines) require hardware support, typically in the form of a memory management unit built into the CPU. Consequently, older operating systems (such as DOS[1] of the 1980s, or those for the mainframes of the 1960s) generally have no virtual memory functionality, though notable exceptions include the Atlas, B5000, IBM System/360 Model 67, and IBM System/370 mainframe systems of the early 1970s, and the Apple Lisa project circa 1980.

Embedded systems and other special-purpose computer systems that require very fast and/or very consistent response times may opt not to use virtual memory due to decreased determinism; virtual memory systems trigger unpredictable interrupts that may produce unwanted "jitter" during I/O operations. This is because embedded hardware costs are often kept low by implementing all such operations with software (a technique called bit-banging) rather than with dedicated hardware. In any case, embedded systems usually have little use for complicated memory hierarchies.

en.wikipedia.org/wiki/Virtual_memory

Page 19: Thrashing, Paging, Swap File

What is a swap file?

A swap file allows an operating system to use hard disk space to simulate extra memory. When the system runs low on memory, it swaps a section of RAM that an idle program is using onto the hard disk to free up memory for other programs. Then when you go back to the swapped out program, it changes places with another program in RAM. This causes a large amount of hard disk reading and writing that slows down your computer considerably.

This combination of RAM and swap files is known as virtual memory. The use of virtual memory allows your computer to run more programs than it could run in RAM alone.

The way swap files are implemented depends on the operating system. Some operating systems, like Windows, can be configured to use temporary swap files that they create when necessary. The disk space is then released when it is no longer needed. Other operating systems, like Linux and Unix, set aside a permanent swap space that reserves a certain portion of your hard disk.

Permanent swap files take a contiguous section of your hard disk while some temporary swap files can use fragmented hard disk space. This means that using a permanent swap file will usually be faster than using a temporary one. Temporary swap files are more useful if you are low on disk space because they don't permanently reserve part of your hard disk.

https://kb.iu.edu/data/aagb.html

Page 20: Thrashing, Paging, Swap File

Swap file

Alternatively referred to as a page file or paging file, a swap file is a file stored on the computer's hard disk drive that is used as a temporary location to store information not currently being used by the computer's RAM. By using a swap file, a computer has the ability to use more memory than is physically installed. However, users who are low on hard disk space may notice that the computer runs slower because of the inability of the swap file to grow in size.

It is perfectly normal for the swap file or page file to grow in size, sometimes by several hundred megabytes. Below is a listing of common Microsoft operating system swap file information; however, it is important to realize that this information may vary. Finally, by default the swap files are hidden.

http://www.computerhope.com/jargon/s/swapfile.htm