Storage and File Structure

Upload: dilum-maddumage

Post on 16-Oct-2015


DESCRIPTION

Data Storage and File structures

TRANSCRIPT

  • 5/26/2018 Storage and File Structure

    1/85

    Database System Concepts, 5th Ed.

    Silberschatz, Korth and Sudarshan

    See www.db-book.com for conditions on re-use

    Storage and File Structure

    CS 3040

    09/10/2009

    http://www.db-book.com/

    Chapter 11: Storage and File Structure

    Overview of Physical Storage Media

    Magnetic Disks

    RAID

    Tertiary Storage

    Storage Access

    File Organization

    Organization of Records in Files

    Data-Dictionary Storage

    Storage Structures for Object-Oriented Databases


    Classification of Physical Storage Media

    Speed with which data can be accessed

    Cost per unit of data

    Reliability

    data loss on power failure or system crash

    physical failure of the storage device

    Can differentiate storage into:

    volatile storage: loses contents when power is switched off

    non-volatile storage:

    Contents persist even when power is switched off.

    Includes secondary and tertiary storage, as well as battery-backed-up main memory.


    Physical Storage Media

    Cache: fastest and most costly form of storage; volatile; managed by the computer system hardware.

    Main memory:

    fast access (10s to 100s of nanoseconds; 1 nanosecond = 10⁻⁹ seconds)

    generally too small (or too expensive) to store the entire database

    capacities of up to a few gigabytes widely used currently

    Capacities have gone up and per-byte costs have decreased steadily and rapidly (roughly a factor of 2 every 2 to 3 years)

    Volatile: contents of main memory are usually lost on a power failure or system crash.


    Physical Storage Media (Cont.)

    Flash memory

    Data survives power failure

    Data can be written at a location only once, but the location can be erased and written to again

    Can support only a limited number (10K to 1M) of write/erase cycles

    Erasing of memory has to be done to an entire bank of memory

    Reads are roughly as fast as main memory

    But writes are slow (a few microseconds); erase is slower

    Cost per unit of storage roughly similar to main memory

    Widely used in embedded devices such as digital cameras

    Is a type of EEPROM (Electrically Erasable Programmable Read-Only Memory)


    Physical Storage Media (Cont.)

    Magnetic disk

    Data is stored on a spinning disk, and read/written magnetically

    Primary medium for the long-term storage of data; typically stores the entire database

    Data must be moved from disk to main memory for access, and written back for storage

    Much slower access than main memory (more on this later)

    direct access: possible to read data on disk in any order, unlike magnetic tape

    Capacities range up to roughly 400 GB currently

    Much larger capacity and lower cost/byte than main memory/flash memory

    Growing constantly and rapidly with technology improvements (factor of 2 to 3 every 2 years)

    Survives power failures and system crashes

    disk failure can destroy data, but this is rare


    Physical Storage Media (Cont.)

    Optical storage

    non-volatile; data is read optically from a spinning disk using a laser

    CD-ROM (640 MB) and DVD (4.7 to 17 GB) most popular forms

    Write-once, read-many (WORM) optical disks used for archival storage (CD-R, DVD-R, DVD+R)

    Multiple-write versions also available (CD-RW, DVD-RW, DVD+RW, and DVD-RAM)

    Reads and writes are slower than with magnetic disk

    Juke-box systems, with large numbers of removable disks, a few drives, and a mechanism for automatic loading/unloading of disks, available for storing large volumes of data


    Physical Storage Media (Cont.)

    Tape storage

    non-volatile, used primarily for backup (to recover from disk failure), and for archival data

    sequential access: much slower than disk

    very high capacity (40 to 300 GB tapes available)

    tape can be removed from drive

    storage costs much cheaper than disk, but drives are expensive

    Tape jukeboxes available for storing massive amounts of data

    hundreds of terabytes (1 terabyte = 10¹² bytes) to even a petabyte (1 petabyte = 10¹⁵ bytes)


    Storage Hierarchy


    Storage Hierarchy (Cont.)

    primary storage: fastest media but volatile (cache, main memory)

    secondary storage: next level in hierarchy, non-volatile, moderately fast access time

    also called on-line storage

    E.g. flash memory, magnetic disks

    tertiary storage: lowest level in hierarchy, non-volatile, slow access time

    also called off-line storage

    E.g. magnetic tape, optical storage


    Magnetic Hard Disk Mechanism

    NOTE: Diagram is schematic, and simplifies the structure of actual disk drives


    Magnetic Disks

    Read-write head

    Positioned very close to the platter surface (almost touching it)

    Reads or writes magnetically encoded information.

    Surface of platter divided into circular tracks

    50K to 100K tracks per platter on typical hard disks

    Each track is divided into sectors.

    A sector is the smallest unit of data that can be read or written.

    Sector size typically 512 bytes

    Typical sectors per track: 500 (on inner tracks) to 1000 (on outer tracks)

    To read/write a sector

    disk arm swings to position head on right track

    platter spins continually; data is read/written as sector passes under head

    Head-disk assemblies

    multiple disk platters on a single spindle (1 to 5 usually)

    one head per platter, mounted on a common arm.

    Cylinder i consists of the ith track of all the platters


    Magnetic Disks (Cont.)

    Earlier-generation disks were susceptible to head crashes

    The surface of earlier-generation disks had metal-oxide coatings which would disintegrate on a head crash and damage all data on the disk

    Current-generation disks are less susceptible to such disastrous failures, although individual sectors may get corrupted

    Disk controller: interfaces between the computer system and the disk-drive hardware

    accepts high-level commands to read or write a sector

    initiates actions such as moving the disk arm to the right track and actually reading or writing the data

    Computes and attaches checksums to each sector to verify that data is read back correctly

    If data is corrupted, with very high probability the stored checksum won't match the recomputed checksum

    Ensures successful writing by reading back the sector after writing it

    Performs remapping of bad sectors


    Disk Subsystem

    Multiple disks connected to a computer system through a controller

    Controller functionality (checksum, bad-sector remapping) often carried out by individual disks; reduces load on controller

    Disk interface standards families

    ATA (AT adaptor) range of standards

    SATA (Serial ATA)

    SCSI (Small Computer System Interconnect) range of standards

    Several variants of each standard (different speeds and capabilities)


    Performance Measures of Disks

    Access time: the time from when a read or write request is issued to when data transfer begins. Consists of:

    Seek time: time it takes to reposition the arm over the correct track

    Average seek time is 1/2 the worst-case seek time

    Would be 1/3 if all tracks had the same number of sectors, and we ignore the time to start and stop arm movement

    4 to 10 milliseconds on typical disks

    Rotational latency: time it takes for the sector to be accessed to appear under the head

    Average latency is 1/2 of the worst-case latency

    4 to 11 milliseconds on typical disks (5400 to 15000 r.p.m.)

    Data-transfer rate: the rate at which data can be retrieved from or stored to the disk

    25 to 100 MB per second max rate; lower for inner tracks

    Multiple disks may share a controller, so the rate the controller can handle is also important

    E.g. ATA-5: 66 MB/s, SATA: 150 MB/s, Ultra320 SCSI: 320 MB/s

    Fibre Channel (FC 2 Gb): 256 MB/s
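The components above can be combined into a rough per-block access-time estimate. The figures below are illustrative assumptions drawn from the ranges quoted on this slide, not measurements of any specific drive:

```python
# Rough disk access-time estimate (illustrative assumptions from the
# ranges on this slide, not a specific drive).
avg_seek_ms = 8.0                            # within the 4-10 ms range
rpm = 7200
avg_rotational_ms = 0.5 * (60_000 / rpm)     # half a revolution, in ms
transfer_rate_mb_s = 50.0                    # within the 25-100 MB/s range
block_kb = 4

transfer_ms = (block_kb / 1024) / transfer_rate_mb_s * 1000
access_ms = avg_seek_ms + avg_rotational_ms + transfer_ms
print(f"avg rotational latency: {avg_rotational_ms:.2f} ms")
print(f"time to read one 4 KB block: {access_ms:.2f} ms")
```

Note how seek and rotational latency dominate: the actual 4 KB transfer takes well under a tenth of a millisecond, which is why block-access optimizations (discussed shortly) focus on reducing seeks.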


    Performance Measures (Cont.)

    Mean time to failure (MTTF): the average time the disk is expected to run continuously without any failure

    Typically 3 to 5 years

    Probability of failure of new disks is quite low, corresponding to a theoretical MTTF of 500,000 to 1,200,000 hours for a new disk

    E.g., an MTTF of 1,200,000 hours for a new disk means that given 1000 relatively new disks, on average one will fail every 1200 hours

    MTTF decreases as the disk ages
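The 1000-disk example above follows from treating MTTF as a failure rate: rates of independent disks add, so the fleet's expected time between failures shrinks in proportion to the number of disks.

```python
# Why an MTTF of 1,200,000 hours implies roughly one failure every
# 1,200 hours among 1,000 new disks: failure rates of independent
# disks add up.
mttf_hours = 1_200_000
n_disks = 1_000

failure_rate_per_disk = 1 / mttf_hours        # expected failures per hour
fleet_failure_rate = n_disks * failure_rate_per_disk
hours_between_failures = 1 / fleet_failure_rate
print(hours_between_failures)                 # 1200.0
```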


    Optimization of Disk-Block Access

    Block: a contiguous sequence of sectors from a single track

    data is transferred between disk and main memory in blocks

    sizes range from 512 bytes to several kilobytes

    Smaller blocks: more transfers from disk

    Larger blocks: more space wasted due to partially filled blocks

    Typical block sizes today range from 4 to 16 kilobytes

    Disk-arm-scheduling algorithms order pending accesses to tracks so that disk-arm movement is minimized

    elevator algorithm: move the disk arm in one direction (from outer to inner tracks or vice versa), processing the next request in that direction, till no more requests remain in that direction; then reverse direction and repeat
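The elevator algorithm above can be sketched as a simple ordering of pending track requests (a toy model: requests are just track numbers, and we ignore requests arriving mid-sweep):

```python
# Sketch of the elevator (SCAN) disk-arm scheduling algorithm:
# service pending track requests in one direction, then reverse.
def elevator_order(head: int, requests: list[int]) -> list[int]:
    up = sorted(t for t in requests if t >= head)               # outward sweep
    down = sorted((t for t in requests if t < head), reverse=True)
    return up + down                                            # then reverse

print(elevator_order(50, [95, 10, 60, 40, 80]))  # [60, 80, 95, 40, 10]
```

Compare with servicing requests in arrival order, which would swing the arm back and forth across the platter.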


    Optimization of Disk Block Access (Cont.)

    File organization: optimize block access time by organizing the blocks to correspond to how data will be accessed

    E.g. store related information on the same or nearby cylinders

    Files may get fragmented over time

    E.g. if data is inserted into/deleted from the file

    Or free blocks on disk are scattered, and a newly created file has its blocks scattered over the disk

    Sequential access to a fragmented file results in increased disk-arm movement

    Some systems have utilities to defragment the file system, in order to speed up file access


    Optimization of Disk Block Access (Cont.)

    Nonvolatile write buffers: speed up disk writes by writing blocks to a non-volatile RAM buffer immediately

    Non-volatile RAM: battery-backed-up RAM or flash memory

    Even if power fails, the data is safe and will be written to disk when power returns

    Controller then writes to disk whenever the disk has no other requests or a request has been pending for some time

    Database operations that require data to be safely stored before continuing can continue without waiting for the data to be written to disk

    Writes can be reordered to minimize disk-arm movement

    Log disk: a disk devoted to writing a sequential log of block updates

    Used exactly like nonvolatile RAM

    Write to log disk is very fast since no seeks are required

    No need for special hardware (NV-RAM)

    File systems typically reorder writes to disk to improve performance

    Journaling file systems write data in safe order to NV-RAM or log disk

    Reordering without journaling: risk of corruption of file-system data


    RAID

    RAID: Redundant Arrays of Independent Disks

    disk organization techniques that manage a large number of disks, providing a view of a single disk of

    high capacity and high speed by using multiple disks in parallel, and

    high reliability by storing data redundantly, so that data can be recovered even if a disk fails

    The chance that some disk out of a set of N disks will fail is much higher than the chance that a specific single disk will fail

    E.g., a system with 100 disks, each with an MTTF of 100,000 hours (approx. 11 years), will have a system MTTF of 1000 hours (approx. 41 days)

    Techniques for using redundancy to avoid data loss are critical with large numbers of disks

    Originally a cost-effective alternative to large, expensive disks

    The "I" in RAID originally stood for "inexpensive"

    Today RAIDs are used for their higher reliability and bandwidth

    The "I" is interpreted as "independent"


    Improvement of Reliability via Redundancy

    Redundancy: store extra information that can be used to rebuild information lost in a disk failure

    E.g., mirroring (or shadowing)

    Duplicate every disk. A logical disk consists of two physical disks.

    Every write is carried out on both disks

    Reads can take place from either disk

    If one disk in a pair fails, the data is still available on the other

    Data loss would occur only if a disk fails, and its mirror disk also fails before the system is repaired

    Probability of the combined event is very small

    Except for dependent failure modes such as fire, building collapse, or electrical power surges

    Mean time to data loss depends on mean time to failure and mean time to repair

    E.g. MTTF of 100,000 hours and mean time to repair of 10 hours gives a mean time to data loss of 500 × 10⁶ hours (or 57,000 years) for a mirrored pair of disks (ignoring dependent failure modes)
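The 500 × 10⁶ figure above matches the standard independence model for a mirrored pair (an assumption the slide does not state explicitly): data is lost only when the second disk fails within the repair window of the first, giving MTDL ≈ MTTF² / (2 × MTTR).

```python
# Mean time to data loss for a mirrored pair, using the slide's figures
# and the standard independence model MTDL ~= MTTF^2 / (2 * MTTR).
mttf = 100_000   # hours per disk
mttr = 10        # hours to repair/replace a failed disk

mtdl_hours = mttf ** 2 / (2 * mttr)
mtdl_years = mtdl_hours / (24 * 365)
print(f"{mtdl_hours:.0f} hours, about {mtdl_years:.0f} years")
```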


    Improvement in Performance via Parallelism

    Two main goals of parallelism in a disk system:

    1. Load-balance multiple small accesses to increase throughput

    2. Parallelize large accesses to reduce response time

    Improve transfer rate by striping data across multiple disks

    Bit-level striping: split the bits of each byte across multiple disks

    In an array of eight disks, write bit i of each byte to disk i

    Each access can read data at eight times the rate of a single disk

    But seek/access time worse than for a single disk

    Bit-level striping is not used much any more

    Block-level striping: with n disks, block i of a file goes to disk (i mod n) + 1

    Requests for different blocks can run in parallel if the blocks reside on different disks

    A request for a long sequence of blocks can utilize all disks in parallel
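The block-placement rule above is a one-liner; with it, consecutive blocks rotate across all disks, which is what lets a long sequential read keep every disk busy:

```python
# Block-level striping: with n disks, block i of a file goes to
# disk (i mod n) + 1 (disks numbered 1..n).
def disk_for_block(i: int, n: int) -> int:
    return (i % n) + 1

# With 4 disks, consecutive blocks rotate across all of them:
print([disk_for_block(i, 4) for i in range(8)])  # [1, 2, 3, 4, 1, 2, 3, 4]
```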


    RAID Levels

    Schemes to provide redundancy at lower cost by using disk striping combined with parity bits

    Different RAID organizations, or RAID levels, have differing cost, performance, and reliability characteristics

    RAID Level 1: Mirrored disks with block striping

    Offers best write performance

    Popular for applications such as storing log files in a database system

    RAID Level 0: Block striping; non-redundant

    Used in high-performance applications where data loss is not critical


    RAID Levels (Cont.)

    RAID Level 2: Memory-style error-correcting codes (ECC) with bit striping

    RAID Level 3: Bit-interleaved parity

    a single parity bit is enough for error correction, not just detection, since we know which disk has failed

    When writing data, the corresponding parity bits must also be computed and written to a parity-bit disk

    To recover data on a damaged disk, compute the XOR of bits from the other disks (including the parity-bit disk)
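XOR parity recovery, as described above, works because XOR-ing the surviving disks with the parity cancels out everything except the lost disk's bits. A minimal sketch:

```python
# XOR parity as used by RAID levels 3-5: the parity byte is the XOR of
# the corresponding bytes on the data disks, so any single lost disk
# can be rebuilt by XOR-ing the survivors with the parity.
from functools import reduce

def xor_bytes(*chunks: bytes) -> bytes:
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*chunks))

d0, d1, d2 = b"\x0f\xaa", b"\xf0\x55", b"\x3c\xc3"   # three data "disks"
parity = xor_bytes(d0, d1, d2)

# Disk 1 fails; rebuild its contents from the others plus parity:
recovered = xor_bytes(d0, d2, parity)
assert recovered == d1
```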


    RAID Levels (Cont.)

    RAID Level 3 (Cont.)

    Faster data transfer than with a single disk, but fewer I/Os per second since every disk has to participate in every I/O

    Subsumes Level 2 (provides all its benefits, at lower cost)

    RAID Level 4: Block-interleaved parity; uses block-level striping, and keeps a parity block on a separate disk for corresponding blocks from N other disks

    When writing a data block, the corresponding block of parity bits must also be computed and written to the parity disk

    To find the value of a damaged block, compute the XOR of bits from corresponding blocks (including the parity block) from the other disks


    RAID Levels (Cont.)

    RAID Level 4 (Cont.)

    Provides higher I/O rates for independent block reads than Level 3

    a block read goes to a single disk, so blocks stored on different disks can be read in parallel

    Provides higher transfer rates for reads of multiple blocks than no striping

    Before writing a block, parity data must be computed

    Can be done by using the old parity block, the old value of the current block, and the new value of the current block (2 block reads + 2 block writes)

    Or by recomputing the parity value using the new values of all blocks corresponding to the parity block

    More efficient for writing large amounts of data sequentially

    The parity block becomes a bottleneck for independent block writes since every block write also writes to the parity disk
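The "2 block reads + 2 block writes" update above rests on a small XOR identity: new parity = old parity XOR old block XOR new block, so a single-block write never needs to read the other data disks.

```python
# RAID 4/5 small-write parity update:
#   new_parity = old_parity XOR old_block XOR new_block
def update_parity(old_parity: bytes, old_block: bytes, new_block: bytes) -> bytes:
    return bytes(p ^ o ^ n for p, o, n in zip(old_parity, old_block, new_block))

old_parity = bytes([0b1010])   # XOR of all data blocks before the write
old_block  = bytes([0b0011])
new_block  = bytes([0b0101])
print(update_parity(old_parity, old_block, new_block))  # bytes([0b1100])
```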


    RAID Levels (Cont.)

    RAID Level 5: Block-interleaved distributed parity; partitions data and parity among all N + 1 disks, rather than storing data on N disks and parity on 1 disk

    E.g., with 5 disks, the parity block for the nth set of blocks is stored on disk (n mod 5) + 1, with the data blocks stored on the other 4 disks


    RAID Levels (Cont.)

    RAID Level 5 (Cont.)

    Higher I/O rates than Level 4

    Block writes occur in parallel if the blocks and their parity blocks are on different disks

    Subsumes Level 4: provides the same benefits, but avoids the bottleneck of the parity disk

    RAID Level 6: P+Q redundancy scheme; similar to Level 5, but stores extra redundant information to guard against multiple disk failures

    Better reliability than Level 5 at a higher cost; not used as widely


    Choice of RAID Level

    Factors in choosing a RAID level

    Monetary cost

    Performance: number of I/O operations per second, and bandwidth during normal operation

    Performance during failure

    Performance during rebuild of failed disk

    Including time taken to rebuild the failed disk

    RAID 0 is used only when data safety is not important

    E.g. data can be recovered quickly from other sources

    Levels 2 and 4 are never used since they are subsumed by 3 and 5

    Level 3 is not used anymore since bit striping forces single-block reads to access all disks, wasting disk-arm movement, which block striping (Level 5) avoids

    Level 6 is rarely used since Levels 1 and 5 offer adequate safety for almost all applications

    So the competition is between 1 and 5 only


    Choice of RAID Level (Cont.)

    Level 1 provides much better write performance than Level 5

    Level 5 requires at least 2 block reads and 2 block writes to write a single block, whereas Level 1 only requires 2 block writes

    Level 1 preferred for high-update environments such as log disks

    Level 1 has higher storage cost than Level 5

    disk-drive capacities are increasing rapidly (50%/year) whereas disk access times have decreased much less (3x in 10 years)

    I/O requirements have increased greatly, e.g. for Web servers

    When enough disks have been bought to satisfy the required rate of I/O, they often have spare storage capacity

    so there is often no extra monetary cost for Level 1!

    Level 5 is preferred for applications with a low update rate and large amounts of data

    Level 1 is preferred for all other applications


    Hardware Issues

    Software RAID: RAID implementations done entirely in software, with no special hardware support

    Hardware RAID: RAID implementations with special hardware

    Use non-volatile RAM to record writes that are being executed

    Beware: a power failure during a write can result in a corrupted disk

    E.g. failure after writing one block but before writing the second in a mirrored system

    Such corrupted data must be detected when power is restored

    Recovery from corruption is similar to recovery from a failed disk

    NV-RAM helps to efficiently detect potentially corrupted blocks

    Otherwise all blocks of the disk must be read and compared with the mirror/parity blocks


    Hardware Issues (Cont.)

    Hot swapping: replacement of a disk while the system is running, without powering down

    Supported by some hardware RAID systems

    reduces time to recovery, and improves availability greatly

    Many systems maintain spare disks which are kept online, and used as replacements for failed disks immediately on detection of failure

    Reduces time to recovery greatly

    Many hardware RAID systems ensure that a single point of failure will not stop the functioning of the system by using

    Redundant power supplies with battery backup

    Multiple controllers and multiple interconnections to guard against controller/interconnection failures


    Optical Disks

    Compact disc read-only memory (CD-ROM)

    Removable disks, 640 MB per disk

    Seek time about 100 msec (optical read head is heavier and slower)

    Higher latency (3000 RPM) and lower data-transfer rates (3-6 MB/s) compared to magnetic disks

    Digital Video Disk (DVD)

    DVD-5 holds 4.7 GB, and DVD-9 holds 8.5 GB

    DVD-10 and DVD-18 are double-sided formats with capacities of 9.4 GB and 17 GB

    Slow seek time, for the same reasons as CD-ROM

    Record-once versions (CD-R and DVD-R) are popular

    data can only be written once, and cannot be erased

    high capacity and long lifetime; used for archival storage

    Multi-write versions (CD-RW, DVD-RW, DVD+RW, and DVD-RAM) also available


    Magnetic Tapes

    Hold large volumes of data and provide high transfer rates

    Few GB for DAT (Digital Audio Tape) format, 10-40 GB with DLT (Digital Linear Tape) format, 100 GB+ with Ultrium format, and 330 GB with Ampex helical-scan format

    Transfer rates from a few to 10s of MB/s

    Currently the cheapest storage medium

    Tapes are cheap, but the cost of drives is very high

    Very slow access time in comparison to magnetic disks and optical disks

    limited to sequential access

    Some formats (Accelis) provide faster seek (10s of seconds) at the cost of lower capacity

    Used mainly for backup, for storage of infrequently used information, and as an off-line medium for transferring information from one system to another

    Tape jukeboxes used for very large capacity storage

    (terabyte (10¹² bytes) to petabyte (10¹⁵ bytes))


    Storage Access

    A database file is partitioned into fixed-length storage units called blocks. Blocks are units of both storage allocation and data transfer.

    The database system seeks to minimize the number of block transfers between the disk and memory. We can reduce the number of disk accesses by keeping as many blocks as possible in main memory.

    Buffer: portion of main memory available to store copies of disk blocks

    Buffer manager: subsystem responsible for allocating buffer space in main memory


    Buffer Manager

    Programs call on the buffer manager when they need a block from disk.

    1. If the block is already in the buffer, the buffer manager returns the address of the block in main memory

    2. If the block is not in the buffer, the buffer manager

    1. Allocates space in the buffer for the block

    Replacing (throwing out) some other block, if required, to make space for the new block

    The replaced block is written back to disk only if it was modified since the most recent time that it was written to/fetched from the disk

    2. Reads the block from the disk to the buffer, and returns the address of the block in main memory to the requester
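The lookup-or-fetch protocol above can be sketched as a toy buffer manager. This is an illustrative model, not a real implementation: actual buffer managers also track pins and concurrency, and (as the next slides show) may use policies other than the LRU eviction assumed here.

```python
# Toy buffer manager following the two cases above, with LRU eviction
# via an ordered dict. Dirty blocks are written back only on eviction.
from collections import OrderedDict

class BufferManager:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.frames = OrderedDict()          # block_id -> contents (LRU order)
        self.dirty = set()

    def mark_dirty(self, block_id):
        self.dirty.add(block_id)

    def get_block(self, block_id, read_from_disk, write_to_disk):
        if block_id in self.frames:                  # case 1: already buffered
            self.frames.move_to_end(block_id)        # now most recently used
            return self.frames[block_id]
        if len(self.frames) >= self.capacity:        # case 2a: evict a victim
            victim, contents = self.frames.popitem(last=False)
            if victim in self.dirty:                 # write back only if modified
                write_to_disk(victim, contents)
                self.dirty.discard(victim)
        contents = read_from_disk(block_id)          # case 2b: fetch and return
        self.frames[block_id] = contents
        return contents
```

A caller supplies `read_from_disk` / `write_to_disk` callbacks; in a real system these would be actual block I/O routines.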


    Buffer-Replacement Policies

    Most operating systems replace the block least recently used (LRU strategy)

    Idea behind LRU: use the past pattern of block references as a predictor of future references

    Queries have well-defined access patterns (such as sequential scans), and a database system can use the information in a user's query to predict future references

    LRU can be a bad strategy for certain access patterns involving repeated scans of data

    For example: when computing the join of two relations r and s by a nested loop:

    for each tuple tr of r do
        for each tuple ts of s do
            if the tuples tr and ts match ...

    Mixed strategy with hints on replacement strategy provided by the query optimizer is preferable



    Buffer-Replacement Policies (Cont.)

    Pinned block: memory block that is not allowed to be written back to disk

    Toss-immediate strategy: frees the space occupied by a block as soon as the final tuple of that block has been processed

    Most recently used (MRU) strategy: system must pin the block currently being processed. After the final tuple of that block has been processed, the block is unpinned, and it becomes the most recently used block.

    Buffer manager can use statistical information regarding the probability that a request will reference a particular relation

    E.g., the data dictionary is frequently accessed. Heuristic: keep data-dictionary blocks in the main-memory buffer

    Buffer managers also support forced output of blocks for the purpose of recovery (more in Chapter 17)



    File Organization

    The database is stored as a collection of files. Each file is a sequence of records. A record is a sequence of fields.

    One approach:

    assume record size is fixed

    each file has records of one particular type only

    different files are used for different relations

    This case is easiest to implement; we will consider variable-length records later.



    Fixed-Length Records

    Simple approach:

    Store record i starting from byte n * (i - 1), where n is the size of each record

    Record access is simple, but records may cross blocks

    Modification: do not allow records to cross block boundaries

    Deletion of record i: alternatives:

    move records i + 1, ..., n to i, ..., n - 1

    move record n to i

    do not move records, but link all free records on a free list
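The byte-offset formula above is what makes fixed-length record access simple: locating any record is pure arithmetic, with no scanning.

```python
# Fixed-length record addressing: record i (numbered from 1) starts at
# byte n * (i - 1), where n is the record size.
def record_offset(i: int, record_size: int) -> int:
    return record_size * (i - 1)

# With 40-byte records, record 3 starts at byte 80:
print(record_offset(3, 40))  # 80
```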



    Free Lists

    Store the address of the first deleted record in the file header.

    Use this first record to store the address of the second deleted record,and so on

    Can think of these stored addresses as pointers since they point to the location of a record.

    More space-efficient representation: reuse the space for normal attributes of free records to store pointers. (No pointers are stored in in-use records.)



    Variable-Length Records

    Variable-length records arise in database systems in several ways:

    Storage of multiple record types in a file

    Record types that allow variable lengths for one or more fields

    Record types that allow repeating fields (used in some older data models)


    Variable-Length Records: Slotted Page Structure

    Slotted page header contains:

    number of record entries

    end of free space in the block

    location and size of each record

    Records can be moved around within a page to keep them contiguous with no empty space between them; the entry in the header must be updated.

    Pointers should not point directly to a record; instead they should point to the entry for the record in the header.
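A minimal sketch of a slotted page, assuming the slot array is modeled as a Python list rather than packed header bytes; slot numbers stay stable even if records are compacted, which is why external pointers reference the slot, not the byte offset.

```python
# Slotted-page model: records grow from the end of the page, the header's
# slot array (offset, length) grows from the front, free space in between.

class SlottedPage:
    def __init__(self, size=4096):
        self.data = bytearray(size)
        self.free_end = size          # header field: end of free space
        self.slots = []               # header entries: (offset, length)

    def insert(self, record: bytes):
        self.free_end -= len(record)
        self.data[self.free_end:self.free_end + len(record)] = record
        self.slots.append((self.free_end, len(record)))
        return len(self.slots) - 1    # external pointers use this slot number

    def read(self, slot_no):
        off, length = self.slots[slot_no]
        return bytes(self.data[off:off + length])
```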


    Organization of Records in Files

    Heap: a record can be placed anywhere in the file where there is space.

    Sequential: store records in sequential order, based on the value of the search key of each record.

    Hashing: a hash function is computed on some attribute of each record; the result specifies in which block of the file the record should be placed.

    Records of each relation may be stored in a separate file. In a multitable clustering file organization, records of several different relations can be stored in the same file.

    Motivation: store related records on the same block to minimize I/O.
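For the hashing organization, a toy sketch: a hash of the search-key attribute picks the block a record is placed in. The block count and hash function below are arbitrary choices for illustration.

```python
# Hashed file organization: the block a record goes to is determined by
# hashing its search-key attribute, not by insertion order.

NUM_BLOCKS = 8

def block_for(key: str) -> int:
    # A simple deterministic hash; real systems use stronger functions.
    return sum(ord(c) for c in key) % NUM_BLOCKS

blocks = [[] for _ in range(NUM_BLOCKS)]
for account_number in ("A-101", "A-102", "A-201"):
    blocks[block_for(account_number)].append(account_number)
```

A later lookup recomputes block_for(key) and scans only that one block.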


    Sequential File Organization

    Suitable for applications that require sequential processing of the entire file.

    The records in the file are ordered by a search key.


    Sequential File Organization (Cont.)

    Deletion: use pointer chains.

    Insertion: locate the position where the record is to be inserted.

    If there is free space, insert there.

    If there is no free space, insert the record in an overflow block.

    In either case, the pointer chain must be updated.

    Need to reorganize the file from time to time to restore sequential order.


    Multitable Clustering File Organization

    Store several relations in one file using a multitable clustering file organization.


    Multitable Clustering File Organization (cont.)

    Multitable clustering organization of customer and depositor:

    good for queries involving depositor ⋈ customer, and for queries involving one single customer and his accounts

    bad for queries involving only customer

    results in variable-size records

    Can add pointer chains to link records of a particular relation.


    Data Dictionary Storage

    Data dictionary (also called system catalog) stores metadata; that is, data about data, such as:

    Information about relations

    names of relations

    names and types of attributes of each relation

    names and definitions of views

    integrity constraints

    User and accounting information, including passwords

    Statistical and descriptive data

    number of tuples in each relation

    Physical file organization information

    how a relation is stored (sequential/hash/...)

    physical location of the relation

    Information about indices (Chapter 12)


    Data Dictionary Storage (Cont.)

    Catalog structure:

    relational representation on disk

    specialized data structures designed for efficient access, in memory

    A possible catalog representation:

    Relation_metadata = (relation_name, number_of_attributes, storage_organization, location)

    Attribute_metadata = (attribute_name, relation_name, domain_type, position, length)

    User_metadata = (user_name, encrypted_password, group)

    Index_metadata = (index_name, relation_name, index_type, index_attributes)

    View_metadata = (view_name, definition)


    End of Chapter 11


    Record Representation

    Records with fixed-length fields are easy to represent.

    Similar to records (structs) in programming languages.

    Extensions to represent null values, e.g. a bitmap indicating which attributes are null.

    Variable-length fields can be represented by a pair (offset, length), where offset is the location within the record and length is the field length.

    All fields start at a predefined location, but extra indirection is required for variable-length fields.

    Example record structure of an account record: fields account_number, branch_name, balance (e.g. A-102, Perryridge, 400).
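A sketch of the (offset, length) scheme for the account record above. The exact header layout (1-byte null bitmap, two 2-byte-field (offset, length) descriptors, 4-byte balance) is an assumption for illustration.

```python
import struct

# Record layout: null bitmap | (off, len) for account_number |
# (off, len) for branch_name | balance | variable-length field data.
HEADER = "<BHHHHi"                      # 13 bytes, no padding

def encode_account(account_number: str, branch_name: str, balance: int) -> bytes:
    acct, name = account_number.encode(), branch_name.encode()
    off1 = struct.calcsize(HEADER)      # fields start right after the header
    off2 = off1 + len(acct)
    header = struct.pack(HEADER, 0, off1, len(acct), off2, len(name), balance)
    return header + acct + name

def decode_account(buf: bytes):
    _, off1, n1, off2, n2, balance = struct.unpack_from(HEADER, buf)
    return buf[off1:off1 + n1].decode(), buf[off2:off2 + n2].decode(), balance
```

Note the indirection: the reader finds every field via a fixed-offset descriptor, so field sizes can vary per record.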


    File Containing account Records

    File of Figure 11.6, with Record 2 Deleted and All Records Moved

    File of Figure 11.6, with Record 2 Deleted and the Final Record Moved

    Byte-String Representation of Variable-Length Records


    Clustering File Structure


    Clustering File Structure With Pointer Chains


    The depositor Relation


    The customer Relation


    Clustering File Structure


    Figure 11.4


    Figure 11.7


    Figure 11.8


    Figure 11.10


    Figure 11.20

    Byte-String Representation of Variable-Length Records


    Byte-string representation:

    Attach an end-of-record (⊥) control character to the end of each record.

    Difficulty with deletion.

    Difficulty with growth.


    Fixed-Length Representation

    Use one or more fixed-length records:

    reserved space

    pointers

    Reserved space: can use fixed-length records of a known maximum length; unused space in shorter records is filled with a null or end-of-record symbol.


    Pointer Method

    Pointer method:

    A variable-length record is represented by a list of fixed-length records, chained together via pointers.

    Can be used even if the maximum record length is not known.


    Pointer Method (Cont.)

    Disadvantage of the pointer structure: space is wasted in all records except the first in a chain.

    Solution is to allow two kinds of block in a file:

    Anchor block: contains the first records of chains.

    Overflow block: contains records other than those that are the first records of chains.

    Mapping of Objects to Files


    Mapping objects to files is similar to mapping tuples to files in a relational system; object data can be stored using file structures.

    Objects in O-O databases may lack uniformity and may be very large; such objects have to be managed differently from records in a relational system.

    Set fields with a small number of elements may be implemented using data structures such as linked lists.

    Set fields with a larger number of elements may be implemented as separate relations in the database.

    Set fields can also be eliminated at the storage level by normalization.

    Similar to the conversion of multivalued attributes of E-R diagrams to relations.

    Mapping of Objects to Files (Cont.)


    Objects are identified by an object identifier (OID); the storage system needs a mechanism to locate an object given its OID (this action is called dereferencing).

    Logical identifiers do not directly specify an object's physical location; must maintain an index that maps an OID to the object's actual location.

    Physical identifiers encode the location of the object so the object can be found directly. Physical OIDs typically have the following parts:

    1. a volume or file identifier

    2. a page identifier within the volume or file

    3. an offset within the page
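A physical OID of this shape can be packed into a single integer; the bit widths below (16 bits volume, 32 bits page, 16 bits offset) are an illustrative assumption.

```python
# Pack and unpack a physical OID: volume/file id, page id, page offset.

def make_oid(volume: int, page: int, offset: int) -> int:
    return (volume << 48) | (page << 16) | offset

def split_oid(oid: int):
    return (oid >> 48) & 0xFFFF, (oid >> 16) & 0xFFFFFFFF, oid & 0xFFFF
```

Because the location is encoded in the OID itself, dereferencing needs no index lookup, at the cost of pinning the object to that location.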

    Management of Persistent Pointers


    A physical OID may contain a unique identifier. This identifier is also stored in the object and is used to detect references via dangling pointers.

    Management of Persistent Pointers (Cont.)

    Implement persistent pointers using OIDs; persistent pointers are substantially longer than in-memory pointers.

    Pointer swizzling cuts down the cost of locating persistent objects that are already in memory.

    Software swizzling (swizzling on pointer dereference):

    When a persistent pointer is first dereferenced, the pointer is swizzled (replaced by an in-memory pointer) after the object is located in memory.

    Subsequent dereferences of the same pointer become cheap.

    The physical location of an object in memory must not change if swizzled pointers point to it; the solution is to pin pages in memory.

    When an object is written back to disk, any swizzled pointers it contains need to be unswizzled.
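Software swizzling can be sketched as below: the first dereference locates the object and caches an in-memory reference; later dereferences skip the lookup. The object_store dict is a hypothetical stand-in for the disk-resident database.

```python
# Software swizzling: replace a persistent pointer's OID lookup with a
# cached in-memory reference on first dereference.

object_store = {1001: {"branch_name": "Perryridge"}}   # OID -> stored object
disk_reads = []                                        # simulated disk reads

class PersistentPtr:
    def __init__(self, oid):
        self.oid = oid
        self.target = None            # in-memory pointer once swizzled

    def deref(self):
        if self.target is None:       # first use: locate object, swizzle
            disk_reads.append(self.oid)
            self.target = object_store[self.oid]
        return self.target            # subsequent dereferences are cheap
```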

    Hardware Swizzling


    With hardware swizzling, persistent pointers in objects need the same amount of space as in-memory pointers; extra storage external to the object is used to store the rest of the pointer information.

    Uses the virtual-memory translation mechanism to efficiently and transparently convert between persistent pointers and in-memory pointers.

    All persistent pointers in a page are swizzled when the page is first read in.

    Thus programmers have to work with just one type of pointer, i.e., the in-memory pointer.

    Some of the swizzled pointers may point to virtual memory addresses that are currently not allocated any real memory (and do not contain valid data).

    Hardware Swizzling


    A persistent pointer is conceptually split into two parts: a page identifier and an offset within the page.

    The page identifier in a pointer is a short indirect pointer: each page has a translation table that provides a mapping from the short page identifiers to full database page identifiers.

    The translation table for a page is small (at most 1024 pointers in a 4096-byte page with 4-byte pointers).

    Multiple pointers in a page to the same page share the same entry in the translation table.

    Hardware Swizzling (Cont.)


    Page image before swizzling (page located on disk)

    Hardware Swizzling (Cont.)


    When the system loads a page into memory, the persistent pointers in the page are swizzled as described below:

    1. Persistent pointers in each object in the page are located using object type information.

    2. For each persistent pointer (pi, oi), find its full page ID Pi:

    1. If Pi does not already have a virtual memory page allocated to it, allocate a virtual memory page to Pi and read-protect the page.

    Note: there need not be any physical space (whether in memory or on disk swap-space) allocated for the virtual memory page at this point. Space can be allocated later if (and when) Pi is accessed; in that case read-protection is not required.

    Accessing a memory location in the page will result in a segmentation violation, which is handled as described later.

    2. Let vi be the virtual page allocated to Pi (either earlier or above).

    3. Replace (pi, oi) by (vi, oi).

    3. Replace each entry (pi, Pi) in the translation table by (vi, Pi).
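The per-pointer steps above can be sketched as a pass over the pointers found in a loaded page; virtual-page numbers are handed out sequentially here purely for illustration.

```python
# Swizzling pass: rewrite each (page_id, offset) persistent pointer as
# (virtual_page, offset), allocating one virtual page per distinct page id.

def swizzle(pointers, page_to_vpage, next_vpage):
    swizzled = []
    for pid, off in pointers:
        if pid not in page_to_vpage:          # allocate a virtual page for Pi
            page_to_vpage[pid] = next_vpage
            next_vpage += 1
        swizzled.append((page_to_vpage[pid], off))  # (pi, oi) -> (vi, oi)
    return swizzled, next_vpage
```

Note how two pointers into the same page share one translation-table entry, as the text describes.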

    Hardware Swizzling (Cont.)


    When an in-memory pointer is dereferenced, if the operating system detects that the page it points to has not yet been allocated storage, or is read-protected, a segmentation violation occurs.

    The mmap() call in Unix is used to specify a function to be invoked on segmentation violation.

    The function does the following when it is invoked:

    1. Allocate storage (swap-space) for the page containing the referenced address, if storage has not been allocated earlier; turn off read-protection.

    2. Read in the page from disk.

    3. Perform pointer swizzling for each persistent pointer in the page, as described earlier.

    Hardware Swizzling (Cont.)


    Page image after swizzling:

    Page with short page identifier 2395 was allocated address 5001. Observe the change in pointers and the translation table.

    Page with short page identifier 4867 has been allocated address 4867. No change in pointers or the translation table.

    Hardware Swizzling (Cont.)


    After swizzling, all short page identifiers point to virtual memory addresses allocated for the corresponding pages.

    Functions accessing the objects are not even aware that the objects contain persistent pointers, and do not need to be changed in any way!

    Can reuse existing code and libraries that use in-memory pointers.

    After this, the pointer dereference that triggered the swizzling can continue.

    Optimizations:

    If all pages are allocated the same address as in the short page identifier, no changes are required in the page!

    No need for deswizzling: a swizzled page can be saved as-is to disk.

    A set of pages (a segment) can share one translation table. Pages can still be swizzled as and when fetched (the old copy of the translation table is needed).

    A process should not access more pages than the size of virtual memory; reuse of virtual memory addresses for other pages is expensive.

    Disk versus Memory Structure of Objects


    The format in which objects are stored in memory may be different from the format in which they are stored on disk in the database. Reasons are:

    software swizzling: the structures of persistent and in-memory pointers are different

    the database is accessible from different machines, with different data representations

    Make the physical representation of objects in the database independent of the machine and the compiler.

    Can transparently convert from the disk representation to the form required on the specific machine, language, and compiler when the object (or page) is brought into memory.

    Large Objects


    Large objects: binary large objects (blobs) and character large objects (clobs).

    Examples include:

    text documents

    graphical data such as images and computer-aided designs

    audio and video data

    Large objects may need to be stored in a contiguous sequence of bytes when brought into memory.

    If an object is bigger than a page, contiguous pages of the buffer pool must be allocated to store it.

    May be preferable to disallow direct access to data, and only allow access through a file-system-like API, to remove the need for contiguous storage.

    Modifying Large Objects


    If the application requires insert/delete of bytes in specified regions of an object:

    The B+-tree file organization (described later, in Chapter 12) can be modified to represent large objects.

    Each leaf page of the tree stores between half a page and one page worth of data from the object.

    Special-purpose application programs outside the database are used to manipulate large objects:

    Text data is treated as a byte string manipulated by editors and formatters.

    Graphical data and audio/video data are typically created and displayed by separate applications.

    A checkout/checkin method is used for concurrency control and creation of versions.