
Distributed Transactions

Alan Medlar

amedlar@cs.ucl.ac.uk

Motivation

• Distributed Database

• a collection of sites, each with its own database

• each site processes local transactions

• local transactions can only access local database

• Distributed transactions require co-ordination among sites

Advantages

• Distributed databases can improve availability (especially if we are using database replication)

• Parallel processing of sub-transactions at the individual sites, rather than processing everything at one site, improves performance

Disadvantages

• Cost: hardware, software development, network (leased lines?)

• Operational Overhead: network traffic, co-ordination overhead

• Technical: harder to debug, more security concerns, greater complexity

• ACID properties harder to achieve

Main Issues

• Transparency: the database provides an abstraction layer above data access; a distributed database should be accessed in the same way as a local one

• Distributed Transactions: local transactions are processed at only one site; global transactions need to preserve ACID across multiple sites and provide distributed query processing (e.g. a distributed join)

• Atomicity: all sites in a global transaction must commit, or none do

• Consistency: all schedules must be conflict serializable (last lecture!)

Failures

• Site failures: exactly the same as for local databases (hardware failure, out of memory, etc.)

• Networking failures

• Failure of a network link: no hope of communicating with the other database sites

• Loss of messages: the network link might be fine but congested, causing packet loss and TCP timeouts

• Network partition: more relevant to replication; the set of replicas might be divided in two, with transactions updating only the replicas in their own partition

Fragmentation

• Divide a relation into fragments that can be allocated to different sites to optimise transaction processing (reducing processing time and network traffic overhead)

• Horizontal and vertical fragmentation

Customer   Balance   Account no   Branch
Alice      200       1234         Euston
Bob        100       2345         Euston
Eve        5         3456         Euston
Richard    550       4567         Harrow
Jane       75        5678         Harrow
Graham     175       6789         Harrow

Horizontal Fragmentation (in this case taking advantage of usage locality)

Euston fragment:

Customer   Balance   Account no   Branch
Alice      200       1234         Euston
Bob        100       2345         Euston
Eve        5         3456         Euston

Harrow fragment:

Customer   Balance   Account no   Branch
Richard    550       4567         Harrow
Jane       75        5678         Harrow
Graham     175       6789         Harrow
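As a minimal sketch of the idea (Python, with the account relation hard-coded from the table above; the name horizontal_fragment is illustrative, not from the slides), a horizontal fragment is just a selection on the Branch attribute:

# Minimal sketch of horizontal fragmentation: each fragment is a
# selection on the Branch attribute, so tuples can be stored at the
# branch where they are most often accessed (usage locality).
ACCOUNTS = [
    ("Alice",   200, 1234, "Euston"),
    ("Bob",     100, 2345, "Euston"),
    ("Eve",       5, 3456, "Euston"),
    ("Richard", 550, 4567, "Harrow"),
    ("Jane",     75, 5678, "Harrow"),
    ("Graham",  175, 6789, "Harrow"),
]

def horizontal_fragment(rows, branch):
    """Fragment = all tuples whose Branch attribute equals `branch`."""
    return [row for row in rows if row[3] == branch]

euston = horizontal_fragment(ACCOUNTS, "Euston")
harrow = horizontal_fragment(ACCOUNTS, "Harrow")

# The union of the fragments recreates the original relation.
assert sorted(euston + harrow) == sorted(ACCOUNTS)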

Vertical Fragmentation (the additional Id attribute allows a join to recreate the original relation)

Fragment 1:

Id   Customer   Balance
0    Alice      200
1    Bob        100
2    Eve        5
3    Richard    550
4    Jane       75
5    Graham     175

Fragment 2:

Id   Account no   Branch
0    1234         Euston
1    2345         Euston
2    3456         Euston
3    4567         Harrow
4    5678         Harrow
5    6789         Harrow
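A matching sketch for vertical fragmentation (again Python; the split of attributes between the two fragments follows the tables above): each fragment keeps the shared Id, and an equi-join on Id recreates the original relation.

# Vertical fragmentation: split the attributes across two fragments,
# adding a shared Id so a join can recreate the original relation.
frag1 = {0: ("Alice", 200), 1: ("Bob", 100), 2: ("Eve", 5),
         3: ("Richard", 550), 4: ("Jane", 75), 5: ("Graham", 175)}
frag2 = {0: (1234, "Euston"), 1: (2345, "Euston"), 2: (3456, "Euston"),
         3: (4567, "Harrow"), 4: (5678, "Harrow"), 5: (6789, "Harrow")}

def join_on_id(f1, f2):
    """Equi-join the two vertical fragments on the shared Id."""
    return [f1[i] + f2[i] for i in sorted(f1) if i in f2]

print(join_on_id(frag1, frag2)[0])   # ('Alice', 200, 1234, 'Euston')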

Problem

• Now our data is split into fragments and each fragment is at a separate site

• How do we access these sites using transactions, whilst maintaining the ACID properties?

2-Phase Commit

• A distributed algorithm that permits all nodes in a distributed system to agree on whether to commit a transaction; the protocol results in either all sites committing or all aborting

• Completes despite network or node failures

• Necessary to provide atomicity

2-Phase Commit

• Voting Phase: each site is polled as to whether the transaction should commit (i.e. whether its sub-transaction can commit)

• Decision Phase: if any site says “abort” or does not reply, then all sites must be told to abort

• Logging is performed for failure recovery (as usual)

[Message sequence diagram: client, TC (transaction co-ordinator), sites A and B]

client -> TC: start

TC -> A: prepare
TC -> B: prepare

A -> TC: ready
B -> TC: ready

TC -> A: commit
TC -> B: commit

TC -> client: OK

Voting Phase

• TC (transaction co-ordinator) writes <prepare Ti> to log

• TC sends prepare message to all sites (A,B)

• The site’s local DBMS decides whether to commit its part of the transaction or to abort; if committing it writes <ready Ti> to its log, else <no Ti>

• A ready or abort message is then sent back to the TC

Decision Phase

• After receiving the results of all prepare messages (or after a timeout), the TC can decide whether the entire transaction should commit

• If any site replied “abort” or timed out, the TC aborts the entire transaction by logging <abort Ti> and then sending the “abort” message to all sites

• If all sites replied “ready”, the TC commits by logging <commit Ti> and sending a commit message to all sites

• Upon receipt of a commit message, each site logs <commit Ti> and only then alters the database in memory (the whole exchange is sketched below)
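The two phases can be sketched in a few lines of Python. This is an illustrative in-memory model, not a real implementation: Site, two_phase_commit and the direct method calls are assumptions standing in for networked sites, durable logs, and timeouts.

# Minimal in-memory sketch of 2-phase commit: messages are direct
# method calls and "logs" are plain lists. A real implementation
# sends network messages, forces log records to stable storage
# before replying, and handles timeouts in the voting phase.
class Site:
    def __init__(self, name, can_commit=True):
        self.name, self.can_commit, self.log = name, can_commit, []

    def prepare(self, tid):
        # Voting phase at the site: decide locally, log the vote, reply.
        self.log.append(f"<ready {tid}>" if self.can_commit else f"<no {tid}>")
        return "ready" if self.can_commit else "abort"

    def decide(self, tid, outcome):
        # Decision phase at the site: log the outcome, then apply/undo.
        self.log.append(f"<{outcome} {tid}>")

def two_phase_commit(tid, sites, tc_log):
    tc_log.append(f"<prepare {tid}>")                # TC logs first
    votes = [s.prepare(tid) for s in sites]          # voting phase
    outcome = "commit" if all(v == "ready" for v in votes) else "abort"
    tc_log.append(f"<{outcome} {tid}>")              # TC decides, then tells sites
    for s in sites:                                  # decision phase
        s.decide(tid, outcome)
    return outcome

tc_log = []
print(two_phase_commit("T1", [Site("A"), Site("B")], tc_log))                    # commit
print(two_phase_commit("T2", [Site("A"), Site("B", can_commit=False)], tc_log))  # abort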

Failure Example 1

• One of the database sites (A, B) fails

• On recovery the log is examined:

• if log contains <commit Ti>, redo the changes of the transaction

• if the log contains <abort Ti>, undo the changes

• if the log contains <ready Ti> but not a commit, contact the TC for the outcome of transaction Ti; if the TC is down, ask the other sites

• if the log contains none of ready, commit or abort, the failure must have occurred before the receipt of “prepare Ti”, so the TC will have aborted the transaction (see the recovery sketch below)
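These recovery rules translate almost directly into a lookup over the site’s log. A sketch, assuming a hypothetical ask_outcome callback standing in for the “contact the TC, then the other sites” step:

# Sketch of site recovery: on restart, the action taken for each
# transaction is determined entirely by what reached the local log.
def recover(tid, log, ask_outcome):
    if f"<commit {tid}>" in log:
        return "redo"      # outcome known: reapply the changes
    if f"<abort {tid}>" in log:
        return "undo"      # outcome known: roll the changes back
    if f"<ready {tid}>" in log:
        # Voted yes but never saw the outcome: ask the TC (or, if the
        # TC is down, the other sites) what was decided.
        return ask_outcome(tid)
    # No ready/commit/abort record: the site failed before replying
    # to "prepare", so the TC will have aborted the transaction.
    return "undo"

# E.g. a site that had voted ready must ask for the outcome:
print(recover("T1", ["<ready T1>"], ask_outcome=lambda tid: "redo"))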

Failure Example 2

• The transaction co-ordinator (TC) fails (sites A and B are waiting for the commit/abort message)

• Each database site’s log is examined:

• if any site’s log contains <commit Ti>, Ti must be committed at all sites

• if any site’s log contains <abort Ti> or <no Ti>, Ti must be aborted at all sites

• if any site’s log does not contain <ready Ti>, the TC must have failed before deciding to commit, so Ti can be aborted

• if none of the above apply, then all active sites must have <ready Ti> (but no commit or abort records); the TC must be consulted (when it comes back online)

Network Faults

• Failure of the network

• From the perspective of entities on one side of the network failure, the entities on the other side appear to have failed (so the previous strategies apply)

Locking (non-replicated system)

• Each local site has a lock manager

• administers lock requests for data items stored at site

• when a transaction requires a lock on a data item, it requests one from the lock manager

• the lock manager blocks the request until the lock can be granted (see the sketch below)

• Problem: deadlocks in a distributed system are clearly more complicated to resolve...
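A per-site lock manager of this kind can be sketched with one mutex per data item (Python threading; exclusive locks only, and no deadlock detection, which is precisely the hard part in the distributed case):

import threading
from collections import defaultdict

# Sketch of a per-site lock manager: one exclusive lock per local
# data item; a request blocks until the lock can be granted.
# Shared/exclusive modes and deadlock handling are omitted.
class LockManager:
    def __init__(self):
        self._locks = defaultdict(threading.Lock)

    def lock(self, item):
        self._locks[item].acquire()    # blocks until the lock is free

    def unlock(self, item):
        self._locks[item].release()

lm = LockManager()
lm.lock("account-1234")
# ... the transaction reads/writes the data item ...
lm.unlock("account-1234")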

Locking (single co-ordinator)

• Have a single lock manager for the whole distributed database

• manages locks at all sites

• a transaction reading a data item locks any one replica

• a transaction writing a data item locks all replicas

• Simpler deadlock handling

• Single point of failure

• Bottleneck?

Locking (replicated system)

• Majority protocol where each local site has a lock manager

• A transaction wants a lock on a data item that is replicated at n sites

• must get a lock for that data item at more than n/2 sites

• transaction cannot operate until it has locks on more than half of the replica sites (only one transaction can do this at a time)

• if a data item is written, all of its replicas must be updated... (a sketch of majority lock acquisition follows)
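A sketch of majority locking (Python; SiteLockManager and the timeout-based try_lock are illustrative assumptions): locks are requested at every replica site, and the transaction proceeds only if more than half are granted.

import threading
from collections import defaultdict

# Sketch of the majority protocol: a data item replicated at n sites
# may only be used once locks are held at more than n/2 of them, so
# at most one transaction can hold a majority at any time.
class SiteLockManager:
    def __init__(self):
        self._locks = defaultdict(threading.Lock)

    def try_lock(self, item, timeout=1.0):
        return self._locks[item].acquire(timeout=timeout)

    def unlock(self, item):
        self._locks[item].release()

def majority_lock(item, sites):
    acquired = [s for s in sites if s.try_lock(item)]
    if len(acquired) > len(sites) // 2:
        return acquired                 # majority held: may proceed
    for s in acquired:                  # no majority: release and retry later
        s.unlock(item)
    return None

sites = [SiteLockManager() for _ in range(5)]
held = majority_lock("x", sites)
print("locked at", len(held), "of", len(sites), "sites")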

Updating Replicas

• Replication makes reading more reliable: if each replica is unavailable with probability p, the probability that all n replicas are unavailable is p^n

• Replication makes writing less reliable: the probability that all n replicas are available to be updated with a write is (1 - p)^n (e.g. with p = 0.05 and n = 3, a read finds every replica down with probability 0.05^3 ≈ 0.0001, but only 0.95^3 ≈ 0.86 of writes find every replica up)

• Writing must succeed even if not all replicas are available...

Updating Replicas (2)

• Majority update protocol!

• Update more than half of the replicas (the rest are treated as “failed” and can be updated later), but this time add a timestamp or version number

• To read a data item, read more than half of the replicas and use the one with the most recent timestamp

• Writing becomes more reliable, but reading becomes more complex! (see the sketch below)
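A sketch of the majority update protocol (in-memory replicas as Python dicts; the version bookkeeping is simplified, e.g. a real writer would learn the current version by first reading a majority):

# Sketch of majority (quorum) reads and writes with version numbers.
# Each replica maps item -> (version, value); a write needs only a
# majority of replicas and stamps a new version; a read consults a
# majority and returns the value carrying the highest version.
def write_majority(replicas, item, value):
    majority = len(replicas) // 2 + 1
    live = [r for r in replicas if r is not None]   # None = failed replica
    if len(live) < majority:
        raise RuntimeError("cannot reach a majority of replicas")
    version = max(r.get(item, (0, None))[0] for r in live) + 1
    for r in live[:majority]:
        r[item] = (version, value)

def read_majority(replicas, item):
    majority = len(replicas) // 2 + 1
    live = [r for r in replicas if r is not None]
    # Any two majorities intersect, so at least one replica read here
    # carries the most recently written version.
    candidates = [r.get(item, (0, None)) for r in live[:majority]]
    return max(candidates, key=lambda t: t[0])[1]

replicas = [{}, {}, {}]
write_majority(replicas, "x", 42)
replicas[0] = None                     # one replica "fails"
print(read_majority(replicas, "x"))    # still reads 42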

~ Fin ~

(Graphics lectures begin on Monday 9th March)
