Posted on 19-Dec-2015
Topic: Denormalisation
S McKeever, Advanced Databases
Advanced Databases
The result of normalisation is a logical database design that is structurally consistent and has minimal redundancy.
So it’s all perfect. Is it?
Question
• Does it ever make sense to deliberately relax the normalisation rules and introduce redundancy into the system?
Answer
The answer is yes, but only when it is estimated that the system would otherwise be unable to meet its performance requirements.
Denormalisation
A fully normalised system does not necessarily provide maximum processing efficiency. In that situation, introducing redundancy in a controlled manner, by relaxing the normalisation rules, can improve the performance of the system.
Denormalisation
When we talk about denormalisation, we are not just talking about normal forms.
For example, we may decide to keep some portion of the logical data model in 2NF and the rest in 3NF.
In general, we use the term loosely to refer to situations where we combine relations so that the resulting relation is still normalised but may contain nulls. There are other techniques as well.
Denormalisation
Often in normalisation you split a table into two tables (as in our various examples).
Then, when you need data from both, you have to access two tables instead of one.
In some situations we may decide to leave the relations in 2NF, because it’s faster.
Denormalisation
Normalisation is still very important for database design. In addition, the following factors have to be considered:
• Denormalisation makes implementation more complex.
• Denormalisation often sacrifices flexibility.
• Denormalisation may speed up retrievals, but it slows down updates.
Denormalisation techniques
• Denormalisation uses various techniques.
• The techniques used will depend upon the usage of the database.
• Consider the use of these techniques for frequent or critical transactions:
Some techniques
Consider the introduction of controlled redundancy (denormalisation):
1 Combining 1:1 relationships
2 Duplicating non-key attributes in 1:* relationships to reduce joins
3 Duplicating attributes in *:* relationships to reduce joins
4 Introducing repeating groups
5 Partitioning relations into smaller chunks
Sample Relation Diagram
Sample Relations
1 Combining 1:1 relationships
Queries on the interview details for a client are very frequent in this database, but the data is held in two separate tables. Combining the two tables will be faster: it is faster to query one table than to join two.
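The combined-table idea can be sketched with Python’s sqlite3, used here only as a stand-in for any relational DBMS; the client_interview schema, the client number and the sample row are illustrative assumptions, not taken from the slides:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Denormalised design: Client and Interview (a 1:1 relationship)
# collapsed into a single combined table, so no join is needed.
cur.execute("""CREATE TABLE client_interview (
    client_no      TEXT PRIMARY KEY,
    name           TEXT,
    interview_date TEXT,  -- NULL for clients never interviewed
    comments       TEXT   -- NULL for clients never interviewed
)""")
cur.execute("INSERT INTO client_interview VALUES "
            "('CR76', 'John Kay', '2015-05-12', 'needs parking')")

# The frequent query now touches one table instead of joining two.
row = cur.execute(
    "SELECT name, interview_date FROM client_interview WHERE client_no = 'CR76'"
).fetchone()
print(row)  # ('John Kay', '2015-05-12')
```

The price is the nulls: clients who were never interviewed carry NULL interview columns, which is exactly the kind of null-bearing but still normalised relation mentioned earlier.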
2 Duplicating non-key attributes in 1:* relationships: Lookup Table
This table shows all properties available for rent. To “translate” the type code, we have to go to the lookup table.
2 Duplicating non-key attributes in 1:* relationships: Lookup Table duplicated
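The lookup-table duplication can be sketched as follows, again with sqlite3; the table, column and code names are assumptions chosen to resemble a rentals schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Denormalised design: the type description, normally held only in a
# property_type lookup table, is duplicated into the child table so that
# listing properties no longer needs a join.
cur.execute("""CREATE TABLE property_for_rent (
    property_no TEXT PRIMARY KEY,
    type        TEXT,  -- code, e.g. 'H' or 'F'
    description TEXT   -- duplicated from the property_type lookup table
)""")
cur.execute("INSERT INTO property_for_rent VALUES ('PA14', 'H', 'House')")
cur.execute("INSERT INTO property_for_rent VALUES ('PL94', 'F', 'Flat')")

# No join to the lookup table is needed to show readable types.
rows = cur.execute(
    "SELECT property_no, description FROM property_for_rent ORDER BY property_no"
).fetchall()
print(rows)  # [('PA14', 'House'), ('PL94', 'Flat')]
```

The trade-off is that every update to a type description must now be propagated to all duplicated copies.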
3 Duplicating attributes in *:* relationships to reduce joins
When you’re using an intermediate or bridging table, you can sometimes add duplicate attributes to the bridging table to speed up queries.
For example, in a furniture store an orders table is linked to order items, which are linked to products.
How do I show all orders for products of type “chair”? Add the product type to the order_items table for speed.
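A minimal sketch of the furniture-store example in sqlite3; the exact schema and sample data are assumptions for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

cur.execute("CREATE TABLE orders (order_no INTEGER PRIMARY KEY, customer TEXT)")
cur.execute("CREATE TABLE products (product_no INTEGER PRIMARY KEY,"
            " name TEXT, product_type TEXT)")
# Bridging table for the *:* relationship, with product_type duplicated
# from products so the frequent query avoids a third join.
cur.execute("""CREATE TABLE order_items (
    order_no     INTEGER,
    product_no   INTEGER,
    product_type TEXT,  -- duplicated from products
    PRIMARY KEY (order_no, product_no)
)""")
cur.executemany("INSERT INTO orders VALUES (?, ?)",
                [(1, 'Smith'), (2, 'Jones')])
cur.executemany("INSERT INTO products VALUES (?, ?, ?)",
                [(10, 'Oak chair', 'chair'), (11, 'Pine table', 'table')])
cur.executemany("INSERT INTO order_items VALUES (?, ?, ?)",
                [(1, 10, 'chair'), (2, 11, 'table')])

# "All orders for products of type chair": two tables instead of three.
rows = cur.execute("""
    SELECT DISTINCT o.order_no, o.customer
    FROM orders o
    JOIN order_items oi ON o.order_no = oi.order_no
    WHERE oi.product_type = 'chair'
""").fetchall()
print(rows)  # [(1, 'Smith')]
```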
4 Introducing repeating groups
But only where there is a limited number of repetitions, e.g. a maximum of three telephone numbers.
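As a sketch, here is a repeating group of at most three telephone numbers folded back into the parent table (sqlite3 again; the branch table and its values are hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Denormalised design: instead of a child table with one row per number,
# a fixed repeating group of at most three telephone numbers is held
# in the parent table itself.
cur.execute("""CREATE TABLE branch (
    branch_no TEXT PRIMARY KEY,
    tel_no1   TEXT,
    tel_no2   TEXT,  -- NULL if unused
    tel_no3   TEXT   -- NULL if unused
)""")
cur.execute("INSERT INTO branch VALUES "
            "('B005', '0171-886-1212', '0171-886-1300', NULL)")

row = cur.execute(
    "SELECT tel_no1, tel_no2, tel_no3 FROM branch WHERE branch_no = 'B005'"
).fetchone()
print(row)  # ('0171-886-1212', '0171-886-1300', None)
```

This only works because the maximum of three is fixed; an unbounded repeating group would force a schema change every time the limit grew.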
5 Partitioning relations
Rather than combining relations, an alternative approach is to decompose them into a number of smaller and more manageable partitions.
There are two main types of partitioning: horizontal and vertical.
This is a very important technique for performance tuning.
“Don’t give me such a big table to search… divide it up to help me.”
How do I decide what to chop the table into?
5 Partitioning relations
Horizontal: handy if there is a natural split, e.g. customers in Dublin in one table, the other customers in a second table.
Vertical: maybe some columns aren’t used much.
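A minimal sketch of horizontal partitioning on that natural split, using two sqlite3 tables plus a UNION ALL view to reunite them (real DBMSs offer declarative partitioning; the names and data here are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Horizontal partitioning: rows are split by a natural predicate (city).
cur.execute("CREATE TABLE customer_dublin "
            "(cust_no INTEGER PRIMARY KEY, name TEXT, city TEXT)")
cur.execute("CREATE TABLE customer_other "
            "(cust_no INTEGER PRIMARY KEY, name TEXT, city TEXT)")
cur.execute("INSERT INTO customer_dublin VALUES (1, 'Byrne', 'Dublin')")
cur.execute("INSERT INTO customer_other  VALUES (2, 'Walsh', 'Cork')")

# A view reunites the partitions for queries over the whole relation.
cur.execute("""CREATE VIEW customer AS
    SELECT * FROM customer_dublin
    UNION ALL
    SELECT * FROM customer_other""")

# Queries about Dublin customers scan only the (smaller) Dublin partition.
dublin = cur.execute("SELECT name FROM customer_dublin").fetchall()
all_rows = cur.execute("SELECT COUNT(*) FROM customer").fetchone()[0]
print(dublin, all_rows)  # [('Byrne',)] 2
```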
Supposing…
• A mobile phone company has a customer service system. All service calls are logged.
• Transaction description: when a phone call is received, the customer service clerk usually searches the database for a customer’s calls for a specific day.
• There are 1M customers. Each makes on average 2 calls per day.
• 2 × 365M records are added per year.
• It could take days to query the calls for a specific customer.
• Use partitioning?
5 Partitioning relations
Using Partitioning
Partition the customer_calls table into separate partitions per customer?
Per day? This one is easier, because you don’t have to keep “old” partitions up to date.
The search space will go from hundreds of millions of records to 2M.
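The arithmetic behind that claim, as a quick sketch using the figures given on the slide:

```python
customers = 1_000_000
calls_per_customer_per_day = 2
days_per_year = 365

# Unpartitioned: a year's worth of calls is one big search space.
rows_per_year = customers * calls_per_customer_per_day * days_per_year

# Partitioned per day: a lookup for "customer X on day D" scans
# only that day's partition.
rows_per_day_partition = customers * calls_per_customer_per_day

print(rows_per_year)           # 730000000  (730M rows per year)
print(rows_per_day_partition)  # 2000000    (2M rows per daily partition)
```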
5 Partitioning relations
5 Partitioning relations
Benefits:
1. Smaller and more manageable pieces of data (partitions)
2. Reduced recovery time
3. Failure impact is less
4. Import/export can be done at the partition level
5. Faster access to data
6. Partitions work independently of the other partitions
7. Very easy to use
Database Design
We’ve looked at techniques and concepts:
• ERDs: entities, relationships, cardinalities
• Normalisation
• Denormalisation
Database design phases
Conceptual → Logical → Physical
• Normalisation is done in the logical step.
• Denormalisation is done in the physical step.
• For physical design the DBMS must be known; for conceptual and logical design the DBMS is not necessarily known.
Design methodology defines 3 phases:
Conceptual: the process of constructing a model of the data used in an enterprise, independent of all physical considerations (the “what”).
Logical: the process of constructing a model of the data used in an enterprise based on a specific data model, but independent of a particular DBMS and other physical considerations.
Physical: the process of producing a description of the implementation of the database on secondary storage; it describes the base relations, file organisations and indexes used to achieve efficient access to the data, and any associated integrity constraints and security measures (the “how”).
Summary
• Sometimes, after normalising, we need to revisit the design in order to improve performance.
• Denormalisation is investigated for frequent or critical transactions.
• Various techniques are available.