04/21/23 Data Mining: Concepts and Techniques
Data Mining: Concepts and Techniques
These slides have been adapted from Han, J., Kamber, M., & Pei, J. Data Mining: Concepts and Techniques.
Chapter 4: Data Cube Technology
Efficient Computation of Data Cubes
Exploration and Discovery in
Multidimensional Databases
Summary
Efficient Computation of Data Cubes
General heuristics
Multi-way array aggregation
BUC
H-cubing
Star-Cubing
High-Dimensional OLAP
Cube: A Lattice of Cuboids
0-D (apex) cuboid: all
1-D cuboids: time, item, location, supplier
2-D cuboids: (time, item), (time, location), (time, supplier), (item, location), (item, supplier), (location, supplier)
3-D cuboids: (time, item, location), (time, item, supplier), (time, location, supplier), (item, location, supplier)
4-D (base) cuboid: (time, item, location, supplier)
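The lattice is simply the powerset of the dimension set, so an n-dimensional cube has 2^n cuboids. A minimal sketch of the enumeration (the function name is mine):

```python
from itertools import combinations

def cuboid_lattice(dimensions):
    """Enumerate every cuboid of a data cube, grouped by level:
    levels[0] is the apex (0-D) cuboid, levels[n] the base cuboid."""
    levels = []
    for k in range(len(dimensions) + 1):
        levels.append([frozenset(c) for c in combinations(dimensions, k)])
    return levels
```

For the 4-D cube above this yields 1 + 4 + 6 + 4 + 1 = 16 cuboids.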
Preliminary Tricks
Sorting, hashing, and grouping operations are applied to the dimension attributes in order to reorder and cluster related tuples
Aggregates may be computed from previously computed aggregates, rather than from the base fact table
Smallest-child: computing a cuboid from the smallest, previously computed cuboid
Cache-results: caching results of a cuboid from which other cuboids are computed to reduce disk I/Os
Amortize-scans: computing as many cuboids as possible at the same time to amortize disk reads
Share-sorts: sharing sorting costs across multiple cuboids when a sort-based method is used
Share-partitions: sharing the partitioning cost across multiple cuboids when hash-based algorithms are used
Efficient Computation of Data Cubes
General heuristics
Multi-way array aggregation
BUC
H-cubing
Star-Cubing
High-Dimensional OLAP
Multi-Way Array Aggregation
Array-based "bottom-up" algorithm
Uses multi-dimensional chunks
No direct tuple comparisons
Simultaneous aggregation on multiple dimensions
Intermediate aggregate values are reused for computing ancestor cuboids
Cannot do Apriori pruning: no iceberg optimization
[Figure: cuboid lattice over dimensions A, B, C]
Multi-way Array Aggregation for Cube Computation (MOLAP)
Partition arrays into chunks (a small subcube that fits in memory). Compressed sparse array addressing: (chunk_id, offset). Compute aggregates in "multiway" by visiting cube cells in the order that minimizes the number of times each cell is visited, reducing memory access and storage cost.
What is the best traversal order for multi-way aggregation?
[Figure: a 3-D array with dimensions A (a0–a3), B (b0–b3), C (c0–c3), partitioned into 64 chunks numbered 1 (a0, b0, c0) through 64 (a3, b3, c3)]
Multi-Way Array Aggregation for Cube Computation (Cont.)
Method: the planes should be sorted and computed in ascending order of their size
Idea: keep the smallest plane in main memory; fetch and compute only one chunk at a time for the largest plane
Limitation: the method works well only for a small number of dimensions
If there are a large number of dimensions, "top-down" computation and iceberg cube computation methods can be explored
Efficient Computation of Data Cubes
General heuristics
Multi-way array aggregation
BUC
H-cubing
Star-Cubing
High-Dimensional OLAP
Bottom-Up Computation (BUC)
BUC: bottom-up cube computation (note: top-down in our view!)
Divides dimensions into partitions and facilitates iceberg pruning
If a partition does not satisfy min_sup, its descendants can be pruned
If minsup = 1, compute the full CUBE!
No simultaneous aggregation
[Figure: the ABCD cuboid lattice; BUC processes cuboids depth-first in the order 1 all, 2 A, 3 AB, 4 ABC, 5 ABCD, 6 ABD, 7 AC, 8 ACD, 9 AD, 10 B, 11 BC, 12 BCD, 13 BD, 14 C, 15 CD, 16 D]
BUC: Partitioning
Usually, the entire data set can't fit in main memory
Sort distinct values, partition into blocks that fit, continue processing
Optimizations
  Partitioning: external sorting, hashing, counting sort
  Ordering dimensions to encourage pruning: cardinality, skew, correlation
  Collapsing duplicates: can't do holistic aggregates anymore!
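The recursion above can be sketched as a simplified, count-only BUC (names and structure are mine, not the paper's implementation), with iceberg pruning on partition size:

```python
from collections import defaultdict

def buc(tuples, ndims, min_sup, prefix=None, start=0, out=None):
    """Simplified BUC: explore the cuboid lattice recursively, pruning
    any partition whose support (count) falls below min_sup."""
    if out is None:
        out = {}
    if prefix is None:
        prefix = ('*',) * ndims
    out[prefix] = len(tuples)            # count aggregate for this cell
    for d in range(start, ndims):        # expand one more dimension
        parts = defaultdict(list)
        for t in tuples:
            parts[t[d]].append(t)
        for val, part in parts.items():
            if len(part) >= min_sup:     # iceberg pruning
                cell = prefix[:d] + (val,) + prefix[d + 1:]
                buc(part, ndims, min_sup, cell, d + 1, out)
    return out
```

Partitions below min_sup are never recursed into, so their whole descendant subtree is skipped, which is exactly the pruning the slide describes.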
Efficient Computation of Data Cubes
General heuristics
Multi-way array aggregation
BUC
H-cubing
Star-Cubing
High-Dimensional OLAP
H-Cubing: Using H-Tree Structure
Bottom-up computation exploring an H-tree structure
If the current computation of an H-tree cannot pass min_sup, do not proceed further (pruning)
No simultaneous aggregation
[Figure: cuboid lattice over dimensions A, B, C, D]
H-tree: A Prefix Hyper-tree
Month | City | Cust_grp | Prod | Cost | Price
Jan | Tor | Edu | Printer | 500 | 485
Jan | Tor | Hhd | TV | 800 | 1200
Jan | Tor | Edu | Camera | 1160 | 1280
Feb | Mon | Bus | Laptop | 1500 | 2500
Mar | Van | Edu | HD | 540 | 520
… | … | … | … | … | …
[Figure: the H-tree built from this table — the root branches on Cust_grp (edu, hhd, bus), then Month, then City; each leaf keeps quant-info (e.g., Sum: 1765, Cnt: 2, bins); a header table lists each attribute value with its quant-info (e.g., Edu Sum: 2285) and side-links]
Computing Cells Involving "City"
[Figure: the H-tree plus an extra header table H_Tor for Toronto; side-links thread all Tor nodes, so computation proceeds from (*, *, Tor) to (*, Jan, Tor)]
Computing Cells Involving Month But No City
[Figure: the H-tree with quant-info rolled up to the Month level]
1. Roll up quant-info
2. Compute cells involving month but no city
Top-k OK mark: if the Q.I. in a child passes the top-k avg threshold, so does its parent. No binning is needed!
Computing Cells Involving Only Cust_grp
[Figure: the H-tree; for cells involving only Cust_grp, check the header table directly]
Efficient Computation of Data Cubes
General heuristics
Multi-way array aggregation
BUC
H-cubing
Star-Cubing
High-Dimensional OLAP
Star-Cubing: An Integrating Method
D. Xin, J. Han, X. Li, B. W. Wah, Star-Cubing: Computing Iceberg Cubes by Top-Down and Bottom-Up Integration, VLDB'03
Explore shared dimensions: e.g., dimension A is the shared dimension of ACD and AD; ABD/AB means cuboid ABD has shared dimension AB
Allows shared computations: e.g., cuboid AB is computed simultaneously with ABD
Aggregate in a top-down manner, but with a bottom-up sub-layer underneath that allows Apriori pruning
Shared dimensions grow in a bottom-up fashion
[Figure: cuboid lattice annotated with shared dimensions, e.g., ABCD/all, ACD/A, ABD/AB, ABC/ABC, AC/AC, BC/BC, C/C]
Iceberg Pruning in Shared Dimensions
Anti-monotonic property of shared dimensions: if the measure is anti-monotonic, and the aggregate value on a shared dimension does not satisfy the iceberg condition, then none of the cells extended from this shared dimension can satisfy the condition either
Intuition: if we can compute the shared dimensions before the actual cuboid, we can use them for Apriori pruning
Problem: how to prune while still aggregating simultaneously on multiple dimensions?
Cell Trees
Use a tree structure similar to the H-tree to represent cuboids
Collapses common prefixes to save memory
Keeps a count at each node
Traverse the tree to retrieve a particular tuple
Star Attributes and Star Nodes
Intuition: if a single-dimensional aggregate on an attribute value p does not satisfy the iceberg condition, it is useless to distinguish such values during the iceberg computation. E.g., b2, b3, b4, c1, c2, c4, d1, d2, d3
Solution: replace such values by a *. Such attributes are star attributes, and the corresponding nodes in the cell tree are star nodes
A B C D Count
a1 b1 c1 d1 1
a1 b1 c4 d3 1
a1 b2 c2 d2 1
a2 b3 c3 d4 1
a2 b4 c3 d4 1
Example: Star Reduction
Suppose minsup = 2. Perform one-dimensional aggregation; replace attribute values whose count < 2 with *, and collapse all *'s together
The resulting table has all such attribute values replaced with the star attribute
With regard to the iceberg computation, this new table is a lossless compression of the original table

After star replacement:
A B C D Count
a1 b1 * * 1
a1 b1 * * 1
a1 * * * 1
a2 * c3 d4 1
a2 * c3 d4 1

After collapsing:
A B C D Count
a1 b1 * * 2
a1 * * * 1
a2 * c3 d4 2
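Star reduction on the table above can be sketched as follows (the function name is mine):

```python
from collections import Counter

def star_reduce(rows, min_sup):
    """Star reduction: per-dimension values with support < min_sup are
    replaced by '*', then identical rows are collapsed with counts."""
    ndims = len(rows[0])
    # one-dimensional aggregation: value frequencies per dimension
    freq = [Counter(r[d] for r in rows) for d in range(ndims)]
    reduced = Counter()
    for r in rows:
        starred = tuple(v if freq[d][v] >= min_sup else '*'
                        for d, v in enumerate(r))
        reduced[starred] += 1            # collapse identical rows
    return dict(reduced)
```

With minsup = 2 on the base table this reproduces the compressed table: (a1, b1, *, *): 2, (a1, *, *, *): 1, (a2, *, c3, d4): 2.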
Star Tree
Given the new compressed table, it is possible to construct the corresponding cell tree, called a star tree
Keep a star table at the side for easy lookup of star attributes
The star tree is a lossless compression of the original cell tree

A B C D Count
a1 b1 * * 2
a1 * * * 1
a2 * c3 d4 2
Star-Cubing Algorithm—DFS on Lattice Tree
[Figure: the cuboid lattice with shared dimensions, the base star tree (root: 5; a1: 3 with b*: 1 → c*: 1 → d*: 1 and b1: 2 → c*: 2 → d*: 2; a2: 2 with b*: 2 → c3: 2 → d4: 2), and the BCD: 5 child tree; numbers beside the nodes give the DFS processing order]
Multi-Way Aggregation
[Figure: child trees ABC/ABC, ABD/AB, ACD/A, and BCD created while traversing the base ABCD tree]
Star-Cubing Algorithm—DFS on Star-Tree
Multi-Way Star-Tree Aggregation
Start a depth-first search at the root of the base star tree
At each new node in the DFS, create the corresponding star trees that are descendants of the current tree, according to the integrated traversal ordering
E.g., in the base tree, when the DFS reaches a1, the ACD/A tree is created
When the DFS reaches b*, the ABD/AB tree is created
The counts in the base tree are carried over to the new trees
Multi-Way Aggregation (2)
When the DFS reaches a leaf node (e.g., d*), start backtracking
On every backtracking branch, the counts in the corresponding trees are output, the tree is destroyed, and the node in the base tree is destroyed
Example: when traversing from d* back to c*, the a1b*c*/a1b*c* tree is output and destroyed
When traversing from c* back to b*, the a1b*D/a1b* tree is output and destroyed
When at b*, jump to b1 and repeat a similar process
Efficient Computation of Data Cubes
General heuristics
Multi-way array aggregation
BUC
H-cubing
Star-Cubing
High-Dimensional OLAP
The Curse of Dimensionality
None of the previous cubing methods can handle high dimensionality!
A database of 600K tuples; each dimension has a cardinality of 100 and a Zipf skew of 2.
Motivation of High-D OLAP
Challenge to current cubing methods: the "curse of dimensionality" problem
Iceberg cubes and compressed cubes: only delay the inevitable explosion
Full materialization: still significant overhead in accessing results on disk
High-D OLAP is needed in applications
  Science and engineering analysis
  Bio-data analysis: thousands of genes
  Statistical surveys: hundreds of variables
Fast High-D OLAP with Minimal Cubing
Observation: OLAP occurs only on a small subset of dimensions at a time
Semi-online computational model:
1. Partition the set of dimensions into shell fragments
2. Compute data cubes for each shell fragment while retaining inverted indices or value-list indices
3. Given the pre-computed fragment cubes, dynamically compute cube cells of the high-dimensional data cube online
Properties of Proposed Method
Partitions the data vertically
Reduces a high-dimensional cube into a set of lower-dimensional cubes
Online reconstruction of the original high-dimensional space
Lossless reduction
Offers tradeoffs between the amount of pre-processing and the speed of online computation
Example Computation
Let the cube aggregation function be count
Divide the 5 dimensions into 2 shell fragments: (A, B, C) and (D, E)
tid A B C D E
1 a1 b1 c1 d1 e1
2 a1 b2 c1 d2 e1
3 a1 b2 c1 d1 e2
4 a2 b1 c1 d1 e2
5 a2 b1 c1 d1 e3
1-D Inverted Indices
Build a traditional inverted index or RID list
Attribute Value TID List List Size
a1 1 2 3 3
a2 4 5 2
b1 1 4 5 3
b2 2 3 2
c1 1 2 3 4 5 5
d1 1 3 4 5 4
d2 2 1
e1 1 2 2
e2 3 4 2
e3 5 1
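Building the index table above is a single scan (function name mine; TIDs start at 1 as in the example):

```python
from collections import defaultdict

def build_inverted_index(rows):
    """Map each attribute value to the sorted list of TIDs of the
    tuples that contain it (1-D inverted indices)."""
    index = defaultdict(list)
    for tid, row in enumerate(rows, start=1):
        for value in row:
            index[value].append(tid)   # tids arrive in sorted order
    return dict(index)
```

On the 5-tuple example this reproduces the table, e.g. a1 → {1, 2, 3} (size 3) and c1 → {1, 2, 3, 4, 5} (size 5).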
Shell Fragment Cubes: Ideas
Generalize the 1-D inverted indices to multi-dimensional ones in the data cube sense
Compute all cuboids for data cubes ABC and DE while retaining the inverted indices
For example, shell fragment cube ABC contains 7 cuboids: A, B, C; AB, AC, BC; ABC
This completes the offline computation stage
Cell | Intersection | TID List | List Size
a1 b1 | {1, 2, 3} ∩ {1, 4, 5} | {1} | 1
a1 b2 | {1, 2, 3} ∩ {2, 3} | {2, 3} | 2
a2 b1 | {4, 5} ∩ {1, 4, 5} | {4, 5} | 2
a2 b2 | {4, 5} ∩ {2, 3} | {} | 0
Shell Fragment Cubes: Size and Design
Given a database of T tuples, D dimensions, and shell fragment size F, the fragment cubes' space requirement is: O( T · (D/F) · (2^F − 1) )
For F < 5, the growth is sub-linear
Shell fragments do not have to be disjoint
Fragment groupings can be arbitrary to allow for maximum online performance
Known common combinations (e.g., <city, state>) should be grouped together
Shell fragment sizes can be adjusted for an optimal balance between offline and online computation
ID_Measure Table
If measures other than count are present, store in ID_measure table separate from the shell fragments
tid count sum
1 5 70
2 3 10
3 8 20
4 5 40
5 2 30
The Frag-Shells Algorithm
1. Partition the set of dimensions (A1, …, An) into a set of k fragments (P1, …, Pk)
2. Scan the base table once, and do the following:
3.   insert <tid, measure> into the ID_measure table
4.   for each attribute value ai of each dimension Ai
5.     build an inverted index entry <ai, tidlist>
6. For each fragment partition Pi
7.   build a local fragment cube Si by intersecting tid-lists in a bottom-up fashion
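A sketch of step 7, one fragment cube built bottom-up by TID-list intersection (names are mine; unlike the table two slides back, only cells that actually occur in the data are materialized, so empty cells such as (a2, b2) are never created):

```python
from collections import defaultdict
from itertools import combinations

def frag_shell_cube(rows, frag_dims):
    """Build every cuboid of one shell fragment. Keys are
    (dimension indexes, attribute values); values are sorted TID lists."""
    # 1-D inverted indices for the fragment's dimensions
    inv = defaultdict(set)
    for tid, row in enumerate(rows, start=1):
        for d in frag_dims:
            inv[(d, row[d])].add(tid)
    cube = {}
    for k in range(1, len(frag_dims) + 1):
        for dims in combinations(frag_dims, k):
            for tid, row in enumerate(rows, start=1):
                key = (dims, tuple(row[d] for d in dims))
                if key not in cube:
                    # intersect the 1-D TID lists of the cell's values
                    tids = set.intersection(*(inv[(d, row[d])] for d in dims))
                    cube[key] = sorted(tids)
    return cube
```

On the running 5-tuple example with fragment (A, B, C), cell a1 b1 gets TID list {1} and a2 b1 gets {4, 5}, matching the intersection table.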
Frag-Shells (2)
[Figure: dimensions A, B, C, D, E, F, … are split into fragments; an ABC cube and a DEF cube are computed, including the D, EF, and DE cuboids]

DE Cuboid — Cell | Tuple-ID List
d1 e1 | {1, 3, 8, 9}
d1 e2 | {2, 4, 6, 7}
d2 e1 | {5, 10}
… | …
Online Query Computation: Query
A query has the general form ⟨a1, a2, …, an⟩ : M
Each ai has 3 possible values:
1. Instantiated value
2. Aggregate function (*)
3. Inquire function (?)
For example, ⟨3, ?, ?, *, 1⟩ : count returns a 2-D data cube.
Online Query Computation: Method
Given the fragment cubes, process a query as follows:
1. Divide the query into fragments, matching the shell partition
2. Fetch the corresponding TID list for each fragment from the fragment cube
3. Intersect the TID lists from the fragments to construct the instantiated base table
4. Compute the data cube from the base table with any cubing algorithm
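Steps 2 and 3 reduce to intersecting one TID list per fragment. In the running example, the query ⟨a1, *, *, d1, *⟩ : count fetches a1's list {1, 2, 3} from fragment (A, B, C) and d1's list {1, 3, 4, 5} from (D, E):

```python
def answer_query(tid_lists):
    """Intersect the TID lists fetched from each fragment cube; the
    result identifies the base-table tuples matching the query, from
    which any measure can then be aggregated."""
    result = set(tid_lists[0])
    for tids in tid_lists[1:]:
        result &= set(tids)
    return sorted(result)
```

Here the intersection is {1, 3}, so the instantiated base table has 2 tuples and the count is 2.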
Online Query Computation: Sketch
[Figure: a query over dimensions A through N fetches TID lists from the matching fragment cubes, intersects them into an instantiated base table, and computes the online cube from it]
Experiment: Size vs. Dimensionality (50 and 100 cardinality)
(50-C): 10^6 tuples, 0 skew, 50 cardinality, fragment size 3. (100-C): 10^6 tuples, 2 skew, 100 cardinality, fragment size 2.
Experiment: Size vs. Shell-Fragment Size
(50-D): 10^6 tuples, 50 dimensions, 0 skew, 50 cardinality. (100-D): 10^6 tuples, 100 dimensions, 2 skew, 25 cardinality.
Experiment: Run-time vs. Shell-Fragment Size
10^6 tuples, 20 dimensions, 10 cardinality, skew 1, fragment size 3, 3 instantiated dimensions.
Experiments on Real World Data
UCI Forest CoverType data set: 54 dimensions, 581K tuples
Shell fragments of size 2 took 33 seconds and 325 MB to compute
3-D subquery with 1 instantiated dimension: 85 ms to 1.4 sec
Longitudinal Study of Vocational Rehabilitation data: 24 dimensions, 8,818 tuples
Shell fragments of size 3 took 0.9 seconds and 60 MB to compute
5-D query with 0 instantiated dimensions: 227 ms to 2.6 sec
High-D OLAP: Further Implementation Considerations
Incremental update: append new TIDs to inverted lists; add <tid: measure> to the ID_measure table
Incrementally adding new dimensions: form new inverted lists and add new fragments
Bitmap indexing: may further improve space usage and speed
Inverted index compression: store as d-gaps; explore more IR compression methods
Chapter 4: Data Cube Technology
Efficient Computation of Data Cubes
Exploration and Discovery in Multidimensional Databases
  Discovery-Driven Exploration of Data Cubes
  Sampling Cube
  Prediction Cube
  Regression Cube
Summary
Discovery-Driven Exploration of Data Cubes
Hypothesis-driven exploration by user: huge search space
Discovery-driven (Sarawagi et al. '98): effective navigation of large OLAP data cubes; pre-compute measures indicating exceptions to guide the user in data analysis at all levels of aggregation
Exception: a value significantly different from the value anticipated based on a statistical model
Visual cues such as background color are used to reflect the degree of exception of each cell
Kinds of Exceptions and their Computation
Parameters
  SelfExp: surprise of a cell relative to other cells at the same level of aggregation
  InExp: surprise beneath the cell
  PathExp: surprise beneath the cell for each drill-down path
Computation of exception indicators (model fitting and computing SelfExp, InExp, and PathExp values) can be overlapped with cube construction
Exceptions themselves can be stored, indexed, and retrieved like precomputed aggregates
Examples: Discovery-Driven Data Cubes
Complex Aggregation at Multiple Granularities: Multi-Feature Cubes
Multi-feature cubes (Ross, et al. 1998): Compute complex queries involving multiple dependent aggregates at multiple granularities
Ex. Grouping by all subsets of {item, region, month}, find the maximum price in 1997 for each group, and the total sales among all maximum-price tuples:

select item, region, month, max(price), sum(R.sales)
from purchases
where year = 1997
cube by item, region, month: R
such that R.price = max(price)

Continuing the example: among the max-price tuples, find the min and max shelf life, and find the fraction of the total sales due to tuples that have min shelf life within the set of all max-price tuples
Chapter 4: Data Cube Technology
Efficient Computation of Data Cubes
Exploration and Discovery in Multidimensional Databases
  Discovery-Driven Exploration of Data Cubes
  Sampling Cube
    X. Li, J. Han, Z. Yin, J.-G. Lee, Y. Sun, "Sampling Cube: A Framework for Statistical OLAP over Sampling Data", SIGMOD'08
  Prediction Cube
  Regression Cube
Statistical Surveys and OLAP
Statistical survey: a popular tool to collect information about a population based on a sample. Ex.: TV ratings, US Census, election polls
A common tool in politics, health, market research, science, and many more
An efficient way of collecting information (data collection is expensive)
Many statistical tools are available to determine validity: confidence intervals, hypothesis tests
OLAP (multidimensional analysis) on survey data is highly desirable, but can it be done well?
Surveys: Sample vs. Whole Population
Age\Education High-school College Graduate
18
19
20
…
Data is only a sample of population
Problems for Drilling in Multidim. Space
Age\Education High-school College Graduate
18
19
20
…
Data is only a sample of the population, but samples can be small when drilling down to certain regions of the multidimensional space
Traditional OLAP Data Analysis Model
Assumption: the data is the entire population. Query semantics is population-based, e.g., "What is the average income of 19-year-old college graduates?"
[Figure: a full data warehouse (Age, Education, Income) feeding a data cube]
OLAP on Survey (i.e., Sampling) Data
Age/Education High-school College Graduate
18
19
20
…
The semantics of the query is unchanged; the input data has changed
OLAP with Sampled Data Where is the missing link?
OLAP is run over sample data, but the analysis target is still the population
Idea: integrate sampling and statistical knowledge with traditional OLAP tools

Input Data | Analysis Target | Analysis Tool
Population | Population | Traditional OLAP
Sample | Population | Not available
Challenges for OLAP on Sampling Data
Computing confidence intervals in an OLAP context
No data? Not exactly: no data in some cube subspaces; sparse data. Causes include sampling bias and query selection bias
Curse of dimensionality: survey data can be high-dimensional (over 600 dimensions in a real-world example); impossible to fully materialize
Example 1: Confidence Interval
Age/Education High-school College Graduate
18
19
20
…
What is the average income of 19-year-old high-school students?
Return not only query result but also confidence interval
Confidence Interval
Confidence interval for the mean: x̄ ± t_c · σ̂_x̄, where x is a sample of the data set and x̄ is the sample mean
t_c is the critical t-value, found by a table look-up
σ̂_x̄ = s / √l is the estimated standard error of the mean (s: sample standard deviation, l: sample size)
Example: $50,000 ± $3,000 with 95% confidence
Treat the points in a cube cell as a sample; compute the confidence interval as for a traditional sample set
Return the answer in the form of a confidence interval: indicates the quality of the query answer; the user selects the desired confidence level
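A minimal sketch of the cell-as-sample computation (function name mine). It substitutes the normal critical value for t_c, a reasonable approximation for larger cells, since Python's standard library has no t-distribution:

```python
from math import sqrt
from statistics import NormalDist, mean, stdev

def confidence_interval(sample, confidence=0.95):
    """Treat the points in a cube cell as a sample and return
    (mean, margin), i.e. the answer mean ± margin at the given
    confidence level. Uses the normal critical value in place of
    the t critical value t_c."""
    n = len(sample)
    x_bar = mean(sample)
    se = stdev(sample) / sqrt(n)              # estimated std error of mean
    z = NormalDist().inv_cdf((1 + confidence) / 2)
    return x_bar, z * se
```

For a small cell the normal value understates the margin a bit; a real implementation would look up t_c for n − 1 degrees of freedom, as the slide says.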
Efficient Computing Confidence Interval Measures
Efficient computation in all cells of the data cube: both the mean and the confidence interval are algebraic
Why is the confidence interval measure algebraic? x̄ ± t_c · σ̂_x̄ is algebraic because σ̂_x̄ = s / √l, where both s and l (count) are algebraic
Thus one can calculate cells efficiently at more general cuboids without having to start at the base cuboid each time
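"Algebraic" here means each cell can carry a small fixed-size summary (count, sum, sum of squares) from which the mean and standard error are recovered at any level of aggregation, with no rescan of the base data. A sketch (names mine):

```python
import math

def cell_stats(values):
    """Algebraic summary of a cell: (count, sum, sum of squares)."""
    return (len(values), sum(values), sum(v * v for v in values))

def merge(s1, s2):
    """Roll two cells up into their parent by adding components."""
    return tuple(a + b for a, b in zip(s1, s2))

def mean_and_stderr(stats):
    """Recover mean and estimated standard error of the mean from the
    algebraic summary alone."""
    n, s, ss = stats
    m = s / n
    var = (ss - n * m * m) / (n - 1)   # sample variance
    return m, math.sqrt(var / n)
```

Merging two children's summaries gives exactly the summary of their union, which is what makes cuboid-to-cuboid aggregation cheap.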
Example 2: Query Expansion
Age/Education High-school College Graduate
18
19
20
…
What is the average income of 19-year-old college students?
Boosting Confidence by Query Expansion
From the example: the queried cell "19-year-old college students" contains only 2 samples
The confidence interval is large (i.e., low confidence). Why? Small sample size; high standard deviation within the samples
Small sample sizes can occur even at relatively low-dimensional selections
Collect more data? Expensive!
Use data in other cells? Maybe, but one has to be careful
Intra-Cuboid Expansion: Choice 1
Age/Education High-school College Graduate
18
19
20
…
Expand query to include 18 and 20 year olds?
Intra-Cuboid Expansion: Choice 2
Age/Education High-school College Graduate
18
19
20
…
Expand query to include high-school and graduate students?
Intra-Cuboid Expansion
If other cells in the same cuboid satisfy both of the following:
1. Similar semantic meaning
2. Similar cube value
then their data can be combined into the queried cell to "boost" confidence
Only use expansion if necessary: a bigger sample size will decrease the confidence interval
Intra-Cuboid Expansion (2)
Cell segment similarity: some dimensions are clear (Age); some are fuzzy (Occupation); may need domain knowledge
Cell value similarity: how to determine whether two cells' samples come from the same population? Two-sample t-test (confidence-based)
Example:
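The two-sample test can be sketched with Welch's t statistic (the slide does not fix the exact variant; this is one common choice, and the function name is mine):

```python
from math import sqrt
from statistics import mean, variance

def welch_t(sample1, sample2):
    """Welch's two-sample t statistic. A large |t| suggests the two
    cells' samples come from different populations, so their data
    should not be merged during intra-cuboid expansion."""
    n1, n2 = len(sample1), len(sample2)
    se = sqrt(variance(sample1) / n1 + variance(sample2) / n2)
    return (mean(sample1) - mean(sample2)) / se
```

The statistic would then be compared against a t critical value at the user's chosen confidence level before merging cells.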
Inter-Cuboid Expansion
If a query dimension is not correlated with the cube value, but is causing a small sample size by drilling down too much: remove the dimension (i.e., generalize it to *) and move to a more general cuboid
Can use the two-sample t-test to determine the similarity between two cells across cuboids
Can also use a different method, shown later
Query Expansion
Query Expansion Experiments (2)
Real-world sample data: 600 dimensions and 750,000 tuples
0.05% of the data is sampled to simulate the "sample" (allows error checking)
Query Expansion Experiments (3)
Sampling Cube Shell: Handling the "Curse of Dimensionality"
Real-world data may have > 600 dimensions; materializing the full sampling cube is unrealistic
Solution: only compute a "shell" around the full sampling cube
Method: selectively compute the best cuboids to include in the shell
  Chosen cuboids will be low-dimensional
  Size and depth of the shell depend on user preference (space vs. accuracy tradeoff)
Use the cuboids in the shell to answer queries
Sampling Cube Shell Construction
Top-down, iterative, greedy algorithm:
1. Top-down: start at the apex cuboid and slowly expand to higher dimensions, following the cuboid lattice structure
2. Iterative: add one cuboid per iteration
3. Greedy: at each iteration, choose the best cuboid to add to the shell
Stop when either the size limit is reached or it is no longer beneficial to add another cuboid
Sampling Cube Shell Construction (2)
How to measure the quality of a cuboid? Cuboid Standard Deviation (CSD)
  s(ci): the sample standard deviation of the samples in cell ci
  n(ci): the number of samples in ci; n(B): the total number of samples in cuboid B
  small(ci) returns 1 if n(ci) ≥ min_sup and 0 otherwise
CSD measures the amount of variance in the cells of a cuboid
Low CSD indicates high correlation with the cube value (good); high CSD indicates little correlation (bad)
Sampling Cube Shell Construction (3)
Goal (Cuboid Standard Deviation Reduction, CSDR): reduce CSD to increase query information; find the cuboids with the least CSD
Overall algorithm:
  Start with the apex cuboid and compute its CSD
  Choose the next cuboid to build that reduces the CSD the most from the apex
  Iteratively choose more cuboids that reduce the CSD of their parent cuboids the most (greedy selection: at each iteration, choose the cuboid with the largest CSDR to add to the shell)
Computing the Sampling Cube Shell
Query Processing
If the query matches a cuboid in the shell, use it. If it does not:
1. A more specific cuboid exists: aggregate to the query's dimension level and answer the query
2. A more general cuboid exists: use the value in its cell
3. Multiple more general cuboids exist: use the most confident value
Query Accuracy
Sampling Cube: Sampling-Aware OLAP
1. Confidence intervals in query processing: integration with OLAP queries; efficient algebraic query processing
2. Expand queries to increase confidence: solves the sparse-data problem; inter-/intra-cuboid query expansion
3. Cube materialization with limited space: sampling cube shell
Chapter Summary
Efficient algorithms for computing data cubes: multiway array aggregation, BUC, H-cubing, Star-Cubing, high-D OLAP by minimal cubing
Further developments of data cube technology: discovery-driven cubes, multi-feature cubes, sampling cubes
Explanation of Multi-Way Array Aggregation: the a0b0 chunk
[Figure: the a0 plane (rows b0–b3, columns c0–c3); scanning chunk a0b0 (cells a0b0c0–c3) accumulates partial aggregates (marked x) along the b0 row and the c0–c3 columns]
The a0b1 chunk
[Figure: scanning chunk a0b1 adds partial aggregates (y) alongside those from a0b0 (x)]
Done with a0b0
The a0b2 chunk
[Figure: scanning chunk a0b2 adds partial aggregates (z) alongside x and y]
Done with a0b1
Table Visualization (the a0b3 chunk)
[Figure: scanning chunk a0b3 adds partial aggregates (u) alongside x, y, and z]
Done with a0b2
Table Visualization (the a1b0 chunk)
[Figure: scanning moves on to a1b0; the aggregates for the a0 plane are complete]
Done with a0b3; done with a0c*
The a3b3 chunk (last one)
[Figure: after scanning the final chunk a3b3, all partial aggregates are complete]
Done with a0b3; done with a0c*; done with b*c*. Finish
Memory Used
A: 40 distinct values; B: 400 distinct values; C: 4,000 distinct values
ABC sort order:
  Plane AB: needs 1 chunk (10 × 100 × 1)
  Plane AC: needs 4 chunks (10 × 1000 × 4)
  Plane BC: needs 16 chunks (100 × 1000 × 16)
Total memory: 1,641,000
Memory Used
A: 40 distinct values; B: 400; C: 4,000 (chunk sizes 10, 100, 1,000)
Scan order CBA:
Plane CB: needs 1 chunk: 1,000 × 100 × 1 = 100,000
Plane CA: needs 4 chunks: 1,000 × 10 × 4 = 40,000
Plane BA: needs 16 chunks: 100 × 10 × 16 = 16,000
Total memory: 156,000
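The two totals above follow one rule, sketched below (this is an illustrative script, not code from the slides): scanning a chunked 3-D array in order d1 d2 d3 keeps a single chunk of plane d1d2, one row of chunks of plane d1d3, and the whole plane d2d3 in memory.

```python
# Memory (in cells) needed for the three 2-D planes during multi-way
# array aggregation, for the slides' example array.
CHUNKS = 4                               # each dimension split into 4 chunks
card = {"A": 40, "B": 400, "C": 4000}    # distinct values per dimension

def plane_memory(order):
    d1, d2, d3 = order
    chunk = {d: card[d] // CHUNKS for d in card}
    return (chunk[d1] * chunk[d2]        # plane d1d2: a single chunk
            + chunk[d1] * card[d3]       # plane d1d3: one row of chunks
            + card[d2] * card[d3])       # plane d2d3: kept whole

print(plane_memory("ABC"))  # 1641000
print(plane_memory("CBA"))  # 156000
```

Ordering the dimensions by increasing cardinality (CBA here) minimizes the planes that must be kept whole, which is why it needs roughly a tenth of the memory.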
H-Cubing Example
H-Tree 1 (condition: ???):
  root: 4
    a1: 3
      b1: 2 → c1: 1, c2: 1
      b2: 1 → c3: 1
    a2: 1
      b1: 1 → c1: 1
H-table: a1: 3, a2: 1, b1: 3, b2: 1, c1: 2, c2: 1, c3: 1
Output: (*,*,*): 4
Side-links are used to create a virtual tree. In this example, for clarity, we print the resulting virtual tree, not the real tree.
Project c1 (condition: ??c1)
H-Tree 1.1 (side-link projection on c1):
  root: 4
    a1: 1 → b1: 1
    a2: 1 → b1: 1
H-table: a1: 1, a2: 1, b1: 2, b2: 1
Output: (*,*,*): 4; (*,*,c1): 2
Project b1 (condition: ?b1c1)
H-Tree 1.1.1:
  root: 4
    a1: 1   a2: 1
H-table: a1: 1, a2: 1
Output: (*,*,*): 4; (*,*,c1): 2; (*,b1,c1): 2
Aggregate to ?*c1 (condition: ??c1)
H-Tree 1.1.2:
  root: 4
    a1: 1   a2: 1
H-table: a1: 1, a2: 1
Output: (*,*,*): 4; (*,*,c1): 2; (*,b1,c1): 2
Aggregate to ??* (condition: ???)
H-Tree 1.2 (back to the full tree):
  root: 4
    a1: 3 → b1: 2, b2: 1
    a2: 1 → b1: 1
H-table: a1: 3, a2: 1, b1: 3, b2: 1
Output: (*,*,*): 4; (*,*,c1): 2; (*,b1,c1): 2
Project b1 (condition: ?b1*)
H-Tree 1.2.1:
  root: 4
    a1: 2   a2: 1
H-table: a1: 2, a2: 1
Output: (*,*,*): 4; (*,*,c1): 2; (*,b1,c1): 2; (*,b1,*): 3; (a1,b1,*): 2
After this we also project on a1b1*.
Aggregate to ?** (condition: ???)
H-Tree 1.2.2:
  root: 4
    a1: 3   a2: 1
H-table: a1: 3, a2: 1
Output: (*,*,*): 4; (*,*,c1): 2; (*,b1,c1): 2; (*,b1,*): 3; (a1,b1,*): 2; (a1,*,*): 3
Here we also project on a1. Finish.
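The walkthrough's output can be cross-checked by brute force. The four base tuples below are not given on the slides; they are a hypothetical base table chosen to be consistent with the H-tree counts. An iceberg cube with min_sup = 2 over them should yield exactly the cells the H-cubing trace outputs.

```python
from itertools import product
from collections import Counter

# Hypothetical base table consistent with the H-tree of the example.
tuples = [("a1", "b1", "c1"),
          ("a1", "b1", "c2"),
          ("a1", "b2", "c3"),
          ("a2", "b1", "c1")]

counts = Counter()
for t in tuples:
    # each tuple contributes to every cell obtained by starring out dimensions
    for mask in product([True, False], repeat=3):
        counts[tuple(v if keep else "*" for v, keep in zip(t, mask))] += 1

iceberg = {cell: n for cell, n in counts.items() if n >= 2}
print(sorted(iceberg.items()))
```

The result is the six cells of the trace: (*,*,*): 4, (a1,*,*): 3, (*,b1,*): 3, (a1,b1,*): 2, (*,b1,c1): 2, (*,*,c1): 2.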
3. Explanation of Star-Cubing
ABCD-Tree (star-tree; infrequent values are collapsed into star nodes *):
  root: 5
    a1: 3
      b*: 1 → c*: 1 → d*: 1
      b1: 2 → c*: 2 → d*: 2
    a2: 2
      b*: 2 → c3: 2 → d4: 2
ABCD-Cuboid Tree: NULL
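Star reduction can be sketched as follows. The five base tuples are not shown on the slide; they are a hypothetical base table chosen to be consistent with the star-tree above, assuming min_sup = 2: any attribute value occurring fewer than 2 times is replaced by a star value.

```python
from collections import Counter

# Hypothetical base table consistent with the star-tree on the slide.
rows = [("a1", "b1", "c1", "d1"),
        ("a1", "b1", "c2", "d2"),
        ("a1", "b2", "c5", "d5"),
        ("a2", "b3", "c3", "d4"),
        ("a2", "b4", "c3", "d4")]

MIN_SUP = 2
# per-dimension value frequencies
freq = [Counter(r[i] for r in rows) for i in range(4)]

def star(row):
    # collapse values with support below MIN_SUP into "b*", "c*", "d*", ...
    return tuple(v if freq[i][v] >= MIN_SUP else v[0] + "*"
                 for i, v in enumerate(row))

starred = Counter(star(r) for r in rows)
print(starred)
```

The starred tuples reproduce the three leaf paths of the star-tree: (a1, b1, c*, d*): 2, (a1, b*, c*, d*): 1, and (a2, b*, c3, d4): 2.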
Step 1
Starting at the root of the ABCD-tree, cuboid-tree node BCD:5 is created.
Step 2
Visiting a1:3 creates cuboid-tree node a1CD/a1:3.
Output adds: (a1,*,*): 3
Step 3
Visiting b*:1 under a1 creates cuboid-tree node a1b*D/a1b*:1.
Step 4
Visiting c*:1 creates cuboid-tree node a1b*c*/a1b*c*:1.
Step 5
Visiting d*:1 inserts d*:1 into the BCD, a1CD, and a1b*D cuboid trees.
Mine subtree ABC/ABC
Backtracking from d*, the subtree a1b*c*/a1b*c*:1 is mined, but there is nothing to do; it is removed.
Mine subtree ABD/AB
The subtree a1b*D/a1b*:1 is mined, but there is nothing to do; it is removed.
Step 6
Visiting b1:2 under a1 creates cuboid-tree node a1b1D/a1b1:2.
Output adds: (a1,b1,*): 2
Step 7
Visiting c*:2 creates cuboid-tree node a1b1c*/a1b1c*:2 and raises the aggregated c* count in the ancestor cuboid trees to 3.
Step 8
Visiting d*:2 inserts d* into the cuboid trees: the aggregated d* counts rise to 3 in BCD and a1CD, and d*:2 is added below a1b1D.
Mine subtree ABC/ABC
The subtree a1b1c*/a1b1c*:2 is mined, but there is nothing to do; it is removed.
Mine subtree ABD/AB
The subtree a1b1D/a1b1:2 is mined, but there is nothing to do (all interior nodes are starred); it is removed.
Mine subtree ACD/A
The subtree a1CD/a1:3 is mined, but there is nothing to do (all interior nodes are starred); it is removed.
Step 9
Visiting a2:2 creates cuboid-tree node a2CD/a2:2.
Output adds: (a2,*,*): 2
Step 10
Visiting b*:2 under a2 creates cuboid-tree node a2b*D/a2b*:2 and raises the aggregated b* count in the BCD tree to 3.
Step 11
Visiting c3:2 creates cuboid-tree node a2b*c3/a2b*c3:2 and inserts c3:2 into the ancestor cuboid trees.
Step 11 (cont.)
Visiting d4:2 inserts d4:2 into the ancestor cuboid trees.
Mine subtree ABC/ABC
The subtree a2b*c3/a2b*c3:2 is mined, but there is nothing to do; it is removed.
Mine subtree ABD/AB
The subtree a2b*D/a2b*:2 is mined, but there is nothing to do; it is removed.
Mine Subtree ACD/A
The subtree a2CD/a2:2 is mined recursively for the AC/AC and AD/A cuboids, then removed.
Recursive Mining – Step 1 (ACD/A tree → ACD/A-Cuboid Tree)
Cuboid-tree node a2D/a2:2 is created from the a2CD/a2:2 tree.
Recursive Mining – Step 2 (ACD/A tree → ACD/A-Cuboid Tree)
Visiting c3:2 creates cuboid-tree node a2c3/a2c3:2.
Output adds: (a2,*,c3): 2
Recursive Mining – Backtrack (ACD/A tree → ACD/A-Cuboid Tree)
Same as before: as we backtrack, the child trees are mined recursively.
Output adds: (a2,c3,d4): 2
Mine Subtree BCD
The BCD:5 subtree is mined recursively for the BC/BC, BD/B, and CD cuboids, then removed.
Mine Subtree BCD (BCD-Tree → BCD-Cuboid Tree)
We will skip this step; you may do it as an exercise.
Finish
Final output: (a1,*,*): 3; (a1,b1,*): 2; (a2,*,*): 2; (a2,*,c3): 2; (a2,c3,d4): 2; plus the patterns from the BCD tree.
Efficient Computation of Data Cubes
General heuristics
Multi-way array aggregation
BUC
H-cubing
Star-Cubing
High-Dimensional OLAP
Computing non-monotonic measures
Compressed and closed cubes
Computing Cubes with Non-Antimonotonic Iceberg Conditions
J. Han, J. Pei, G. Dong, K. Wang. Efficient Computation of Iceberg Cubes with Complex Measures. SIGMOD'01
Most cubing algorithms cannot compute cubes with non-antimonotonic iceberg conditions efficiently.
Example:
  CREATE CUBE Sales_Iceberg AS
  SELECT month, city, cust_grp, AVG(price), COUNT(*)
  FROM Sales_Infor
  CUBE BY month, city, cust_grp
  HAVING AVG(price) >= 800 AND COUNT(*) >= 50
How can the constraint be pushed into the iceberg cube computation?
Non-Anti-Monotonic Iceberg Condition
Anti-monotonic: once a cell fails the condition, continued processing (all of its descendant cells) will still fail it.
The cubing query with avg is non-anti-monotonic:
(Mar, *, *, 600, 1800) fails the HAVING clause, yet (Mar, *, Bus, 1300, 360) passes it.
  CREATE CUBE Sales_Iceberg AS
  SELECT month, city, cust_grp, AVG(price), COUNT(*)
  FROM Sales_Infor
  CUBE BY month, city, cust_grp
  HAVING AVG(price) >= 800 AND COUNT(*) >= 50
Month  City  Cust_grp  Prod     Cost  Price
Jan    Tor   Edu       Printer   500    485
Jan    Tor   Hld       TV        800   1200
Jan    Tor   Edu       Camera   1160   1280
Feb    Mon   Bus       Laptop   1500   2500
Mar    Van   Edu       HD        540    520
…      …     …         …         …      …
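The failure of anti-monotonicity can be checked directly with the two cells from the slide (a minimal sketch; the cell values are the slide's, the dict layout is mine):

```python
# avg is not anti-monotonic: a cell can fail AVG(price) >= 800 while one of
# its descendants passes, so the HAVING clause cannot prune at the parent.
parent = {"cell": ("Mar", "*", "*"), "avg_price": 600, "count": 1800}
child = {"cell": ("Mar", "*", "Bus"), "avg_price": 1300, "count": 360}

def passes(c):
    # the HAVING clause of Sales_Iceberg
    return c["avg_price"] >= 800 and c["count"] >= 50

print(passes(parent), passes(child))  # False True
```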
From Average to Top-k Average
Let (*, Van, *) cover 1,000 records:
avg(price) is the average price of those 1,000 sales
avg50(price) is the average price of the top-50 sales (the 50 highest prices)
The top-k average is anti-monotonic: if the top-50 sales in Van. have avg(price) <= 800, then the top-50 deals in Van. during Feb. must also have avg(price) <= 800.
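The anti-monotonicity claim can be illustrated numerically (a sketch with randomly generated prices, not data from the slides): the top-50 average of any subset can never exceed the top-50 average of the full set.

```python
import random

def avg_topk(prices, k):
    """Average of the k largest values."""
    top = sorted(prices, reverse=True)[:k]
    return sum(top) / len(top)

random.seed(0)
van_sales = [random.randint(100, 2000) for _ in range(1000)]  # (*, Van, *)
feb_sales = van_sales[:300]                                   # (Feb, Van, *)

# a descendant cell's top-50 average is bounded by its ancestor's
assert avg_topk(feb_sales, 50) <= avg_topk(van_sales, 50)
```

So once a cell fails avg50(price) >= v, every descendant cell fails too and can be pruned.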
Binning for Top-k Average
Computing the exact top-k average is costly for large k. Binning idea (for testing avg50(c) >= 800):
Large-value collapsing: use one sum and one count to summarize all records with measure >= 800; if that count >= 50, there is no need to check the "small" records
Small-value binning: a group of bins, each covering a range (e.g., 600~800, 400~600, etc.); register a sum and a count for each bin
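Registering records into such bins can be sketched as follows (the bin boundaries and prices are assumptions for illustration; only the >= 800 collapsing and the 200-wide ranges mirror the slide):

```python
# Each bin keeps a (sum, count) register; everything >= 800 collapses
# into a single register, smaller values fall into range bins.
BOUNDS = [800, 600, 400, 200, 0]   # bin lower edges, highest first

def register(bins, price):
    for lo in BOUNDS:
        if price >= lo:
            bins.setdefault(lo, [0, 0])
            bins[lo][0] += price   # running sum for this bin
            bins[lo][1] += 1       # running count for this bin
            break

bins = {}
for p in [950, 1200, 700, 650, 450, 820]:
    register(bins, p)
print(bins)  # {800: [2970, 3], 600: [1350, 2], 400: [450, 1]}
```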
Computing Approximate Top-k Average
Suppose for (*, Van, *) we have:
Range     Sum     Count
over 800  28,000  20
600~800   10,600  15
400~600   15,200  30
…         …       …
The top 50 records span these three rows: 20 + 15 come from the first two, and the remaining 15 come from the 400~600 bin, each bounded above by 600. So:
Approximate avg50() = (28,000 + 10,600 + 600 × 15) / 50 = 952
The cell may pass the HAVING clause.
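The estimate above can be computed mechanically from the bin registers (a sketch; the tuple layout is mine, the numbers are the slide's):

```python
# Bins as (upper_bound, sum, count), highest range first. The ">= 800"
# register is exact; a partially-used bin is over-estimated by its upper bound.
bins = [(None, 28000, 20),   # over 800
        (800, 10600, 15),    # 600~800
        (600, 15200, 30)]    # 400~600

def approx_topk_avg(bins, k):
    total, need = 0.0, k
    for upper, s, cnt in bins:
        if cnt <= need:
            total += s             # take the whole bin's registered sum
            need -= cnt
        else:
            total += upper * need  # over-estimate the partial bin
            need = 0
        if need == 0:
            break
    return total / k

print(approx_topk_avg(bins, 50))  # 952.0
```

Because every partial bin is over-estimated, approx avg50() >= real avg50(), which is what makes the weakened condition safe for pruning.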
Weakened Conditions Facilitate Pushing
Accumulate quantitative information for cells to compute average iceberg cubes efficiently:
Three pieces: sum, count, and the top-k bins
Use the top-k bins to estimate and prune descendants
Use sum and count to consolidate the current cell
Condition strength (strongest → weakest):
avg() >= v: not anti-monotonic
real avg50() >= v: anti-monotonic, but computationally costly
approximate avg50() >= v: anti-monotonic and can be computed efficiently
Computing Iceberg Cubes with Other Complex Measures
Key point: find a function that is weaker but still ensures certain anti-monotonicity
Examples:
Avg() <= v: avg_k(c) <= v (bottom-k average)
Avg() >= v only (no count): max(price) >= v
Sum(profit) (profit can be negative): p_sum(c) >= v if p_count(c) >= k; or otherwise, sum_k(c) >= v
Others: conjunctions of multiple conditions
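The second example can be sketched concretely (illustrative values, not from the slides): max(price) >= v is a weaker but anti-monotonic stand-in for avg(price) >= v, since no subset of a cell's records can average above the cell's maximum.

```python
def can_prune(prices, v):
    # if even the maximum is below v, no descendant cell can reach avg >= v
    return max(prices) < v

cell = [300, 450, 700]                 # hypothetical cell, all prices < 800
assert can_prune(cell, 800)            # safe to prune the whole subtree
assert not can_prune([300, 900], 800)  # max >= 800: must keep exploring
```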
Efficient Computation of Data Cubes
General heuristics
Multi-way array aggregation
BUC
H-cubing
Star-Cubing
High-Dimensional OLAP
Computing non-monotonic measures
Compressed and closed cubes
Compressed Cubes: Condensed or Closed Cubes
Size is challenging even for an iceberg cube: suppose 100 dimensions and only 1 base cell with count = 10. How many aggregate cells have count >= 10? (All 2^100 - 1 of them.)
Condensed cube:
Only one cell needs to be stored, (a1, a2, …, a100, 10), which represents all the corresponding aggregate cells
Efficient computation of the minimal condensed cube
Closed cube:
Dong Xin, Jiawei Han, Zheng Shao, and Hongyan Liu, "C-Cubing: Efficient Computation of Closed Cubes by Aggregation-Based Checking", ICDE'06
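The count behind that question is worth making explicit (a one-line sketch): with a single base cell, each of the 100 dimensions can independently be kept or starred out, and every resulting cell aggregates the same count of 10.

```python
DIMS = 100
total_cells = 2 ** DIMS            # every way of starring out dimensions
aggregate_cells = total_cells - 1  # all cells except the base cell itself
print(aggregate_cells)  # 1267650600228229401496703205375
```

A condensed cube replaces all of these with the single stored cell (a1, a2, …, a100, 10).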
Exploration and Discovery in Multidimensional Databases
Discovery-Driven Exploration of Data Cubes
Complex Aggregation at Multiple Granularities: Multi-Feature Cubes
Cube-Gradient Analysis
Cube-Gradient (Cubegrade)
Analysis of changes of sophisticated measures in multi-dimensional spaces:
Query: how did the average house price in Vancouver in '00 change compared with '99?
Answer: apartments in the West went down 20%; houses in Metrotown went up 10%
The Cubegrade problem (Imielinski et al.): changes in dimensions → changes in measures, via drill-down, roll-up, and mutation
From Cubegrade to Multi-dimensional Constrained Gradients in Data Cubes
Significantly more expressive than association rules: captures trends in user-specified measures
Serious challenges:
Many trivial cells in a cube → a "significance constraint" to prune trivial cells
Too many pairs of cells to enumerate → a "probe constraint" to select a subset of cells to examine
Only interesting changes are wanted → a "gradient constraint" to capture significant changes
MD Constrained Gradient Mining
Significance constraint Csig: (cnt >= 100)
Probe constraint Cprb: (city = "Van", cust_grp = "busi", prod_grp = "*")
Gradient constraint Cgrad(cg, cp): (avg_price(cg) / avg_price(cp) >= 1.3)
Dimensions and Measures:
cid  Yr  City  Cst_grp  Prd_grp  Cnt    Avg_price
c1   00  Van   Busi     PC         300  2100
c2   *   Van   Busi     PC        2800  1800
c3   *   Tor   Busi     PC        7900  2350
c4   *   *     busi     PC       58600  2250
c1 is a base cell; c2, c3, and c4 are aggregated cells; c2 and c3 are siblings; c4 is an ancestor.
Probe cells satisfy Cprb; (c4, c2) satisfies Cgrad!
Efficient Computation of Cube-Gradients
Compute probe cells using Csig and Cprb; the set P of probe cells is often very small
Use the probe cells P and the constraints to find gradients:
Push selection deeply
Set-oriented processing for probe cells
Iceberg growing from low to high dimensionalities
Dynamic pruning of probe cells during growth
Incorporate an efficient iceberg cubing method