TRANSCRIPT
Application-level Prefetching
CS656 Semester Project
Peixian Li, Jinze Liu, Hexin Wang
Outline
- Motivation
- Solution
- Implementation
- Evaluation
- Test Plan
- Conclusion
Motivation
Goal: get better performance for remote data access over DOS
- Random data access
- Sequential data access
- e.g., video-on-demand data transfer

Problems
- Low-bandwidth network: you wait for your data whenever you need it most
- General-purpose OS: an inappropriate scheduler (round robin) that does not address timing constraints
Solution (I) -- Client Side

Application-level prefetching cache
- Despite the low-bandwidth network, a prefetching cache can reduce data access time
- General data service: history-based prefetching
- Video-on-demand: sequential prefetching
Solution (II) -- Server Side

Admission control
- Avoids server overload
- All admitted tasks can be satisfied under the current scheduling algorithm

Server-level scheduling algorithm
- Isochronous tasks run before general-purpose (GP) tasks
Implementation
- Client/server model
- Client side
  - Prefetch via data compression
  - Cache management
- Server side
  - Admission control
  - Scheduling algorithm
Client/Server Model
[Architecture diagram: each client (Client 1 runs a V/A player, Client 2 a normal application) contains a controller, a prefetch thread, and a cache; the server runs admission control, schedule control, and one service thread per client on top of the file server.]
Peixian...
Prefetch via Data Compression
Based on data compression techniques

Why is data compression useful for prefetching?
- Basic law: represent more common events with shorter codes and less common events with longer codes
- A good compressor must therefore be good at recording history and predicting future data
- Particularly effective for databases and hypertext systems
History-based Prefetch
- We use the Ziv-Lempel algorithm: simple but very effective
- Predicts based on a probabilistic history tree, e.g. "aaaababaabbbab" => (a)(aa)(ab)(aba)(abb)(b)
- Sequential prefetch is used when history is lacking
- The prefetch thread is activated once a request finishes
- The history tree must be rebuilt before it grows too large
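The Ziv-Lempel parsing above can be sketched in a few lines of Python. This is a minimal illustration, not the project's actual code: each phrase is the longest previously seen phrase extended by one new character, and the recorded phrases double as a simple history tree for predicting the next symbol.

```python
from collections import Counter

def lz78_parse(s):
    """Split s into Ziv-Lempel phrases: each phrase is the longest
    previously seen phrase extended by one new character."""
    seen = set()
    phrases = []
    current = ""
    for ch in s:
        current += ch
        if current not in seen:
            seen.add(current)
            phrases.append(current)
            current = ""
    if current:            # trailing characters that repeat an old phrase
        phrases.append(current)
    return phrases

def predict_next(phrases, context):
    """Predict the symbol most likely to follow `context` by counting
    how often each symbol extends `context` in the recorded phrases."""
    counts = Counter(p[len(context)] for p in phrases
                     if len(p) > len(context) and p.startswith(context))
    return counts.most_common(1)[0][0] if counts else None
```

For the slide's example string, `lz78_parse("aaaababaabbbab")` yields the phrases (a)(aa)(ab)(aba)(abb)(b) plus a trailing repeat, and `predict_next(phrases, "a")` suggests prefetching "b", since "a" is followed by "b" more often than by "a" in the history.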
Sequential Prefetch
Two kinds of interfaces provided by the client module
- Hread() is for history-based prefetch
- Sread() is for sequential prefetch
- More kinds of reads can be added, e.g. real-time

When sequential prefetch is used
- No history is needed; only future data needs to be cached
- Semaphores are used to synchronize cache-read and cache-write
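The semaphore synchronization between cache-write (prefetch thread) and cache-read (application) can be sketched as a small producer/consumer pair. This is a hypothetical sketch with invented names (`PrefetchCache`, `cache_write`, `cache_read`), not the project's implementation: a counting semaphore tracks how many blocks are cached, so a reader blocks until the prefetch thread has delivered the next block.

```python
import threading
from collections import deque

class PrefetchCache:
    """Hypothetical sketch: prefetch thread appends blocks, the
    application blocks until data has been cached."""
    def __init__(self):
        self._blocks = deque()
        self._available = threading.Semaphore(0)  # counts cached blocks
        self._lock = threading.Lock()             # protects the deque

    def cache_write(self, block):
        """Called by the prefetch thread after fetching a block."""
        with self._lock:
            self._blocks.append(block)
        self._available.release()   # signal the waiting reader

    def cache_read(self):
        """Called by the application; waits for a prefetched block."""
        self._available.acquire()   # block until cache_write() runs
        with self._lock:
            return self._blocks.popleft()
```

A prefetch thread calling `cache_write("b0")` then `cache_write("b1")` lets the application read the blocks in order, sleeping instead of spinning whenever it gets ahead of the prefetcher.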
Cache Management
Cache size dynamically grows and shrinks
- With a default size and a maximum limit
- To use memory efficiently and provide better performance

LRU replacement algorithm
- Simple but good enough

No consistency issues, since access is read-only
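An LRU-replaced, bounded block cache like the one described is easy to sketch with an ordered dictionary. A minimal illustration (class and parameter names are hypothetical, and the size limit here stands in for the slide's "maximum limit"):

```python
from collections import OrderedDict

class LRUCache:
    """Sketch of a read-only block cache with LRU replacement:
    grows on demand up to max_blocks, then evicts the least
    recently used block."""
    def __init__(self, max_blocks=4):
        self._blocks = OrderedDict()   # block id -> data, oldest first
        self._max = max_blocks

    def get(self, block_id):
        if block_id not in self._blocks:
            return None                          # cache miss
        self._blocks.move_to_end(block_id)       # mark most recently used
        return self._blocks[block_id]

    def put(self, block_id, data):
        self._blocks[block_id] = data
        self._blocks.move_to_end(block_id)
        if len(self._blocks) > self._max:
            self._blocks.popitem(last=False)     # evict the LRU block
```

Because access is read-only, eviction never needs to write anything back, which is why the slide can note that no consistency issue arises.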
Admission Control (I)

Basic assumptions

Isochronous tasks
- Real-time periodic tasks
  - MPEG-1 requires about 1.5 Mbit/s
  - MPEG-2 or MPEG-4 requires about 5-10 Mbit/s
- Require performance guarantees: throughput and bounded latency

General-purpose tasks
- Preemptible
- Suitable for low-priority background processing
Admission Control (II)

To admit a new isochronous task:
- All previously admitted tasks must remain satisfied once the new task is taken into account
- The new task must be satisfiable under the current workload
- High-frequency tasks run before low-frequency tasks
- A periodic task is "satisfied" if it finishes within each period, i.e., real execution time <= period
Admission Control (III)

To admit a new isochronous task, check the utilization test:

    C1/T1 + C2/T2 + ... + Cn/Tn <= 1

where
- n  -- total number of isochronous tasks
- Ci -- execution time per period of task i
- Ti -- period of isochronous task i

Disadvantage: general-purpose tasks may suffer from starvation
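The admission test reduces to a one-line utilization sum. A sketch under the assumption that the test is the utilization bound above (function name hypothetical); note that utilization <= 1 is a necessary condition, and a fixed-priority rate-monotonic scheduler may additionally need a stricter bound or an exact response-time check:

```python
def can_admit(current_tasks, new_task):
    """Utilization-based admission sketch. Tasks are (C, T) pairs:
    execution time per period and period. Admit the new isochronous
    task only if total utilization sum(Ci/Ti), including the new
    task, stays at or below 1."""
    tasks = list(current_tasks) + [new_task]
    return sum(c / t for (c, t) in tasks) <= 1.0
```

With the example on the next slide, Task1 (C=6, T=10) and Task2 (C=6, T=20) give a utilization of 0.6 + 0.3 = 0.9, so Task2 is admitted; a third identical Task2 would push utilization to 1.2 and be rejected.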
Admission Control (IV)

E.g., Task1 (C1 = 6, T1 = 10); Task2 (C2 = 6, T2 = 20)

[Gantt chart: task executions over t = 0..30, one row per task.]
Scheduling Algorithm

- Isochronous requests are scheduled using rate monotonic: the higher the frequency, the higher the priority
- Normal file requests are scheduled with round robin and can be preempted by isochronous tasks
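The two-level policy above can be sketched as a single dispatch function. This is an illustrative sketch with invented structures (`pick_next`, dicts with a `"period"` key), not the server's actual code: isochronous requests always preempt normal ones, rate monotonic picks the shortest period (highest frequency) among isochronous tasks, and normal requests rotate round-robin.

```python
from collections import deque

def pick_next(isochronous, normal):
    """Two-level dispatch sketch: isochronous tasks first (rate
    monotonic: shortest period wins), then normal requests in
    round-robin order."""
    if isochronous:
        # Rate monotonic: the task with the shortest period has the
        # highest frequency, hence the highest priority.
        return min(isochronous, key=lambda task: task["period"])
    if normal:
        req = normal.popleft()   # round robin: serve head,
        normal.append(req)       # then requeue it at the tail
        return req
    return None                  # nothing to schedule
```

For example, an MPEG-1 stream with a shorter period is dispatched before an MPEG-2 stream, and normal file requests are served only when no isochronous request is pending.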
Jinze...
Test Plan
Test programs with different access patterns
- Sequential remote multimedia access
- Simulated tree-like web document access
- Simulated database access
- Random remote file access

Goals
- Test prefetching performance with the different test programs
- Test server performance with concurrent requests from different applications
Evaluation
Performance comparison -- with vs. without prefetching
- Cache hit rate
- Received throughput

Server performance with different tasks
- Correctness of admission control
- Measurement of capacity
Unsolved Problems
- The cache cannot be shared between different applications.
- Cached data is lost when the application program terminates.
- The cache is read-only.
Conclusion
We’ve simulated a client/server model to support application-oriented isochronous prefetching.