Hyper-Threading by Intel
DESCRIPTION
This is a seminar on the multi-threading implementation by Intel called Hyper-Threading.
TRANSCRIPT
Introduction
Ways to Enhance Performance:
1. Increase in clock rate
o By reducing clock cycle time
o Performance increases with the number of instructions executed per second
o Hardware limitations restrict this approach
2. Cache hierarchies
o By using cache memories
o Frequently used data is kept in caches
o Reduces average access time
3. Pipelining
o Multiple instructions are overlapped in execution
o Limited by dependencies between instructions
o The basis for multi-threading
Instruction Level Parallelism
o The goal is to increase the number of instructions executed in each clock cycle.
o This requires instructions that can be executed simultaneously, i.e., without dependencies between them.
Thread Level Parallelism
Chip Multi-Processing
o Two or more processors
o Each has a full set of execution and architectural resources
o Both are placed on a single die
Time-Slice Multi-Threading
o Only one processor
o Multiple threads are executed by switching between them
Switch-on-Event Multi-Threading
o Switches threads on long-latency events such as cache misses
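The time-slice idea can be sketched in plain Python. This is an illustrative sketch, not Intel's implementation: CPython's interpreter switches between threads much as a single processor time-slices, so two CPU-bound threads take turns on shared execution resources.

```python
# Sketch: time-slice multi-threading in spirit — two CPU-bound threads
# interleave on one interpreter (CPython's lock forces switching between
# threads, analogous to a single processor switching between threads).
import threading

events = []  # records which thread performed each chunk of work

def worker(name, chunks):
    for _ in range(chunks):
        sum(range(10_000))        # a small burst of CPU work
        events.append(name)       # note that this thread got a turn

t1 = threading.Thread(target=worker, args=("A", 50))
t2 = threading.Thread(target=worker, args=("B", 50))
t1.start(); t2.start()
t1.join(); t2.join()

print(len(events))                # 100 chunks of work in total
print(set(events))                # both threads got turns
```

The order of "A" and "B" entries in `events` varies from run to run, which is exactly the point: the switching is outside the threads' control.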
Hyper-Threading Technology
Hyper-Threading Technology was first introduced by Intel Corp.
It brings the simultaneous multi-threading approach to the Intel architecture.
A single physical processor appears as two or more logical processors.
Provides thread-level parallelism (TLP) on each processor.
TLP results in increased utilization of processor execution resources.
Each logical processor maintains its own copy of the architecture state.
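Because each logical processor has its own architecture state, the operating system schedules it like any other CPU. A quick way to see this, sketched with Python's standard library (note that `os.cpu_count()` reports logical, not physical, processors):

```python
# Sketch: each logical processor appears to the OS as a schedulable CPU.
# os.cpu_count() reports logical processors; on a Hyper-Threading system
# this is typically twice the number of physical cores.
import os

logical = os.cpu_count()
print(f"Logical processors visible to the OS: {logical}")
```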
[Diagram: a processor without Hyper-Threading Technology pairs one architecture state with its execution resources; a processor with Hyper-Threading Technology has two architecture states sharing a single set of execution resources.]
Hyper-Threading Technology Architecture
Sharing of Resources
The major sharing schemes are:
o Partition
o Threshold
o Full Sharing
Partition
o Each logical processor uses half the resources
o Simple and low in complexity
o Ensures fairness and forward progress
o Good for the major pipeline queues
Partitioned Queue Example
Yellow thread – the faster thread
Green thread – the slower thread
Threshold
o Puts a threshold on the number of resource entries a logical processor can use
o Limits maximum resource usage
o Good for small structures where resource utilization comes in bursts and the time of utilization is short, uniform, and predictable
o Example: the processor scheduler
Full Sharing
o The most flexible mechanism for resource sharing; does not limit the maximum resource usage of a logical processor
o Good for large structures in which working-set sizes are variable and there is no fear of starvation
o Example: all processor caches are fully shared
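The three schemes can be contrasted with a toy model of a queue shared by two logical processors. The capacity of 8 entries and the 3/4 threshold are illustrative assumptions, not Intel's actual sizes:

```python
# Toy model (not Intel's implementation) of the three sharing schemes
# for a queue with CAPACITY entries shared by two logical processors.
CAPACITY = 8

def can_allocate(scheme, used_by_me, used_by_other):
    """Return True if this logical processor may take one more entry."""
    if scheme == "partition":   # each LP owns exactly half the entries
        return used_by_me < CAPACITY // 2
    if scheme == "threshold":   # per-LP cap, here assumed 3/4 of the queue
        return (used_by_me < (CAPACITY * 3) // 4
                and used_by_me + used_by_other < CAPACITY)
    if scheme == "full":        # only total capacity limits an LP
        return used_by_me + used_by_other < CAPACITY
    raise ValueError(scheme)

# A busy LP holding 5 entries wants one more while the other LP holds 1:
print(can_allocate("partition", 5, 1))  # False — already over its half
print(can_allocate("threshold", 5, 1))  # True — still under the 6-entry cap
print(can_allocate("full", 5, 1))       # True — queue not yet full
```

The model makes the trade-off visible: partitioning guarantees the other logical processor its half, the threshold bounds a greedy thread, and full sharing maximizes utilization at the risk of one thread crowding the structure.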
Single-Task & Multi-Task Modes
In multi-task (MT) mode both logical processors are active; in single-task mode (ST0 or ST1) only one logical processor is active.
Operating System
For best performance, the operating system should implement two optimizations.
◦ The first is to use the HALT instruction if one logical processor is active and the other is not. HALT allows the processor to transition from MT mode to either the ST0- or ST1-mode.
◦ The second optimization concerns scheduling software threads to logical processors. The operating system should schedule threads to logical processors on different physical processors before scheduling two threads to the same physical processor.
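An application-level analogue of the second optimization can be sketched with `os.sched_setaffinity`, which is Linux-only. Which CPU numbers share a physical core is system-specific (exposed under `/sys/devices/system/cpu/*/topology` on Linux), so any core pairing chosen this way would be an assumption about the machine at hand:

```python
# Sketch (Linux-only): steering work onto chosen logical CPUs with
# os.sched_setaffinity — the application-level counterpart of the OS
# preferring different physical processors for different threads.
import os

if hasattr(os, "sched_getaffinity"):            # Linux only
    pid = 0                                     # 0 = the calling process
    original = sorted(os.sched_getaffinity(pid))
    print("Logical CPUs available:", original)

    # Restrict the process to the first logical CPU, then restore:
    os.sched_setaffinity(pid, {original[0]})
    print("Restricted to:", sorted(os.sched_getaffinity(pid)))
    os.sched_setaffinity(pid, set(original))
else:
    print("sched_setaffinity is not available on this platform")
```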
Business Benefits of Hyper-Threading Technology
Higher transaction rates for e-Businesses
Improved reaction and response times for end-users and customers.
Increased number of users that a server system can support
Handle increased server workloads
Compatibility with existing server applications and operating systems
Limitations
Hyper-Threading technology cannot match dual processors in raw performance.
Its performance-to-cost ratio, however, is much higher: Hyper-Threading does not deliver a 2x speed-up, but neither do dual-CPU systems.
Conclusion
• Intel’s Hyper-Threading Technology brings the concept of simultaneous multi-threading to the Intel Architecture.
• It will become increasingly important going forward as it adds a new technique for obtaining additional performance for lower transistor and power costs.
• The goal was to implement the technology at minimal cost while ensuring forward progress on each logical processor even if the other is stalled, and to deliver full performance when only one logical processor is active.
References
“Hyper-Threading Technology Architecture and Microarchitecture” by Deborah T. Marr, Frank Binns, David L. Hill, Glenn Hinton, David A. Koufaty, J. Alan Miller, Michael Upton, Intel Technology Journal, Volume 06 Issue 01, February 14, 2002, pages 4–15.
“Hyperthreading Technology in the NetBurst Microarchitecture” by David Koufaty, Deborah T. Marr, IEEE Micro, Vol. 23, Issue 2, March–April 2003, pages 56–65.
http://cache-www.intel.com/cd/00/00/22/09/220943_220943.pdf
http://www.cs.washington.edu/research/smt/papers/tlp2ilp.final.pdf
http://mos.stanford.edu/papers/mj_thesis.pdf