Lecture 7: Interconnection Network
Part I: Basic Definitions
Part II: Message Passing Multicomputers
Part I: Basic Definitions
A network is characterized by its topology, routing algorithm,
switching strategy, and flow control mechanism.
Basic Definitions
Topology: the physical interconnection structure of the network graph; regular (most parallel machines) or irregular (WANs).
Routing algorithm: determines which routes a message may follow through the network graph.
Switching strategy: determines how the data in a message traverses its route (circuit or packet switching).
Flow control: determines when a message, or a portion of it, moves along its route.
Routing Messages
Shared Media
– Broadcast to everyone
Switched media: needs real routing.
– Circuit switching
– Packet switching
System Architecture
(Figure: system architecture. A processor and memory sit on the system bus; a bus adapter connects it to the I/O bus, which hosts SCSI controllers and a network interface card; the card links over the transmission media to a network switch.)
Shared-Media Networks
Allow only a single message transmission at a time
Bus-based networks: Ethernet, Fast Ethernet
Ring-based networks: IBM Token ring, FDDI
Shared-Media Network
Advantages
– simple design
– less expensive
– simple routing
– natural for broadcast and multicast
– scalable within each segment, though the scaling is limited
Disadvantages
– fixed channel bandwidth
– need a router or gateway to go beyond each segment
– limited distance span
Switched Network
Allow simultaneous transmission of many messages
Typical switch size: 8 to 64 ports
Types of Switches
Cell-based switching: fixed-size packets
– e.g., ATM switches (53-byte cells)
Frame-based (packet-based) switching: variable-size packets
– e.g., switched Ethernet (e.g., HP EtherTwist)
– e.g., FDDI switch (e.g., DEC GIGAswitch)
– e.g., Myrinet switch (wormhole routing)
Generic Switch Architecture
(Figure: a 4-port switch with a logical crossbar organization, a control unit, and input and output buffers.)
Buffer Architecture
Input buffer:
– natural design: FIFO
– random-access buffer (more expensive)
Output buffer: more complicated
– need better performance
– must resolve output contention
Buffer Architecture
Dedicated buffer (one per port):
– ease of routing
– guaranteed service per channel
Shared buffer:
– better buffer utilization
– one bursty channel may take the entire buffer
Other Switch Design
Shared-memory switch: CNET Prelude switch
2-D mesh crossbar: Myrinet, DEC GIGAswitch
Clos network (scalable): multistage networks
Beneš network: Washington Univ. Gigabit ATM Network
Cautions on Speeds
The actual application-level data rate is less than the advertised speed
– ATM: 155 Mbps ==> at most 134 Mbps (about 14% overhead)
Switch delay: from input port to output port
– Myrinet: 100 nsec (8-port)
– ATM: 10-30 microseconds
– WU Gigabit ATM: 10-20 microseconds
Circuit switching
Set up; communicate; release
Circuits are reserved for the communication
Advantage: short delays (after setup)
Disadvantage: inefficient for bursty traffic due to the long setup time.
Packet Switching (Datagram)
Put addresses in packets; route them one by one. The switch determines the path:
– Deterministic: always follow the same path
– Adaptive: pick different paths to avoid congestion or failures
– Randomized routing: pick among several good paths to balance the network load
Adv: efficient; robust against failures
Disadv: delay variations; misordering possible.
Deterministic
A circuit is established from source to destination, and the message follows that circuit
– determined based on the source and destination addresses
– all packets follow the same route.
Adv: efficient; ordered delivery; smaller jitter
Disadv: setup time; not robust; limited scalability.
Deterministic Routing Examples
Mesh: dimension-order routing
– (x1, y1) -> (x2, y2)
– first move x2 - x1 in the x dimension,
– then y2 - y1 in the y dimension
Hypercube: e-cube routing
– X = x0x1x2...xn -> Y = y0y1y2...yn
– R = X xor Y
– traverse the dimensions whose address bits differ, in a fixed order
Tree: route through the common ancestor
(Figure: a 3-dimensional hypercube with nodes labeled 000 through 111.)
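The two schemes above can be sketched in Python (a minimal illustration; the function names `mesh_route` and `ecube_route` are invented for this sketch, not from the lecture):

```python
def mesh_route(src, dst):
    """Dimension-order (XY) routing on a 2-D mesh: correct x first, then y.
    Returns the list of nodes visited, src and dst included."""
    (x, y), (x2, y2) = src, dst
    path = [(x, y)]
    while x != x2:                    # travel along the x dimension first
        x += 1 if x2 > x else -1
        path.append((x, y))
    while y != y2:                    # then along the y dimension
        y += 1 if y2 > y else -1
        path.append((x, y))
    return path

def ecube_route(x, y, n):
    """e-cube routing on an n-dimensional hypercube.
    x and y are integer node labels; R = x XOR y marks the differing
    dimensions, which are corrected one at a time in a fixed order."""
    path = [x]
    r = x ^ y
    for dim in range(n):              # traverse differing dimensions in order
        if r & (1 << dim):
            x ^= (1 << dim)           # flip one address bit per hop
            path.append(x)
    return path

print(mesh_route((0, 0), (2, 1)))     # [(0, 0), (1, 0), (2, 0), (2, 1)]
print(ecube_route(0b000, 0b110, 3))   # [0, 2, 6]: one hop per differing bit
```

Both routes are fully determined by the source and destination addresses, which is exactly what makes the routing deterministic.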
Store and Forward vs. Cut-Through
Store-and-forward:
– each switch waits for the full packet to arrive before sending it to the next switch (good for WANs)
Cut-through:
– the switch examines the header, decides where to send the message, and then starts forwarding it immediately
– two approaches: (1) virtual cut-through, (2) wormhole routing
(Figure: a packet divided into flits: a header flit H, data flits D, and a CRC flit.)
Store and Forward vs. Cut-Through
(Figure: (a) store-and-forward: the whole packet, header H plus data elements a, b, c, is buffered in full at Node 2 and Node 3 on its way from Node 1 (source) to Node 4 (destination); (b) cut-through (wormhole): the header and data elements are pipelined across Node 2 and Node 3 without waiting for the full packet at each hop.)
Switching Mechanisms
Store-and-forward
– buffer each packet
– buffer management
– support link-level ack
– good for networks with high error rate (e.g., WANs)
Cut-through
– small buffer
– low latency
– no link-level ack
– good for networks with very low error rate (e.g., LANs)
(1) Virtual Cut-through
Spools a blocked incoming packet into the input buffer.
The behavior under contention degrades to that of store-and-forward.
Requires a buffer large enough to hold the largest packet.
(2) Wormhole Routing
The packet is subdivided into smaller flits. The header flit knows where the train (packet) is going, and all the data flits follow it. Different packets can be interleaved, but flits from different packets cannot be mixed up. When the head of a message is blocked, it leaves the tail of the message in place along the route, potentially blocking other messages. Needs to buffer only the piece of the packet that is sent between switches.
Performance Comparison
Let
– L = packet length
– W = channel bandwidth
– F = flit size
– D = distance (no. of nodes - 1)
T_store&forward = (L/W)(D+1)
T_wormhole = (L/W) + (F/W) x D
If L >> F: T_wormhole ≈ L/W
Implication: wormhole routing is distance-insensitive
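A quick numeric check of the two formulas above; the parameter values are made up for illustration:

```python
def t_store_and_forward(L, W, D):
    # The whole packet is received and retransmitted at every hop.
    return (L / W) * (D + 1)

def t_wormhole(L, W, F, D):
    # Only the header pays the per-hop cost; the flits pipeline behind it.
    return L / W + (F / W) * D

# Illustrative units: L and F in bits, W in bits per microsecond.
L, W, F = 8192, 1024, 128             # 8192-bit packet, 128-bit flits
for D in (1, 4, 16):
    print(D, t_store_and_forward(L, W, D), t_wormhole(L, W, F, D))
# As D grows, store-and-forward time grows linearly with distance,
# while wormhole time stays close to L/W (8 us here) because L >> F.
```

At D = 16 the store-and-forward latency is 136 us versus 10 us for wormhole, which is the distance-insensitivity the slide points out.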
Store and Forward vs. Cut-Through
Advantage: latency reduces from a function of
(number of intermediate switches) x (size of the packet)
to
(time for the first part of the packet to negotiate the switches) + (packet size ÷ interconnect BW)
Congestion Control
Packet-switched networks do not reserve bandwidth; this leads to contention
Solutions:
– Packet discarding: if a packet arrives at a switch and there is no room in the buffer, the packet is discarded (e.g., UDP)
– Flow control: prevent packets from entering until contention is reduced (e.g., freeway on-ramp metering lights)
Flow Control
Between pairs of senders and receivers; uses feedback to tell the sender when it is allowed to send the next packet
– Back-pressure: separate wires tell the sender to stop
– Window: give the original sender the right to send N packets before getting permission to send more (e.g., TCP)
– Choke packets: each packet received by a busy switch in a warning state is sent back to the source as a choke packet; the source reduces traffic to that destination by a fixed percentage (e.g., ATM)
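The window scheme can be sketched minimally as follows; the `WindowedSender` class and its method names are hypothetical, invented for this illustration:

```python
class WindowedSender:
    """Window-based flow control: at most `window` (N) packets may be
    outstanding (sent but not yet acknowledged) at any time."""

    def __init__(self, window):
        self.window = window          # N: permitted outstanding packets
        self.outstanding = 0

    def can_send(self):
        return self.outstanding < self.window

    def send(self):
        if not self.can_send():
            raise RuntimeError("window full: wait for an acknowledgement")
        self.outstanding += 1

    def ack(self):                    # feedback from the receiver opens the window
        self.outstanding -= 1

s = WindowedSender(window=3)
for _ in range(3):
    s.send()                          # window fills after 3 unacked packets
print(s.can_send())                   # False: must wait for feedback
s.ack()                               # one acknowledgement arrives
print(s.can_send())                   # True: one more packet may be sent
```

The acknowledgement is the feedback path the slide describes: the sender throttles itself purely from what the receiver reports back.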