
Upload: vinod-deenathayalan

Post on 02-Apr-2015


Page 1: Unit Test 2 CS9213 C.N

Unit Test 2:CS9213-COMPUTER NETWORKS AND MANAGEMENT

1. Define window management techniques used in TCP for congestion control.

Window Management
 Slow start
 Dynamic window sizing on congestion
 Fast retransmit
 Fast recovery
 Limited transmit

Slow Start

awnd = MIN[credit, cwnd]

where
awnd = allowed window in segments
cwnd = congestion window in segments (assumes MSS bytes per segment)
credit = amount of unused credit granted in most recent ack (rcvwindow)

cwnd = 1 for a new connection and is increased by 1 for each ack received during slow start, up to a maximum.

Effect of TCP Slow Start
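The two rules above can be sketched in Python (an illustrative sketch; the function names and the maximum of 64 segments are invented for illustration, not part of the notes):

```python
def allowed_window(credit, cwnd):
    """awnd = MIN[credit, cwnd]: at most awnd unacknowledged
    segments may be outstanding at the sender."""
    return min(credit, cwnd)

def slow_start_cwnd(acks_received, cwnd=1, cwnd_max=64):
    """cwnd starts at 1 for a new connection; during slow start it
    grows by 1 segment per ACK received, up to a maximum.  Since a
    full window of ACKs arrives each round trip, cwnd roughly
    doubles per RTT: 1, 2, 4, 8, ..."""
    for _ in range(acks_received):
        cwnd = min(cwnd + 1, cwnd_max)
    return cwnd
```

Note the exponential effect: growth per ACK, not per RTT, is what makes slow start "not so slow" in practice.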

Page 2: Unit Test 2 CS9213 C.N

Dynamic Window Sizing on Congestion

 A lost segment indicates congestion
 It is prudent (conservative) to reset cwnd to 1 and begin the slow-start process
 This may not be conservative enough: “easy to drive a network into saturation but hard for the net to recover” (Jacobson)
 Instead, use slow start with linear growth in cwnd after reaching a threshold value
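This threshold scheme can be sketched as follows (an illustrative sketch; the name ssthresh for the threshold and the floor of 2 segments are conventional assumptions, not stated in the notes):

```python
def on_ack(cwnd, ssthresh):
    """Per-ACK growth: exponential (+1 per ACK) below the threshold
    (slow start); linear (+1/cwnd per ACK, i.e. ~1 segment per RTT)
    once the threshold is reached (congestion avoidance)."""
    return cwnd + 1 if cwnd < ssthresh else cwnd + 1.0 / cwnd

def on_loss(cwnd):
    """A lost segment indicates congestion: set the threshold to
    half the current window, reset cwnd to 1, and slow-start again
    up to the new threshold."""
    return 1.0, max(cwnd / 2.0, 2.0)
```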

Illustration of Slow Start and Congestion Avoidance

Fast Retransmit

 RTO is generally noticeably longer than the actual RTT
 If a segment is lost, TCP may be slow to retransmit
 TCP rule: if a segment is received out of order, an ack must be issued immediately for the last in-order segment
 Tahoe/Reno Fast Retransmit rule: if 4 acks are received for the same segment (i.e., 3 duplicate acks), it is highly likely the segment was lost, so retransmit immediately rather than waiting for the timeout.

Fast Recovery

 When TCP retransmits a segment using Fast Retransmit, a segment is assumed lost
 Congestion avoidance measures are appropriate at this point, e.g., the slow-start/congestion-avoidance procedure
 This may be unnecessarily conservative, since the multiple duplicate ACKs indicate that segments are actually getting through
 Fast Recovery: retransmit the lost segment, cut the threshold in half, set the congestion window to threshold + 3, and proceed with linear increase of cwnd
 This avoids the initial slow start
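A hedged sketch of the Reno fast-retransmit/fast-recovery reaction (helper names invented; the floor of 2 segments on the threshold is a conventional assumption):

```python
def count_duplicate_acks(acks):
    """Count trailing consecutive ACKs that repeat the previous
    acknowledged sequence number."""
    dups = 0
    for prev, cur in zip(acks, acks[1:]):
        dups = dups + 1 if cur == prev else 0
    return dups

def reno_reaction(cwnd, dup_acks):
    """On 3 duplicate ACKs (4 ACKs for the same segment): retransmit
    immediately, cut the threshold to half of cwnd, and set cwnd to
    threshold + 3 (one segment per duplicate ACK), skipping the
    initial slow start.  Returns (new_cwnd, new_ssthresh, retransmit?)."""
    if dup_acks < 3:
        return cwnd, None, False          # keep waiting (or rely on RTO)
    ssthresh = max(cwnd // 2, 2)
    return ssthresh + 3, ssthresh, True   # True => retransmit now
```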

Limited Transmit

 If the congestion window at the sender is small, fast retransmit may not get triggered, e.g., cwnd = 3
 Under what circumstances does a sender have a small congestion window?

Page 3: Unit Test 2 CS9213 C.N

 Is the problem common?
 If the problem is common, why not reduce the number of duplicate acks needed to trigger retransmit?

Limited Transmit Algorithm

Sender can transmit a new segment when three conditions are met:
 Two consecutive duplicate acks are received
 Destination advertised window allows transmission of the segment
 Amount of outstanding data after sending is less than or equal to cwnd + 2
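The three conditions can be checked directly (a sketch with invented parameter names; all quantities are in segments):

```python
def may_send_new_segment(dup_acks, receiver_window_allows,
                         outstanding_after_send, cwnd):
    """Limited Transmit: with a small cwnd (e.g. cwnd = 3), three
    duplicate ACKs may never arrive, so allow a NEW segment to be
    sent when all three conditions from the notes hold."""
    return (dup_acks == 2                        # two consecutive duplicate ACKs received
            and receiver_window_allows           # advertised window allows the segment
            and outstanding_after_send <= cwnd + 2)  # outstanding data <= cwnd + 2
```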

2. Explain congestion control mechanisms for packet-switching networks.

Send a control packet from a congested node to some or all source nodes. This choke packet will have the effect of stopping or slowing the rate of transmission from sources and hence limits the total number of packets in the network. This approach requires additional traffic on the network during a period of congestion.

Rely on routing information. Routing algorithms provide link delay information to other nodes, which influences routing decisions. This information could also be used to influence the rate at which new packets are produced. Because these delays are being influenced by the routing decision, they may vary too rapidly to be used effectively for congestion control.

Make use of an end-to-end probe packet. Such a packet could be timestamped to measure the delay between two particular end points. This has the disadvantage of adding overhead to the network.

Allow packet switching nodes to add congestion information to packets as they go by. There are two possible approaches here. A node could add such information to packets going in the direction opposite to the congestion. This information quickly reaches the source node, which can reduce the flow of packets into the network. Alternatively, a node could add such information to packets going in the same direction as the congestion. The destination either asks the source to adjust the load or returns the signal back to the source in the packets (or acknowledgments) going in the reverse direction.

3. Explain ABR & GFR Traffic Management

ABR Traffic Management
 CBR, rt-VBR, nrt-VBR: traffic contract with open-loop control
 UBR: best-effort sharing of unused capacity
 ABR: share unused (available) capacity using closed-loop control of source

– Allowed Cell Rate (ACR): current max. cell transmission rate
– Minimum Cell Rate (MCR): network-guaranteed minimum cell rate
– Peak Cell Rate (PCR): max. value for ACR
– Initial Cell Rate (ICR): initial value of ACR

ACR is dynamically adjusted based on feedback to the source in the form of Resource Management (RM) cells

RM cells contain three fields:
– Congestion Indication (CI) bit
– No Increase (NI) bit
– Explicit Cell Rate (ER) field
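A hedged sketch of how an ABR source might react to these fields in a returned RM cell (the rate increase/decrease factors RIF and RDF are assumed parameters, not given in the notes):

```python
def adjust_acr(acr, ci, ni, er, mcr, pcr, rif=1/16, rdf=1/16):
    """CI set -> multiplicative decrease of ACR; CI and NI both
    clear -> additive increase; then clamp to the ER field and to
    the [MCR, PCR] range guaranteed/allowed by the network."""
    if ci:
        acr -= acr * rdf        # congestion indicated: back off
    elif not ni:
        acr += pcr * rif        # no congestion, increase allowed
    acr = min(acr, er)          # never exceed the explicit rate
    return max(mcr, min(acr, pcr))
```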

Flow of Data and RM Cells – ABR Connection


ABR Source Reaction Rules

Variations in Allowed Cell Rate

ABR Capacity Allocation

Two Functions of ATM Switches


– Congestion Control: throttle back on rates based on buffer dynamics
– Fairness: throttle back as required to ensure fair allocation of available capacity between connections

Two categories of switch algorithms

– Binary: EFCI, CI and NI bits
– Explicit rate: use of the ER field

Binary Feedback Schemes
 Single FIFO queue at each output port buffer
– switch issues EFCI, CI, NI based on threshold(s) in each queue
 Multiple queues per port – separate queue for each VC, or group of VCs
– uses threshold levels as above
 Use selective feedback to dynamically allocate fair share of capacity
– switch will mark cells that exceed their fair share of buffer capacity

Explicit Rate Feedback Schemes

Basic scheme at switch is:
1. compute fair share of capacity for each VC
2. determine the current load or degree of congestion
3. compute an explicit rate (ER) for each VC and send it to the source in an RM cell

Several examples of this scheme:
– Enhanced proportional rate control algorithm (EPRCA)
– Explicit rate indication for congestion avoidance (ERICA)
– Congestion avoidance using proportional control (CAPC)

EPRCA

Switch calculates the mean current load on each connection, called the MACR:

MACR(I) = (1 − a) × MACR(I − 1) + a × CCR(I)

Note: typical value for a is 1/16

When queue length at an output port exceeds the established threshold, update ER field in RMs for all VCs on that port as:

ER ← min[ER, DPF × MACR]

where DPF is the down-pressure factor parameter, typically set to 7/8.
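The two EPRCA updates translate directly into code (a sketch; the default parameter values are the typical ones quoted above):

```python
def update_macr(macr_prev, ccr, a=1/16):
    """Exponentially weighted mean current load on a connection:
    MACR(I) = (1 - a) * MACR(I-1) + a * CCR(I); a is typically 1/16."""
    return (1 - a) * macr_prev + a * ccr

def update_er(er, macr, dpf=7/8):
    """Applied when the queue length at an output port exceeds the
    threshold: ER <- min[ER, DPF * MACR]; DPF is typically 7/8."""
    return min(er, dpf * macr)
```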

Effect: lowers the ERs of VCs that are consuming more than their fair share of switch capacity

ERICA

Makes adjustments to ER based on switch load factor:

Load Factor (LF) = Input rate / Target rate

where input rate is averaged over a fixed interval, and target rate is typically 85–90% of link bandwidth

When LF > 1, congestion is threatened, and ERs are reduced by VC on a fair-share basis:
– Fairshare = target rate / number of VCs
– VCshare = CCR / LF
– newER = min[oldER, max[Fairshare, VCshare]]
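The ERICA adjustment, written out as a sketch (rates in cells/s; the function name is invented):

```python
def erica_new_er(old_er, ccr, input_rate, target_rate, num_vcs):
    """When LF = input_rate / target_rate exceeds 1, reduce each
    VC's explicit rate toward max(Fairshare, VCshare), never raising
    it above its old value."""
    lf = input_rate / target_rate
    fairshare = target_rate / num_vcs
    vcshare = ccr / lf
    return min(old_er, max(fairshare, vcshare))
```

The max(Fairshare, VCshare) term keeps a VC that is already below its fair share from being throttled further.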

GFR Traffic Management
 Simple, like UBR


– no policing or shaping of traffic at end-system
– no guaranteed frame delivery
– depends on higher-level protocols (like TCP) for reliable data transfer mechanisms

 Like ABR, provides capacity reservation and traffic contract for QoS
– guaranteed minimum rate without loss
– Specify PCR, MCR, MBS, MFS, CDVT

 Requires that network recognize frames as well as cells
– in congestion, network discards whole frames, not just individual cells

GFR Mechanism

Frame-Based GCRA (F-GCRA)

4. Explain Frame Relay congestion control mechanisms.

Frame Relay Congestion Control


 Minimize frame discard
 Maintain QoS (per-connection bandwidth)
 Minimize monopolization of network
 Simple to implement, little overhead
 Minimal additional network traffic
 Resources distributed fairly
 Limit spread of congestion
 Operate effectively regardless of flow
 Have minimum impact on other systems in network
 Minimize variance in QoS

Frame Relay Techniques

Congestion Avoidance with Explicit Signaling

Two general strategies considered:

 Hypothesis 1: Congestion always occurs slowly, almost always at egress nodes
– forward explicit congestion avoidance

 Hypothesis 2: Congestion grows very quickly in internal nodes and requires quick action
– backward explicit congestion avoidance

Congestion Control: BECN/FECN


FR - 2 Bits for Explicit Signaling

 Forward Explicit Congestion Notification
– For traffic in same direction as received frame
– This frame has encountered congestion

 Backward Explicit Congestion Notification
– For traffic in opposite direction of received frame
– Frames transmitted may encounter congestion

Explicit Signaling Response

 Network Response
– each frame handler monitors its queuing behavior and takes action
– use FECN/BECN bits
– some/all connections notified of congestion

 User (end-system) Response
– receipt of BECN/FECN bits in frame
– BECN at sender: reduce transmission rate
– FECN at receiver: notify peer (via LAPF or higher layer) to restrict flow

Frame Relay Traffic Rate Management Parameters

 Committed Information Rate (CIR)
– Average data rate, in bits per second, that the network agrees to support for a connection

 Data Rate of User Access Channel (Access Rate)
– Fixed rate link between user and network (for network access)

 Committed Burst Size (Bc)
– Maximum data over an interval agreed to by network

 Excess Burst Size (Be)
– Maximum data, above Bc, over an interval that network will attempt to transfer

Committed Information Rate (CIR) Operation


Relationship of Congestion Parameters

Note that T=Bc/CIR
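The relationship among CIR, Bc and Be can be sketched as a classifier over one measurement interval T (an illustrative sketch; the function name and the example numbers are invented):

```python
def frame_relay_action(bits_in_T, bc, be):
    """Over the interval T = Bc/CIR: up to Bc bits are transmitted
    as committed; between Bc and Bc + Be they are transmitted with
    the discard-eligible (DE) bit set; beyond Bc + Be they are
    discarded."""
    if bits_in_T <= bc:
        return "transmit"
    if bits_in_T <= bc + be:
        return "transmit with DE set"
    return "discard"

# Worked example (illustrative numbers): CIR = 64 kbps and
# Bc = 128 kbits give a measurement interval T = Bc/CIR = 2 seconds.
```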

5. Explain types of traffic control functions used in ATM networks & QoS parameters.

Traffic Control and Congestion Functions

Traffic Control Strategy

 Determine whether new ATM connection can be accommodated
 Agree performance parameters with subscriber
 Traffic contract between subscriber and network


 This is congestion avoidance
 If it fails, congestion may occur
– Invoke congestion control

Traffic Control
 Resource management using virtual paths
 Connection admission control
 Usage parameter control
 Selective cell discard
 Traffic shaping
 Explicit forward congestion indication

Resource Management Using Virtual Paths

 Allocate resources so that traffic is separated according to service characteristics
 Virtual path connections (VPCs) are groupings of virtual channel connections (VCCs)

Applications

 User-to-user applications
– VPC between UNI pair
– No knowledge of QoS for individual VCC
– User checks that VPC can take VCCs’ demands

 User-to-network applications
– VPC between UNI and network node
– Network aware of and accommodates QoS of VCCs

 Network-to-network applications
– VPC between two network nodes
– Network aware of and accommodates QoS of VCCs

Resource Management Concerns

 Cell loss ratio
 Max cell transfer delay
 Peak-to-peak cell delay variation
 All affected by resources devoted to VPC
 If VCC goes through multiple VPCs, performance depends on consecutive VPCs and on node performance
– VPC performance depends on capacity of VPC and traffic characteristics of VCCs
– VCC-related function depends on switching/processing speed and priority

Traffic Parameters

 Traffic pattern of flow of cells
– Intrinsic nature of traffic

 Source traffic descriptor
– Modified inside network

Connection traffic descriptor

Source Traffic Descriptor


 Peak cell rate
– Upper bound on traffic that can be submitted
– Defined in terms of minimum spacing between cells, T
– PCR = 1/T
– Mandatory for CBR and VBR services

 Sustainable cell rate
– Upper bound on average rate
– Calculated over large time scale relative to T
– Required for VBR
– Enables efficient allocation of network resources between VBR sources
– Only useful if SCR < PCR

 Maximum burst size
– Max number of cells that can be sent at PCR
– If bursts are at MBS, idle gaps must be enough to keep overall rate below SCR
– Required for VBR

 Minimum cell rate
– Min commitment requested of network
– Can be zero
– Used with ABR and GFR
– ABR & GFR provide rapid access to spare network capacity up to PCR
– PCR − MCR represents elastic component of data flow
– Shared among ABR and GFR flows

 Maximum frame size
– Max number of cells in frame that can be carried over GFR connection
– Only relevant in GFR
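The SCR/MBS constraint above (idle gaps long enough to keep the overall rate at SCR) can be quantified with a small sketch (invented helper; rates in cells/s, result in seconds):

```python
def min_burst_gap(mbs, pcr, scr):
    """A burst of MBS cells at PCR lasts MBS/PCR seconds.  For the
    long-run rate to stay at SCR, each burst cycle must last MBS/SCR
    seconds, so the idle gap between bursts must be at least
    MBS/SCR - MBS/PCR (only meaningful when SCR < PCR)."""
    assert scr < pcr, "SCR is only useful if SCR < PCR"
    return mbs / scr - mbs / pcr
```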

Connection Traffic Descriptor

Includes the source traffic descriptor plus:

Cell delay variation tolerance

Amount of variation in cell delay introduced by network interface and UNI

Bound on delay variability due to slotted nature of ATM, physical layer overhead and layer functions (e.g. cell multiplexing)

Represented by time variable τ

Conformance definition

Specifies the conforming cells of a connection at the UNI

Enforced by dropping or marking cells that exceed the definition

Quality of Service Parameters – maxCTD

Cell transfer delay (CTD)


Time between transmission of first bit of cell at source and reception of last bit at destination

Typically characterized by a probability density function

Fixed delay due to propagation etc.

Cell delay variation due to buffering and scheduling

Maximum cell transfer delay (maxCTD) is the max requested delay for the connection

Fraction α of cells exceed threshold

Discarded or delivered late

Peak-to-peak CDV & CLR

Peak-to-peak Cell Delay Variation

Remaining (1-α) cells within QoS

Delay experienced by these cells is between fixed delay and maxCTD

This is peak-to-peak CDV

CDVT is an upper bound on CDV

Cell loss ratio

Ratio of cells lost to cells transmitted

6. Retransmission timer management techniques in TCP

Retransmission Strategy

 TCP relies exclusively on positive acknowledgements and retransmission on acknowledgement timeout
 There is no explicit negative acknowledgement
 Retransmission required when:
– Segment arrives damaged, as indicated by checksum error, causing receiver to discard segment
– Segment fails to arrive

Timers

A timer is associated with each segment as it is sent

If the timer expires before the segment is acknowledged, the sender must retransmit

Key Design Issue: value of retransmission timer

Too small: many unnecessary retransmissions, wasting network bandwidth


Too large: delay in handling lost segment

Two Strategies
 Timer should be longer than round-trip delay (send segment, receive ack)
 Delay is variable

Strategies:

 Fixed timer
 Adaptive

Problems with Adaptive Scheme

 Peer TCP entity may accumulate acknowledgements and not acknowledge immediately
 For retransmitted segments, can’t tell whether acknowledgement is response to original transmission or retransmission
 Network conditions may change suddenly

Adaptive Retransmission Timer

Average Round-Trip Time (ARTT)

ARTT(K + 1) = [1/(K + 1)] × Σ(i = 1 to K + 1) RTT(i)
            = [K/(K + 1)] × ARTT(K) + [1/(K + 1)] × RTT(K + 1)

RFC 793 Exponential Averaging

Smoothed Round-Trip Time (SRTT)

SRTT(K + 1) = α × SRTT(K) + (1 − α) × RTT(K + 1)

The older the observation, the less it is counted in the average.

RFC 793 Retransmission Timeout

RTO(K + 1) = Min(UB, Max(LB, β × SRTT(K + 1)))

UB, LB: prechosen fixed upper and lower bounds
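The smoothing and timeout formulas can be sketched as follows (the defaults α = 0.85 and β = 1.5 fall within the ranges given below; LB = 1 s and UB = 60 s are illustrative bounds, e.g. the values RFC 793 suggests):

```python
def srtt_update(srtt, rtt, alpha=0.85):
    """RFC 793 exponential averaging:
    SRTT(K+1) = alpha * SRTT(K) + (1 - alpha) * RTT(K+1).
    The older the observation, the less it counts in the average."""
    return alpha * srtt + (1 - alpha) * rtt

def rto(srtt, beta=1.5, lb=1.0, ub=60.0):
    """RTO(K+1) = MIN(UB, MAX(LB, beta * SRTT(K+1)))."""
    return min(ub, max(lb, beta * srtt))
```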

Example values for α, β:

0.8 < α < 0.9, 1.3 < β < 2.0

Implementation Policy Options

 Send
 Deliver
 Accept: in-order, in-window
 Retransmit: first-only, batch, individual
 Acknowledge: immediate, cumulative

Part A

7. What is Back Pressure?

Signals are exchanged between switching elements in adjacent stages so that the generic SE can grant a packet transmission to its upstream SEs only within the current idle buffer capacity.

8. Difference between CBR and real-time variable bit rate (rt-VBR)

Constant Bit Rate (CBR)
 Requires that a fixed data rate be made available by the ATM provider.
 The network must ensure that this capacity is available, and it also polices the incoming traffic on a CBR connection to ensure that the subscriber does not exceed its allocation.

Real-time variable bit rate (rt-VBR)
 The faster rate is guaranteed, but it is understood that the user will not continuously require this faster rate.
 A VBR connection is defined in terms of a sustained rate for normal use and a faster burst rate for occasional use at peak periods.

9. Define BECN and FECN.

In a frame relay network, FECN (forward explicit congestion notification) is a header bit set in frames travelling in the same direction as the congestion; it tells the destination (receiving) terminal that the frame has encountered congestion, so the destination can notify its peer to restrict flow. BECN (backward explicit congestion notification) is a header bit set in frames travelling in the opposite direction; it tells the source (sending) terminal that frames it transmits may encounter congestion, so the source should send data more slowly. FECN and BECN are intended to minimize the possibility that frames will be discarded (and thus have to be resent) when more frames arrive than can be handled.


10. Define Round-trip time (RTT)

Round-Trip Time, or RTT, is the amount of time it takes for a signal to travel from a source to a destination and back to the source. Round-Trip Time is also referred to as Round-Trip Delay. RTT may also be used to find the best possible route. The signal is generally a data packet, and the RTT is also known as the ping time.

11. Define PCR and MCR

Peak cell rate (PCR) is an ATM (Asynchronous Transfer Mode) term describing the rate, in cells per second, that the source device may never exceed. It is an upper bound on the traffic submitted by the source (PCR = 1/T, where T is the minimum cell spacing).

Minimum Cell Rate (MCR) is an ATM ABR service traffic descriptor, in cells per second, giving the minimum rate at which the source is always allowed to send. It is used with ABR and GFR: the source requests a minimum cell rate and gets access to unused capacity up to PCR, with PCR − MCR representing the elastic component of the data flow.

12. What are the effects of congestion?

Effects of congestion are:
 Buffers fill
 Packets discarded
 Sources retransmit
 Routers generate more traffic to update paths
 Good packets resent
 Delays and costs propagate

13. What is the difference between flow control & congestion control?

Flow control means preventing the source from sending data that the sink will end up dropping because it runs out of buffer space. This is fairly easy with a sliding-window protocol: just make sure the source's window is no larger than the free space in the sink's buffer. TCP does this by letting the sink advertise its free buffer space in the window field of the acknowledgements.

Congestion control means preventing (or trying to prevent) the source from sending data that will end up getting dropped by a router because its queue is full. This is more complicated, because packets from different sources travelling different paths can converge on the same queue.

14. What are the congestion control techniques?

 Backpressure
 Policing
 Choke packet
 Implicit congestion signaling
 Explicit congestion signaling
