Data Traffic LECT_4
1
DATA TRAFFIC
2
Bandwidth is a key concept in many telephony applications.
In radio communications, bandwidth is the range of frequencies occupied by a modulated carrier wave.
In computer networking and computer science, digital bandwidth, network bandwidth, or just bandwidth is a measure of available or consumed data communication resources.
In many signal processing contexts, bandwidth is a valuable and limited resource.
Bandwidth
3
Digital bandwidth is the measure of how much information can flow from one place to another in a given amount of time.
• Two common uses: analog signals and digital signals
• Measured in bps
• A major factor in analyzing a network's performance
4
Pipe Analogy for Bandwidth
5
Highway Analogy for Bandwidth
6
Maximum Bandwidths and Length Limitations

Typical Media                      Max. Theoretical Bandwidth   Max. Physical Distance
50-Ohm coaxial cable (Thinnet)     10-100 Mbps                  185 m
50-Ohm coaxial cable (Thicknet)    10-100 Mbps                  500 m
CAT 5 UTP                          10 Mbps                      100 m
CAT 5 UTP (Fast Ethernet)          100 Mbps                     100 m
Multimode optical fiber            100 Mbps                     2 km
Single-mode optical fiber          1000 Mbps (1 Gbps)           3 km
Wireless                           11 Mbps                      a few hundred meters
7
WAN Services and Bandwidths

Type of Service   Typical User                           Bandwidth
Modem             Individuals                            56 Kbps
ISDN              Telecommuters and small businesses     128 Kbps
Frame Relay       Small institutions and reliable WANs   56 Kbps to 44 Mbps
T1                Larger entities                        1.544 Mbps
T3                Larger entities                        44.736 Mbps
STS-1 (OC-1)      Phone companies/backbones              51.840 Mbps
STS-3 (OC-3)      Phone companies/backbones              155.520 Mbps
STS-48 (OC-48)    Phone companies/backbones              2.488320 Gbps
8
Optical Carrier (OC)
Optical Carrier (OC) levels describe a range of digital signals that can be carried on a SONET fiber-optic network.
The number in the Optical Carrier level is directly proportional to the data rate of the bit stream carried by the digital signal.
The general rule for calculating the speed of Optical Carrier lines: when a specification is given as OC-n, the speed equals n × 51.84 Mbit/s.
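As a quick sanity check on this rule, a minimal sketch (the function name is my own):

```python
# The OC-n rule from the slide: an OC-n line runs at n x 51.84 Mbit/s.
OC_BASE_MBPS = 51.84  # OC-1 base rate

def oc_rate_mbps(n: int) -> float:
    """Return the line rate of an OC-n circuit in Mbit/s."""
    return n * OC_BASE_MBPS

for n in (1, 3, 12, 48, 192):
    print(f"OC-{n}: {oc_rate_mbps(n):.2f} Mbit/s")
```

This reproduces the rates in the tables above, e.g. OC-3 at 155.52 Mbit/s and OC-48 at 2488.32 Mbit/s.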
9
Optical Carrier specifications (in use)
• OC-1
• OC-3 / STM-1
• OC-3c
• OC-12 / STM-4
• OC-24
• OC-48 / STM-16 / 2.5G SONET
• OC-96
• OC-192 / STM-64 / 10G SONET
• OC-768 / STM-256
10
OC-1
OC-1 is a SONET line with transmission speeds of up to 51.84 Mbit/s (payload: 50.112 Mbit/s; overhead: 1.728 Mbit/s) using optical fiber.
This base rate is multiplied for use by other OC-n standards. For example, an OC-3 connection is 3 times the rate of OC-1.
OC-3 / STM-1
OC-3 is a network line with transmission speeds of up to 155.52 Mbit/s (payload: 148.608 Mbit/s; overhead: 6.912 Mbit/s, including path overhead) using fiber optics.
Depending on the system, OC-3 is also known as STS-3 (at the electrical level) and STM-1 (SDH).
When an OC-3 is not multiplexed, but carries the data from a single source, the letter c (standing for concatenated) is appended: OC-3c.
Optical Carrier specifications (in use)
11
Throughput
Refers to the actual, measured bandwidth at a specific time of day, using specific Internet routes, while downloading a specific file.
A major factor in analyzing a network's performance.
12
Bandwidth may refer to bandwidth capacity or available bandwidth in bit/s, which typically means the net bit rate, channel capacity or the maximum throughput of a logical or physical communication path in a digital communication system.
For example, bandwidth test implies measuring the maximum throughput of a computer network.
Bandwidth and Throughput
13
Bandwidth and Throughput
Factors that determine throughput and bandwidth include:
• internetworking devices
• type of data being transferred
• topology
• number of users
• user's computer and server computer
• power and weather-related outages
• congestion
14
File Transfer Time Calculations
15
EXAMPLE
Which would take less time?
•Sending a floppy disk (1.44 MB) full of data over an ISDN BRI Line OR
•Sending a 10GB hard drive full of data over an OC-48 line
16
EXAMPLE (cont.)
T = S/BW
1. S = 1.44 MB, BW = 128 Kbps
   Time = 1.44 × 1000 KBytes × 8 bits / 128 Kbps = 90 seconds
2. S = 10 GB, BW = 2.488320 Gbps
   Time = 10 GBytes × 8 bits / 2.488320 Gbps = 32.15 seconds
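The same T = S/BW arithmetic can be checked in a few lines (decimal units, as the slide uses; the helper name is my own):

```python
# Transfer time T = S/BW, with size in bytes and bandwidth in bits/s.
def transfer_time_seconds(size_bytes: float, bw_bps: float) -> float:
    """Time to move size_bytes over a link running at bw_bps."""
    return size_bytes * 8 / bw_bps

floppy = transfer_time_seconds(1.44e6, 128e3)    # 1.44 MB over ISDN BRI
disk = transfer_time_seconds(10e9, 2.48832e9)    # 10 GB over OC-48
print(f"Floppy over ISDN BRI: {floppy:.0f} s")   # 90 s
print(f"10 GB over OC-48: {disk:.2f} s")         # 32.15 s
```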
17
Overview of Communications
Message: the idea, thought
Source: the brain
Sender: the transmitting device (the mouth)
Channel: the medium the message travels over (air)
Receiver: the receiving device (the ear)
Destination: the brain
18
Basics Of Data Communications
19
Transmission Media
Transmission media refers to the many types of cables and other mediums that carry the signal from the sender to the receiver.
Types of media:
• Guided media
• Unguided media
Transmission Media
20
Guided media are manufactured so that signals will be confined to a narrow path and will behave predictably.
Commonly used guided media include twisted-pair wiring, similar to common telephone wiring; coaxial cable, similar to that used for cable TV; and optical fibre cable.
Revision: Guided Media
21
LAN CABLING
• Widely used are 10BASE-T, 100BASE-TX, and 1000BASE-T (Gigabit Ethernet), running at 10 Mbit/s, 100 Mbit/s (also written Mbps), and 1000 Mbit/s (1 Gbit/s) respectively.
• BASE is short for baseband, meaning that there is no frequency-division multiplexing (FDM) or other frequency-shifting modulation in use; each signal has full control of the wire, on a single frequency.
• The T designates twisted-pair cable, in which the pair of wires for each signal is twisted together to reduce radio-frequency interference and crosstalk between pairs (FEXT and NEXT).
22
Coaxial cable, or coax, is an electrical cable with an inner conductor surrounded by a tubular insulating layer typically of a flexible material with a high dielectric constant, all of which are surrounded by a conductive layer (typically of fine woven wire for flexibility, or of a thin metallic foil), and finally covered with a thin insulating layer on the outside.
The term coaxial comes from the inner conductor and the outer shield sharing the same geometric axis.
Coaxial Cable Systems
23
Advantages:
• cheap to install
• conforms to standards
• widely used
• greater capacity than UTP to carry more conversations (60-1200 speech circuits)
Disadvantages:
• limited in distance
• limited in number of connections
• terminations and connectors must be done properly
Coaxial Cable Systems
24
Coaxial Cable Systems
25
Twisted-pair Cabling
Twisted-pair cabling is a form of wiring in which two conductors (the forward and return conductors of a single circuit) are twisted together for the purpose of canceling out electromagnetic interference (EMI) from external sources: for instance, electromagnetic radiation from unshielded twisted pair (UTP) cables, and crosstalk between neighboring pairs.
26
Advantages of UTP:
• easy installation/termination
• cheap installation
Disadvantages of UTP:
• very noisy
• limited in distance
• suffers from interference
UTP and STP
27
UTP and STP
28
UTP and STP
29
30
31
Straight-through Cable & Crossover Cable
In a straight-through cable, pins on one end correspond exactly to the corresponding pins on the other end (pin 1 to pin 1, pin 2 to pin 2, etc.).
Using the same wiring at each end (a given color wire connects to a given numbered pin, the same at both ends) yields a straight-through cable.
32
Crossover Cable
In a crossover cable, pins do not correspond one-to-one; instead, certain wires are swapped: if pin 1 on one end goes to pin 2 on the other end, then pin 2 on the first end goes to pin 1 on the second end, and not to pin 3 or some other pin. Such crossover cables are symmetric, meaning that they work identically regardless of which way you plug them in (if you turn the cable around, it still connects the same pins as before).
Using different wiring at each end (a given color wire connects to one numbered pin at one end, and a different numbered pin at the other) yields a crossover cable.
33
An electrical cable that connects two devices directly (output of one to input of the other) is also called a crosslink.
It allows devices to communicate without a switch, hub, or router.
Crossover cables are used to connect two computers directly through their NICs, without the use of a hub or switch, or to uplink two or more hubs, switches, or routers.
The term also covers any cable that changes between two different wirings, particularly an Ethernet crossover cable.
Crossover cable
34
35
DEVICE CONNECTIONS THROUGH UTP
Straight-through cable for:
• switch to router
• switch to PC/server
• hub to PC/server
Crossover cable for:
• switch to switch
• switch to hub
• hub to hub
• router to router
• PC to PC
• router to PC
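The wiring rule behind the two cable types can be sketched as a pin map (a simplified illustration for 10/100BASE-T, where only pins 1, 2, 3, and 6 carry signal; the names are my own):

```python
# A straight-through cable maps every pin to the same pin at the far end.
STRAIGHT_THROUGH = {pin: pin for pin in range(1, 9)}

# A crossover cable swaps the transmit and receive pairs:
# pins 1<->3 and 2<->6 for 10BASE-T / 100BASE-TX.
CROSSOVER = dict(STRAIGHT_THROUGH)
CROSSOVER.update({1: 3, 2: 6, 3: 1, 6: 2})

def far_end_pin(cable: dict, near_pin: int) -> int:
    """Which pin a given near-end pin is connected to at the far end."""
    return cable[near_pin]

print(far_end_pin(CROSSOVER, 1))         # -> 3
print(far_end_pin(STRAIGHT_THROUGH, 1))  # -> 1
```

This is why like devices (PC to PC, switch to switch) need the crossover: each side transmits on the same pair, so the cable must swap the pairs for them.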
36
Advantages of microwave systems:
• medium capacity
• medium cost
• can go long distances (stations are located about 30 kilometers apart, in line of sight)
Disadvantages of microwave systems:
• noise interference
• geographical problems due to line-of-sight requirements
• becoming outdated
Micro-wave systems
37
Micro-wave systems
38
Advantages of satellite systems:
• low cost per user (for pay TV)
• high capacity
• very large coverage
Disadvantages of satellite systems:
• high installation cost of launching a satellite
• receive dishes and decoders required
• delays involved in the reception of the signal
Satellite systems
39
Satellite systems
40
Many extremely thin strands of glass or plastic bound together in a sheathing which transmits signals with light beams
Can be used for voice, data, and video
Fiber-optic Cable
41
Advantages of fiber-optic cable:
• high capacity
• immune to interference
• can go long distances
Disadvantages of fiber-optic cable:
• costly
• difficult to join
Fiber-optic Cable
42
DATA TRAFFIC
43
TRAFFIC REGULATING DEVICES
Repeater in a Data Network
Different types of network cabling have their own maximum distance over which they can move a data signal.
When a LAN is extended beyond the maximum run for its particular cabling type, repeaters are used.
A repeater takes the signal it receives from the computers and other devices on the LAN and regenerates it.
44
Problems with Repeaters
Repeaters do not have any capability of directing network traffic or deciding what particular route certain data should take.
They amplify the entire signal they receive, including any line noise.
In the worst-case scenario, they pass on data traffic that is barely distinguishable from the background noise on the line.
Repeaters require some time to regenerate the signal. This causes a propagation delay which can affect network communication when there are several repeaters in a row.
Many network architectures limit the number of repeaters that can be used in a row.
45
A repeater connects two segments of network cable. It retimes and regenerates the signals to proper amplitudes and sends them to the other segment.
When talking about Ethernet topology, we are probably talking about using a hub as a repeater.
Repeaters work only at the physical layer of the OSI network model.
Repeater in Data Network
47
Repeater in Data Network
48
RULE
Between any two nodes on the network there can be a maximum of five segments, connected through four repeaters/concentrators, and only three of the five segments may contain user connections.
LAYER 2 DEVICES AND EFFECTS ON DATAFLOW
NIC (Network Interface Card): connects your computer to the network, provides a MAC address for each connection, and implements the CSMA/CD algorithm.
Bridge: forwards or filters frames by MAC address.
Switch: a multi-port bridge.
NIC
Media Access Control (MAC)
Every computer has a unique way of identifying itself: the MAC address.
MAC addresses are sometimes referred to as burned-in addresses (BIAs) because they are burned into read-only memory (ROM) and are copied into random-access memory (RAM) when the NIC initializes.
Written as 0000.0c12.3456 or 00-00-0c-12-34-56.
The MAC address is also called the physical address; it is located on the Network Interface Card (NIC).
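The two notations shown are the same 48-bit address written differently; a small sketch converting between them (the helper names are my own):

```python
# Convert between the Cisco dotted notation and the dashed notation
# for the same 48-bit MAC address.
def normalize_mac(mac: str) -> str:
    """Strip separators and return the 12 lowercase hex digits."""
    digits = mac.replace(".", "").replace("-", "").replace(":", "").lower()
    assert len(digits) == 12, "a MAC address is 48 bits = 12 hex digits"
    return digits

def to_dashed(mac: str) -> str:
    """Render a MAC address as six dash-separated byte pairs."""
    d = normalize_mac(mac)
    return "-".join(d[i:i + 2] for i in range(0, 12, 2))

print(to_dashed("0000.0c12.3456"))  # -> 00-00-0c-12-34-56
```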
Media Access Control (MAC)
• Ethernet’s MAC performs three functions:
• transmitting and receiving data packets • decoding data packets and checking them
for valid addresses before passing them to the upper layers of the OSI model
• detecting errors within data packets or on the network
Media Access Control (MAC)
[Figure: frames on the wire, each carrying a destination address, source address, and data]
Limitations of MAC
• MAC does not work well in an internetwork.
• It is hardware dependent.
55
Data Collision
A data collision is the simultaneous presence of signals from two nodes on the network.
A collision can occur when two nodes each think the network is idle and both start transmitting at the same time. Both packets involved in a collision are broken into fragments and must be retransmitted.
In an Ethernet network, a collision is the result of two devices on the same Ethernet network attempting to transmit data at exactly the same time. The network detects the "collision" of the two transmitted packets and discards them both.
56
Data Collision
Two methods of resolving the collision problem:
• Collision detection: Carrier Sense Multiple Access with Collision Detection (CSMA/CD)
• Collision avoidance: Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA)
Ethernet uses CSMA/CD as its method of allowing devices to "take turns" using the signal carrier line. When a device wants to transmit, it checks the signal level of the line to determine whether someone else is already using it. If the line is already in use, the device waits and retries, perhaps a few seconds later. If it isn't in use, the device transmits.
However, two devices can transmit at the same time, in which case a collision occurs and both devices detect it. Each device then waits a random amount of time and retries until it succeeds in getting its transmission sent.
• The method is used almost exclusively on star and bus topology networks, e.g. Ethernet.
• Based on a half-duplex protocol.
• Only one workstation can transmit at a time.
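The "wait a random amount of time" step can be sketched as follows, loosely modeled on Ethernet's binary exponential backoff (an illustration only, not the full 802.3 algorithm; the names and the 10 Mbit/s slot time are assumptions of mine):

```python
import random

# After the n-th consecutive collision a station waits a random number of
# slot times drawn from 0 .. 2^n - 1 (the exponent is capped at 10).
SLOT_TIME_US = 51.2  # slot time for 10 Mbit/s Ethernet, in microseconds

def backoff_us(n_collisions: int, rng: random.Random) -> float:
    """Random backoff delay in microseconds after n_collisions collisions."""
    k = min(n_collisions, 10)
    return rng.randrange(2 ** k) * SLOT_TIME_US

rng = random.Random(1)
for n in (1, 2, 3):
    print(f"after collision {n}: wait {backoff_us(n, rng):.1f} us")
```

Doubling the backoff window after each collision is what spreads the retries out as the load (and hence the collision rate) rises.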
Carrier Sense Multiple Access - Collision Detection (CSMA/CD)
To detect whether a collision occurs, a workstation listens to its own transmission: as data is being transmitted, a workstation should hear only its own data being transmitted.
If there is a collision, the data transmission will be corrupted.
To ensure the collision is clearly recognized, a station detecting a collision will send a jamming signal to all stations on the network.
Carrier Sense Multiple Access - Collision Detection (CSMA/CD)
Collision detection problems:
• as concurrent users increase, the number of collisions increases
• lots of retransmissions during peak times
• reduced throughput on the network
• stations have to wait for a collision to take place and then solve the problem
Carrier Sense Multiple Access - Collision Detection (CSMA/CD)
Carrier Sense Multiple Access - Collision Avoidance (CSMA/CA)
Prevents collisions
Station must gain permission before transmitting.
Token Passing in CSMA/CA
A token is passed from station to station.
• Used in token ring networks.
• Token passing ensures that only one station can transmit at any one time.
• A station needs to get an empty token before it can transmit: the station inserts a message into the token, then sends the token on to its destination address.
• There should only ever be one token in circulation on the network.
Token Passing in CSMA/CA
The token contains data in a packet (data, source address, and destination address).
Each station checks an incoming packet's destination address.
When the packet arrives at its destination, it is copied into a buffer and modified to indicate acceptance.
Then the token (still containing the data) is passed on round the loop until it returns to the sender.
The sender is responsible for removing the data from the token and passing the empty token on to the next station.
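The circulation just described can be sketched as a walk around a small ring (a minimal illustration; the station names and helper names are my own):

```python
# One token circulates; the sender inserts a frame, the destination copies
# it and marks acceptance, and the sender strips it when the token returns.
stations = ["A", "B", "C", "D"]

def deliver(src: str, dst: str, payload: str) -> list:
    """Walk the token around the ring once and log what each station does."""
    log = []
    token = None                                 # empty token
    start = stations.index(src)
    order = stations[start:] + stations[:start] + [src]
    for st in order:
        if st == src and token is None:
            token = (src, dst, payload)          # insert the message
            log.append(f"{st}: sent frame to {dst}")
        elif token and st == token[1]:
            log.append(f"{st}: copied frame, marked accepted")
        elif token and st == src:
            token = None                         # sender strips the frame
            log.append(f"{st}: removed frame, token free")
    return log

print(deliver("A", "C", "hello"))
```

Because only the token holder may transmit, there are no collisions, which is the deterministic behavior contrasted with CSMA/CD later in the slides.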
Token Passing in CSMA/CA
Disadvantages:
• complexity of the software needed to maintain token passing
• excessive overheads can reduce performance of the network
Problems:
• What happens when a token disappears (i.e. a station fails and so does not forward the token)?
• If a token disappears, who generates a new token?
• Is it possible for there to be two or more tokens on a ring?
Comparison of Methods
CSMA/CD:
• simple protocol
• high transmission speed (up to 1000 Mbps)
• as traffic increases, collisions increase and retransmissions increase
• non-deterministic: it is not possible to determine exactly when a workstation will be able to transmit without a collision
Token passing:
• performs well under heavy loads
• slower transmission speed (up to 100 Mbps)
• suitable for applications that require uniform response times
• complex software is required
• more expensive to implement
68
Flooding
Flooding most often occurs when so many packets are flowing through the network that regular data cannot be sent at normal speed and in normal fashion.
Flooding can be costly in terms of wasted bandwidth and, as in the case of a ping flood or a denial-of-service attack, it can be harmful to the reliability of a computer network.
Duplicate packets may circulate forever unless certain precautions are taken:
• Use a hop count or a time-to-live (TTL) count and include it with each packet. TTL is a limit on the period of time, or the number of hops or transmissions, that a unit of data (e.g. a packet) can experience before it should be discarded. This value should take into account the number of nodes that a packet may have to pass through on the way to its destination.
• Have each node keep track of every packet seen and only forward each packet once.
Flooding Problems
70
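The two precautions above can be sketched together (a toy topology and helper names of my own; the function counts how many copies get transmitted):

```python
# Flooding with both safeguards: a TTL decremented on every hop, and a
# per-node "seen" record so each node forwards a given packet only once.
links = {"A": ["B", "C"], "B": ["A", "C"], "C": ["A", "B", "D"], "D": ["C"]}

def flood(src: str, packet_id: int, ttl: int) -> int:
    """Flood a packet from src; return how many copies were transmitted."""
    seen = set()                    # (node, packet_id) pairs already handled
    transmissions = 0
    frontier = [(src, ttl)]
    while frontier:
        node, hops_left = frontier.pop()
        if (node, packet_id) in seen or hops_left == 0:
            continue                # duplicate or TTL expired: drop it
        seen.add((node, packet_id))
        for neighbor in links[node]:
            transmissions += 1
            frontier.append((neighbor, hops_left - 1))
    return transmissions

print(flood("A", packet_id=1, ttl=5))
```

With the seen-set in place, each reachable node transmits at most once, so the copy count stays bounded; without it, copies would bounce around the A-B-C loop until the TTL ran out.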
HUBS
Also called a concentrator or multi-port repeater.
Types of hub:
• Passive hub: serves as a physical connection point only. It does not boost or clean the signal and does not need electrical power.
• Active hub: needs power to repeat the signal before passing it out the other ports.
• Intelligent hub: intelligent or smart hubs are active hubs with a microprocessor chip and diagnostic capabilities.
71
•Devices attached to a hub receive all traffic traveling through the hub.
•The more devices there are attached to the hub, the more likely there will be collisions.
•A collision occurs when two or more workstations send data over the network wire at the same time. All data is corrupted when that occurs.
•Every device connected to the same network segment is said to be a member of a collision domain.
HUBS
72
A network bridge connects multiple network segments at the data link layer (Layer 2) of the OSI model.
"Layer 2 switch" is very often used interchangeably with "bridge".
Bridges are similar to repeaters or network hubs.
However, with bridging, traffic from one network is managed rather than simply rebroadcast to adjacent network segments. Bridges can analyze incoming data packets to determine whether the bridge is able to send the given packet to another segment of the network.
Bridges
Bridges tend to be more complex than hubs or repeaters.
Bridging takes place at the data link layer of the OSI model, so a bridge processes the information from each frame of data it receives.
In an Ethernet frame, this provides the MAC address of the frame's source and destination.
Bridges
Bridges:
• connect network segments
• make intelligent decisions about whether to pass signals on to the next segment
• improve network performance by eliminating unnecessary traffic and minimizing the chances of collisions
• divide traffic into segments and filter traffic based on MAC address
• often pass frames between networks operating under different Layer 2 protocols
75
Bridges
76
• If the destination device is on a different segment, the bridge forwards the frame to the appropriate segment.
• If the destination address is unknown to the bridge, the bridge forwards the frame to all segments except the one on which it was received. This process is known as flooding.
Bridges
77
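The forward/filter/flood behavior described above can be sketched as a learning bridge (a minimal illustration; the class and method names are my own):

```python
# A learning bridge: learn each source MAC's port, forward frames for known
# destinations, filter same-segment traffic, and flood unknown destinations.
class LearningBridge:
    def __init__(self, num_ports: int):
        self.num_ports = num_ports
        self.mac_table = {}                     # MAC address -> port

    def handle(self, in_port: int, src: str, dst: str) -> list:
        """Return the list of ports the frame should go out on."""
        self.mac_table[src] = in_port           # learn where src lives
        if dst in self.mac_table:
            out = self.mac_table[dst]
            # same segment: filter the frame (forward nowhere)
            return [] if out == in_port else [out]
        # unknown destination: flood everywhere except the arrival port
        return [p for p in range(self.num_ports) if p != in_port]

bridge = LearningBridge(num_ports=4)
print(bridge.handle(0, src="AA", dst="BB"))   # unknown dst: flood [1, 2, 3]
print(bridge.handle(1, src="BB", dst="AA"))   # learned: forward to [0]
print(bridge.handle(0, src="AA", dst="BB"))   # learned: forward to [1]
```

The same logic, port for port, is what a Layer 2 switch's MAC table implements (discussed later in these slides).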
A network switch is a computer networking device that connects network segments.
The term “Switch” commonly refers to a Network bridge that processes and routes data at the Data link layer (layer 2) of the OSI model.
Switches that additionally process data at the Network layer (layer 3 and above) are often referred to as Layer 3 switches or Multilayer switches.
The term network switch does not generally encompass unintelligent or passive network devices such as hubs and repeaters.
Switches
78
Switches
79
A switch has many ports, with many network segments connected to them. A switch chooses the port to which the destination device or workstation is connected. Ethernet switches are becoming popular connectivity solutions, replacing hubs, because they:
• reduce network congestion
• maximize bandwidth
• reduce collision domain size
Switches
80
LAN and LAN Devices
81
Common LAN Technologies
Non-deterministic ("first come, first served"):
• Ethernet: CSMA/CD
Deterministic ("let's take turns"):
• Token Ring
• FDDI (Fiber Distributed Data Interface): a token-passing, fiber ring network. The fiber-optic media can be multimode fiber, and the ring can be as large as 100 kilometers, with no more than 2 kilometers between nodes.
Common LAN Technologies
Ethernet: logical broadcast topology
Token Ring: logical token ring topology
FDDI: logical token ring topology
83
Common LAN Technologies
[Diagram: deterministic ("taking turns") MAC protocols vs non-deterministic ("first come, first served") MAC protocols; Carrier Sense Multiple Access with Collision Detection (CSMA/CD) is non-deterministic]
LAN Switch
Switches connect LAN segments.
LAN switches are considered multi-port bridges in which each port is its own collision domain.
A switch uses a MAC table to determine the segment on which a frame needs to be transmitted.
Switches often replace shared hubs and work with existing cable infrastructures.
They offer higher speeds than bridges and support new functionality, such as VLANs.
LAN Switch (cont.)
LAN Switch: MAC table
In computer networking, a Media Access Control address (MAC address) is a unique identifier assigned to most network adapters or network interface cards (NICs) by the manufacturer for identification, and used in the Media Access Control protocol sublayer
89
Segment / Segmentation of a Network
A network segment is a portion of a computer network wherein every device communicates using the same physical layer.
In the context of Ethernet networking, the network segment is also known as the collision domain. This comprises the group of devices that are connected to the same bus, that can have CSMA/CD collisions with each other, and that can sniff each other's packets. It also includes devices connected to the same hub, which likewise can have collisions with each other.
In modern switch-based Ethernet configurations, the physical layer is generally kept as small as possible to avoid the possibility of collisions. Thus each segment is composed of only two devices, and the segments are linked together using switches and routers to form one or more broadcast domains.
90
Limiting the Collision Domain: Segmentation
A collision domain is a logical area in a computer network in which data packets can collide with one another.
Why segment LANs?
• Isolate traffic between segments.
• Achieve more bandwidth per user by creating smaller collision domains.
• Extend the effective length of a LAN, permitting the attachment of distant stations.
LANs are segmented by devices such as bridges, switches, and routers.
LAN Segmentation
1 Segmentation with bridges
Bridges increase the latency (delay) in a network by 10-30%.
A bridge is considered a store-and-forward device because it must receive the entire frame and compute the cyclic redundancy check (CRC) before forwarding can take place.
The time it takes to perform these tasks can slow network transmissions, thus causing delay.
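The store-and-forward delay just described can be illustrated with a quick calculation (a rough sketch; it ignores CRC processing time and counts only the time needed to clock the whole frame in):

```python
# A store-and-forward bridge cannot start forwarding until it has received
# the entire frame, so each hop costs at least frame_bits / link_rate.
def store_and_forward_delay_ms(frame_bytes: int, link_bps: float) -> float:
    """Minimum per-hop delay, in milliseconds, to receive a whole frame."""
    return frame_bytes * 8 / link_bps * 1000

# A maximum-size Ethernet frame (1518 bytes) on a 10 Mbit/s segment:
print(f"{store_and_forward_delay_ms(1518, 10e6):.4f} ms per bridge hop")
```

Chaining several bridges therefore adds this delay once per hop, which is the latency increase the slide quotes.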
1 Segmentation with bridges
2 Segmentation with switches
• Allows a LAN topology to work faster and more efficiently.
• Uses bandwidth efficiently.
• Eases bandwidth shortages and network bottlenecks.
• A computer connected directly to an Ethernet switch is its own collision domain and accesses the full bandwidth.
2 Segmentation with switches
3 Segmentation with routers
Routers operate at the network layer.
A router bases all of its forwarding decisions on the Layer 3 protocol address.
This gives routers the ability to make exact determinations of where to send the data packet.
3 Segmentation with routers
100
Wide Area Network (WAN)
A wide area network (WAN) is a computer network that covers a broad area (i.e., any network whose communications links cross metropolitan, regional, or national boundaries).
The largest and most well-known example of a WAN is the Internet.
WANs are used to connect LANs and other types of networks together, so that users and computers in one location can communicate with users and computers in other locations.
Many WANs are built for one particular organization and are private. Others, built by Internet service providers, provide connections from an organization's LAN to the Internet.
WANs are often built using leased lines, circuit switching, or packet switching.
At each end of a leased line, a router connects to the LAN on one side and a hub within the WAN on the other.
Wide Area Network (WAN)
102
Network protocols (including TCP/IP) deliver transport and addressing functions.
Protocols including Packet over SONET/SDH, MPLS, ATM and Frame relay are often used by service providers to deliver the links that are used in WANs.
X.25 was an important early WAN protocol, and is often considered to be the "grandfather" of Frame Relay as many of the underlying protocols and functions of X.25 are still in use today (with upgrades) by Frame Relay.
Typical communication links used in WANs are telephone lines, microwave links & satellite channels.
Wide Area Network (WAN)
103
WAN and WAN Devices
104
Each time a packet is switched from one router interface to another, the packet is de-encapsulated and then encapsulated once again.
Packet propagation and switching within a router
Data terminal equipment (DTE)
DTE is an end instrument that converts user information into signals or reconverts received signals.
A DTE is the functional unit of a data station that serves as a data source or a data sink and provides for the data communication control function
105
A DTE device communicates with the data circuit-terminating equipment (DCE).
DCE is also called "data communications equipment" and "data carrier equipment".
Data circuit-terminating equipment (DCE) is a device that sits between the data terminal equipment (DTE) and a data transmission circuit.
DCE performs functions such as signal conversion, coding, and line clocking, and may be a part of the DTE or intermediate equipment.
106
Data circuit-terminating equipment (DCE)
Interfacing equipment may be required to couple the data terminal equipment (DTE) into a transmission circuit or channel and from a transmission circuit or channel into the DTE.
Usually the DTE device is the terminal (or computer), and the DCE is a modem.
107
Data circuit-terminating equipment (DCE)
A CSU/DSU is a digital-interface device used to connect a DTE (such as a router) to a digital circuit (for example, a T1 or T3 line).
A CSU/DSU operates at the physical layer of the OSI model.
CSU/DSUs are made as separate physical products, or either or both functions may be included as part of an interface card inserted into a DTE.
108
CSU/DSU (Channel Service Unit/Data Service Unit)
When the CSU/DSU is external, the DTE interface is usually compatible with the V.xx or RS-232C or similar serial interface.
Digital lines require both a channel service unit (CSU) and a data service unit (DSU).
The CSU provides termination for the digital signal and ensures connection integrity through error correction and line monitoring. The DSU converts the data encoded in the digital circuit into synchronous serial data for connection to a DTE device.
109
CSU/DSU (Channel Service Unit/Data Service Unit)
110
WAN Serial Connections
TIA (Telecommunications Industry Association)
111
Router
A router is a networking device whose software and hardware are usually tailored to the tasks of routing and forwarding information. For example, on the Internet, information is directed along various paths by routers.
Edge router: placed at the edge of an ISP network; it speaks eBGP (external Border Gateway Protocol).
Subscriber edge router: located at the edge of the subscriber's network; it speaks eBGP to its provider's AS(s). It belongs to an end-user (enterprise) organization.
Inter-provider border router: interconnecting ISPs, this is a BGP-speaking router that maintains BGP sessions with other BGP-speaking routers in other providers' AS(s).
Core router: a router that resides within the middle or backbone of the network rather than at its periphery.
Routers for Internet connectivity and internal use
113
Routers and Serial Connections
114
Fixed Interfaces
115
ERLANG IN PACKET SWITCHING
Example: Suppose a bandwidth of 64 Kbps. If one packet contains 64 bytes, the throughput is 60 percent, and the payload is 90%, how much actual user data must be transferred in one hour to produce one Erlang of traffic?
SOLUTION:
Data transferred in one second = 64 Kbits
Data transferred in one hour = 64 Kbits × 3600 = 230,400 Kbits = 230,400/8 = 28,800 KBytes = 28.8 MB
116
Packets transferred in one hour = 28.8 × 1,000,000 / 64 = 450,000
Packets transferred in one second = 450,000 / 3600 = 125 pps
With 60% throughput, actual data transferred = 125 pps × 0.6 = 75 pps
Finally, actual user data transferred = 75 × 0.9 = 67.5 pps
In one hour, actual user data transferred = 67.5 × 3600 = 243,000 packets
Hence, 243,000 packets must be transferred in one hour to produce one Erlang of traffic.
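The same numbers can be re-derived in a few lines (decimal units, as the slide uses; the variable names are my own):

```python
# Re-derivation of the Erlang example: a fully occupied 64 Kbps line
# carrying 64-byte packets, at 60% throughput and 90% payload.
link_bps = 64_000           # 64 Kbps line
packet_bits = 64 * 8        # 64-byte packets
throughput = 0.60
payload = 0.90

packets_per_sec = link_bps / packet_bits            # 125 pps at line rate
user_pps = packets_per_sec * throughput * payload   # 67.5 pps of user data
user_packets_per_hour = user_pps * 3600             # packets for 1 Erlang

print(packets_per_sec, user_pps, user_packets_per_hour)
```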
ERLANG IN PACKET SWITCHING
117
OBSERVATIONS ON USER TRAFFIC
• Different usage at different times (different peaks)
• Throughput limitations of each individual user
• Observation has revealed that more than 80% of the user time (on average) is idle
• Idle time (gaps) must be filled using various tactics:
  - low rates at off-peak times
  - introduction of new services
• More data can be accommodated using compression techniques
• Data traffic is bursty in nature, so totally different from voice traffic
• Data variations occur over ranges from milliseconds to years
118
TYPICAL NATURE OF DATA TRAFFIC
Data to be sent/received through media having a specific bandwidth can be carried at much lower rates than the actual BW.
The only difference in this case will be the time factor: whenever the throughput is at a minimum, the data will take more time to be transferred, and vice versa.
Queuing system: delay is acceptable (unlike circuit switching), so bursts of traffic can be handled at lower speed. E.g., using a dial-up connection, 8-16 PCs can be connected through a hub/switch, although a dial-up connection has a maximum throughput of 35 Kbps.
119
USERS TRAFFIC CONTINUED…
[Charts: traffic of Users 1-6 plotted individually over time, 0-12 Kbps scale]
120
Traffic per user in Kbps, per half-hour slot:

Time       User1  User2  User3  User4  User5  User6  TOTAL
0000-0030    0      0      1      2      1      3      7
0030-0100    1      1      1      0      0      2      5
0100-0130    0      1      0      3      6      0     10
0130-0200    0      0      6      0      0      1      7
0200-0230    6      2      3      2      0      0     13
0230-0300    0      0      0      0      0      0      0
0300-0330    0      0      0      0      0      0      0
0330-0400    0      1      0      6      0      0      7
0400-0430    0      1      0      0      0      3      4
0430-0500    3      2      0      0      0      6     11
0500-0530    0      0      2      0      0      0      2
0530-0600    0      0      1      6      0      5     12
0600-0630    0      0      1      0      0      0      1
0630-0700    7      0      1      0      0      0      8
0700-0730    0      0      0      1      0      0      1
0730-0800    0     10      1      0      0      0     11
0800-0830    0      2      0      0      0      0      2
0830-0900    0      0      0      0     10      0     10
0900-0930    0      0      0      0      0      2      2
0930-1000    0      0      1      0      6      0      7
1000-1030    0      0      0      0      0     10     10
1030-1100    0      0     10      2      0      0     12
1100-1130    0      0      0      0      0      2      2
1130-1200    5      0      1      0      0      0      6
1200-1230    0      1      0      4      0      0      5
1230-1300    0      0      0      0      5      0      5
1300-1330    0      0      0      0      0      1      1
1330-1400    0      1      0      1      0      0      2
1400-1430    2      0      0      0      0      0      2
1430-1500    0      0      0      0      0      0      0
1500-1530    0      1      0      5      0      0      6
1530-1600    0     10      6      0      0      0     16
1600-1630    0      0      1      0     10      0     11
1630-1700    0      0      0      0      0      0      0
1700-1730    0      2      5      0      0      0      7
1730-1800    0      1      0      0      1      0      2
1800-1830    0      1      0      0      1      0      2
1830-1900    1      0      0      0      0      2      3
1900-1930    0      0      1      6      0      0      7
1930-2000    0      0      0      0      0      0      0
2000-2030    1      0      0      0      0      0      1
2030-2100    0      0      0      0      0      0      0
2100-2130    0      0      0      0      0      0      0
2130-2200    0      0      0      0     10      0     10
[Charts: the per-user traffic above plotted as individual bar charts, 0-12 Kbps scale]
USERS TRAFFIC CONTINUED…
121
USERS CUMULATIVE EFFECT
[Figure: combined traffic of all six users (Kbps versus time)]
122
BASIC DATA TRAFFIC CONSIDERATION
Bandwidth increases from user to service provider, similar to a water supply system:
Small pipes - individual users
Medium pipes - ISPs and corporates with leased lines
Big pipes - backbone service providers
124
PTCL Data Network / Core Network
Class Lecture
125
PACKET SWITCHING VERSUS CIRCUIT SWITCHING
In circuit switching, all the links from source to destination are occupied for the duration of the voice or data transmission, whereas packet switching works on sharing principles.
- In ISDN, E1 or PCM, a 64 Kbps channel carries voice or data over circuit switching.
- In packet switching, the following approaches are used: PVC, SVC, DATAGRAM.
126
Frame Relay differs from X.25 in several aspects.
Most importantly, it is a much simpler protocol that works at the data link layer rather than the network layer.
Frame Relay implements no error or flow control.
The simplified handling of frames leads to reduced latency
Most Frame Relay connections are PVCs rather than SVCs.
Frame Relay provides permanent shared medium bandwidth connectivity that carries both voice and data traffic.
127
X.25 Network
It was designed at a time when transmission systems were unreliable.
Extensive error checking and flow control is exercised at both the data link layer and the network layer.
A copy of each frame is retained by every switch in its buffers and is discarded only after an acknowledgement is received from the next device.
Only one-fourth of the traffic on an X.25 network is actual message data; the rest is reliability overhead.
128
Frame Relay - Advantages
Frame Relay operates at higher speeds, up to 45 Mbps.
Integrates well with modern sophisticated end devices and higher layer protocols
Allows bursty data
Allows frame size of up to 9000 bytes, which can accommodate all types of LAN frames
Less expensive than other traditional WANs
129
Frame Relay versus Pure Mesh T-Line Network
Without proper analysis and calculations, the link will not be able to accommodate an appropriate number of users (e.g. an ISP introducing more and more cards).
131
NUMBER OF USERS VERSUS BANDWIDTH
On average, a user can be accommodated at 3-4 Kbps. Consider a 2 Mbps link, and suppose that on average a user can be accommodated at 5 Kbps to accomplish his job during the peak load hour with some delay.
Ideally speaking, the number of users that can be accommodated simultaneously
= 2 Mbps / 5 Kbps = 400 users
If 25% of the users are on-line simultaneously at the peak hour, then the total number of users for which the 2 Mbps link is sufficient is
= 400 x 4 = 1600 users
If we allow users double the delay, 3200 users can be accommodated. These calculations require historical traffic data.
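As a sanity check, the arithmetic above can be reproduced in a few lines (Python; the 2 Mbps link, 5 Kbps per user, and 25% concurrency figures are the lecture's own):

```python
# Link-dimensioning estimate from the lecture's numbers.
link_bps = 2_000_000          # 2 Mbps link
per_user_bps = 5_000          # average 5 Kbps per user at peak hour

simultaneous = link_bps // per_user_bps      # users on-line at once
total = simultaneous * 4                     # only 25% on-line at peak hour
with_double_delay = total * 2                # tolerate twice the delay

print(simultaneous, total, with_double_delay)   # 400 1600 3200
```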
132
INTERNET TRAFFIC DESIGN PROBLEM
Consider a set-up in which four ISPs each have a 2 Mbps link with the ITI. The following information about the ISPs is given in the table:
- PRIs connected with the central offices
- Two types of customers: ISDN BRI and dial-up
- Average channel occupancy at peak hour
- Average speed per customer in Kbps
- Number of customers on-line
133
TABLE SHOWING DATA

ISPs   PRIs  Link    Customers:      PRI channel     Avg speed per      Customers
             (Mbps)  BRI   Dial-up   occupancy at    customer (Kbps):   on-line:
                                     peak hour %     BRI   Dial-up      BRI   Dial-up
ISP-1  10    2       120   2500      100%            55    25           50    225
ISP-2   8    2       100   1500       70%            50    20           35    120
ISP-3   4    2        80   1000       80%            60    18           20     40
ISP-4   6    2        90   1200       90%            55    30           30    120
134
DESIGN PROBLEMS
1. Calculate the bandwidth requirement for each ISP.
2. Find out which ISPs, under ideal conditions, can handle the traffic at peak hour without any delay.
3. Supposing the buffer at each ISP's end has enough space, find the time taken by each ISP to transmit the given data.
4. If 8:1 compression is used by those ISPs which show a slight delay in data transmission, find the total bandwidth requirement at the ITI side. Also find the maximum number of customers that can be connected on-line through all the ISPs in this set-up.
135
SOLUTION

ISPs   PRIs  Link    Channels  Customers:      Channel     Occupied    Avg speed (Kbps):  Customers on-line:  BW required (Mbps):       Total BW req (Mbps)
       (b)   Mbps    d=b x 30  BRI    Dial-up  occupancy   channels    BRI    Dial-up     BRI    Dial-up      BRI      Dial-up          o = m + n
             (c)     (d)       (e)    (f)      (g)         h=d x g     (i)    (j)         (k)    (l)          (m)      (n)
ISP-1  10    2       300       120    2500     100%        300         55     25          50     225          2.75     5.625            8.375
ISP-2   8    2       240       100    1500      70%        168         50     20          35     120          1.75     2.4              4.15
ISP-3   4    2       120        80    1000      60%         72         60     18          20      40          1.2      0.72             1.92
ISP-4   6    2       180        90    1200      90%        162         55     30          30     120          1.65     3.6              5.25
Solution Cont……

ISPs   Total BW req (Mbps)   Time required (sec)   BW required after 8:1 compression (Mbps)
       o = m + n             (p)                   q = o/8
ISP-1  8.375                 4.1875                1.047
ISP-2  4.15                  2.075                 0.519
ISP-3  1.92                  1.92                  1.920
ISP-4  5.25                  2.625                 0.656
Total BW required at the ITI side (Mbps): 4.142
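The worksheet above can be recomputed directly from the on-line customer counts and per-customer speeds (a Python sketch using the lecture's figures; the time column assumes the 2 Mbps link of the problem statement):

```python
# Reproducing the ISP bandwidth worksheet: o = m + n,
# time = o / 2 Mbps link, q = o / 8 after 8:1 compression.
isps = {
    # name: (BRI on-line, BRI Kbps, dial-up on-line, dial-up Kbps)
    "ISP-1": (50, 55, 225, 25),
    "ISP-2": (35, 50, 120, 20),
    "ISP-3": (20, 60, 40, 18),
    "ISP-4": (30, 55, 120, 30),
}
totals = {}
for name, (bri_n, bri_kbps, dial_n, dial_kbps) in isps.items():
    o = (bri_n * bri_kbps + dial_n * dial_kbps) / 1000   # total BW req, Mbps
    totals[name] = o
    print(name, o, round(o / 2, 4), round(o / 8, 3))     # BW, time (s), compressed BW
```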
139
140
DATA TRAFFIC THEORY
To sustain data traffic through the networks, the following two considerations must be addressed:
CONGESTION CONTROL - try to avoid congestion for the traffic
QUALITY OF SERVICE - try to create an appropriate environment for the traffic
In congestion control we try to avoid traffic congestion; in quality of service, we try to create an appropriate environment for the traffic. So, before talking about congestion control and quality of service, we discuss the data traffic itself.
Traffic Descriptors
Traffic Profiles
142
TRAFFIC DESCRIPTORS
AVERAGE DATA RATE
The ratio of the total bits sent during a period of time to that period (usually in seconds). It indicates the average bandwidth needed by the traffic flow.
PEAK DATA RATE
The maximum data rate that passes through the link during a given observation period. It indicates the peak bandwidth required to carry the traffic without any delay.
143
TRAFFIC DESCRIPTORS
MAXIMUM BURST SIZE
The maximum length of time the traffic is generated at the peak rate. If steady traffic of 1 Mbps gives a spike of 2 Mbps for 1 ms, that spike is the peak data rate; but if the 2 Mbps continues for 60 ms, that is the maximum burst size, and it can be a problem for the network to handle.
EFFECTIVE BANDWIDTH
The bandwidth that the network needs to allocate for the flow of traffic. It is a function of the above three factors and requires complex calculations.
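A toy illustration of how the first three descriptors could be computed from rate samples (the sample values are invented for illustration; real effective-bandwidth calculations are far more involved):

```python
# Per-millisecond rate samples (Mbps, illustrative values).
rates = [1, 1, 1, 2, 2, 2, 1, 1]

average_rate = sum(rates) / len(rates)   # average data rate
peak_rate = max(rates)                   # peak data rate

# Maximum burst size: longest run of samples at the peak rate.
burst, longest = 0, 0
for r in rates:
    burst = burst + 1 if r == peak_rate else 0
    longest = max(longest, burst)

print(average_rate, peak_rate, longest)  # 1.375 2 3  (a 3 ms burst at peak)
```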
TRAFFIC PROFILES
Depending on the traffic data rates, data traffic is divided into three profiles:
1-CONSTANT BIT RATE (FIXED RATE)
2- VARIABLE BIT RATE (VBR)
3- BURSTY
TRAFFIC PROFILES Continued….
1- CONSTANT BIT RATE (FIXED RATE)
Average and peak data rates are almost same.
Maximum burst size not applicable
Predictable...so easy to handle
Bandwidth allocation is easier to determine
148
TRAFFIC PROFILES Continued….
2- VARIABLE BIT RATE (VBR)
Rate changes over time
Smooth changes, not sharp/sudden
Average and peak data rates are different
Maximum burst size is usually a small value
More difficult to handle than CBR
Normally does not require reshaping
149
TRAFFIC PROFILES Continued….
3- BURSTY
Data rate changes abruptly
May jump from zero to many Mbps in a few microseconds
Average and peak bit rates are quite different
Maximum burst size is a significant value
Being an unpredictable profile, it is the most difficult to handle
Traffic reshaping techniques are required
It is the main cause of congestion
150
CONGESTION
Congestion occurs when the number of packets sent on the network (the load) becomes greater than the network capacity (the throughput).
CONGESTION CONTROL involves techniques and mechanisms to keep the load below the maximum capacity of the network.
CONGESTION HAPPENS whenever a system involves waiting: e.g. a road traffic accident, or an overloaded road during rush hours, creates blockage.
IN NETWORKS, congestion happens when networking devices encounter packet queues in their buffers before and after processing.
151
The router in the figure has an input queue and an output queue for each interface. A packet arriving at an input interface undergoes three main processes:
1. It is put at the end of the input queue to be checked on its turn.
2. On its turn, the router checks the destination address in the packet to find the appropriate interface for routing, using the routing table.
3. It is put into the output queue of the appropriate interface and waits for its turn to be sent.
CONGESTION
152
A- If the rate of packet arrival is higher than the packet processing rate, the input queue becomes longer and longer.
B- If the packet departure rate is less than the packet processing rate, the output queue becomes longer and longer.
CONGESTION
153
NETWORK PERFORMANCE
Congestion control involves two factors that measure the performance of a network:
Delay versus Load
Throughput versus Load
NETWORK PERFORMANCE _ 1. DELAY VERSUS LOAD
When the load is much less than the capacity, delay is negligible, comprising only propagation delay and processing delay.
When the load reaches the network capacity, delay increases sharply: the queue waiting times of all the routers in the path are added.
When the load is greater than the capacity, delay becomes infinite: queues grow longer and longer, buffers fill up, and packets are lost.
NETWORK PERFORMANCE _ 1. DELAY VERSUS LOAD
Delay has a negative effect on load: delay results in CONGESTION.
Retransmissions after acknowledgement time-outs add more traffic and more congestion.
156
NETWORK PERFORMANCE _ 2. THROUGHPUT VERSUS LOAD
Throughput is the number of bits passing through a point per second; here, replace "bits" by "packets" and "point" by "network".
When the load is below capacity, throughput is directly proportional to the load.
When the load reaches and exceeds capacity, throughput would be expected to remain constant, but it instead declines sharply. The reason is the discarding of packets by routers: discarding a packet does not reduce the number of packets offered to the network, because the dropped packets are retransmitted.
157
NETWORK PERFORMANCE _ 2. THROUGHPUT VERSUS LOAD
24.158
CONGESTION CONTROL
Congestion control refers to techniques and mechanisms that can either prevent congestion before it happens, or remove congestion after it has happened. In general, we can divide congestion control mechanisms into two broad categories: open-loop congestion control (prevention) and closed-loop congestion control (removal).
159
CONGESTION CONTROL
Topics discussed in this section:
Open-Loop Congestion Control
Closed-Loop Congestion Control
OPEN-LOOP CONGESTION CONTROL (PREVENTION)
Policies applied to prevent congestion before it happens. Here, congestion control is usually handled by either the source or the destination.
CLOSED-LOOP CONGESTION CONTROL (REMOVAL)
Techniques applied to remove congestion after it occurs. Here, congestion control is handled by the source/destination or any device along the way.
Congestion control categories
161
OPEN-LOOP CONGESTION CONTROL
1- RETRANSMISSION POLICY: the retransmission rules applied can be significant; long- versus short-duration timers.
2- WINDOW POLICY: Selective Repeat versus Go-Back-N windows; constant versus variable window sizes during a session; sliding windows.
162
OPEN-LOOP CONGESTION CONTROL Continued….
3- ACKNOWLEDGEMENT POLICY: imposed by the receiver; delaying acknowledgements can prevent congestion.
4- DISCARDING POLICY: an appropriate discarding policy may prevent congestion; less sensitive packets can be discarded without compromising the quality of the transmission, e.g. in audio or video transmissions.
5- ADMISSION POLICY: before admitting a flow to the network, the device checks its resource requirements; a quality-of-service issue; an access-list/packet-filtering issue.
Congestion control categories
164
CLOSED-LOOP CONGESTION CONTROL
1- BACK PRESSURE: a congested router informs the previous router to reduce its rate.
This process can be recursive all the way back toward the source, so many routers might be involved.
Backpressure method for alleviating congestion
165
CLOSED-LOOP CONGESTION CONTROL
2- CHOKE PACKET: a packet sent by a router back to the source to inform it of the congestion; similar to the ICMP source quench message.
Choke packet
166
CLOSED-LOOP CONGESTION CONTROL
3- IMPLICIT SIGNALLING: the source detects an implicit signal of congestion, e.g. a mere delay in receiving an acknowledgement can be a sign of congestion (rather than of packet corruption), as in TCP congestion control.
4- EXPLICIT SIGNALLING: a router experiencing congestion sends a signal to the source or destination by setting a bit in the packet (as in Frame Relay).
167
CLOSED-LOOP CONGESTION CONTROL
Backward Signaling: a bit is set in packets travelling in the direction opposite to the congestion; the source is informed to slow down; this avoids congestion and the discarding of packets.
Forward Signaling: a bit is set in packets travelling in the direction of the congestion; the destination is informed to apply a policy, e.g. appropriately delaying its acknowledgements; this avoids congestion and the discarding of packets.
24.168
TWO EXAMPLES
To better understand the concept of congestion control, let us give two examples: one in TCP and the other in Frame Relay.
Topics discussed in this section:
Congestion Control in TCP
Congestion Control in Frame Relay
169
CONGESTION CONTROL IN TCP
BACKGROUND
An internetwork is a combination of networks and connecting devices, and a packet may pass through many routers from source to destination. If a router receives packets more rapidly than it can process them, congestion may occur, resulting in dropped packets. Dropped packets have no acknowledgements, so the sender must retransmit the lost packets. This creates more traffic, thus more congestion and more packet loss; as a result, the system may collapse.
TCP ASSUMES THAT THE CAUSE OF LOST PACKETS IS CONGESTION IN THE NETWORK.
170
TRAFFIC CONTROL MECHANISM IN TCP/IP
Windowing
A method of controlling the amount of information transferred end to end
Information can be measured in terms of the number of packets or the number of bytes
TCP window sizes are variable during the lifetime of a connection.
Larger window sizes increase communication efficiency.
171
Simple Windowing
TCP Full-Duplex Service: Independent Data Flows
TCP provides full-duplex service, which means data can be flowing in each direction, independent of the other direction. Window sizes, sequence numbers and acknowledgment numbers are independent for each data flow.
The receiver sends an acceptable window size to the sender with each segment transmission (flow control): if too much data is being sent, the acceptable window size is reduced; if more data can be handled, it is increased.
This is known as a stop-and-wait windowing protocol.
172
Sliding Window Protocol
The sliding window algorithm is a method of flow control for network data transfers, using the receiver's window size.
The sender computes its usable window, which is how much data it can immediately send.
Over time, this sliding window moves to the right as the receiver acknowledges data.
The receiver sends acknowledgements as its TCP receive buffer empties.
[Figure: sliding windows — octets sent but not yet ACKed, the usable window (can send ASAP), and the initial versus working window sizes]
173
The terms used to describe the movement of the left and right edges of this sliding window are:
- The left edge closes (moves to the right) when data is sent and acknowledged.
- The right edge opens (moves to the right), allowing more data to be sent; this happens when the receiver acknowledges a certain number of bytes received.
- The middle edge moves to the right as data is sent but not yet acknowledged.
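The window-edge movements described above can be sketched as a toy sender state machine (Python; the class name, field names and window size of 6 are illustrative, not a real TCP implementation):

```python
# Toy model of TCP-style sliding-window bookkeeping (illustrative only).
class SlidingWindow:
    def __init__(self, window_size):
        self.window = window_size   # receiver-advertised window
        self.una = 1                # oldest unacknowledged octet (left edge)
        self.nxt = 1                # next octet to send (middle edge)

    def usable(self):
        # Octets we may still send: window minus what is in flight.
        return self.window - (self.nxt - self.una)

    def send(self, n):
        assert n <= self.usable()
        self.nxt += n               # middle edge moves right

    def ack(self, ack_no):
        self.una = max(self.una, ack_no)   # left edge closes

w = SlidingWindow(6)
w.send(3)              # send octets 1-3
print(w.usable())      # 3 octets still usable
w.ack(4)               # receiver expects octet 4
print(w.usable())      # 6 — the window has slid forward
```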
174
[Figure: sliding-window example over octets 1-13, window size 6 — octets 1, 2 and 3 in flight]
Host A - Sender, Host B - Receiver
Host B gives Host A a window size of 6 (octets or bytes). Host A begins by sending octets 1, 2 and 3 to Host B and slides its window over, showing that it has sent those 3 octets. Host A will not increase its usable window by 3 until it receives an acknowledgement from Host B that it has received some or all of the octets.
Host B, not waiting for all 6 octets to arrive, sends an expectational acknowledgement of "4" to Host A after receiving the third octet.
[Figure: window of size 6 — octets 1-3 sent but not ACKed, usable window of 3, ACK 4 returning]
175
More sliding windows
[Figure: Host A (sender) continues transmitting octets 4-9 to Host B (receiver) as ACK 4 and ACK 6 arrive, sliding the window of size 6 along octets 1-13]
176
ACKNOWLEDGMENT
Positive acknowledgment requires the recipient to communicate with the source, sending back an acknowledgment message when it receives data.
The sender keeps a record of each data packet it sends and expects an acknowledgment.
The sender also starts a timer when it sends a segment, and retransmits the packet if the timer expires before an acknowledgement arrives.
The acknowledgement is expectational: it names the next octet the receiver expects.
177
WINDOWING
Windowing is the process in which a particular amount of data is allowed to be sent by the source before it receives an acknowledgement from the destination. The sender's window size is determined by the receiver's available buffer space.
HERE THE NETWORK IS TOTALLY IGNORED.
Thus the window size must depend on both the receiver and the network:
Receiver's capacity (receiver window size)
Congestion in the network (congestion window size)
THE ACTUAL WINDOW SIZE IS THE MINIMUM OF THE TWO.
24.178
In the slow-start algorithm, the size of the congestion window increases exponentially until it reaches a threshold.
Note
24.179
In the congestion avoidance algorithm, the size of the congestion window increases additively until
congestion is detected.
Note
24.180
An implementation reacts to congestion detection in one of the following ways:❏ If detection is by time-out, a new slow start phase starts.
❏ If detection is by three ACKs, a new congestion avoidance phase starts.
Note
181
CONGESTION CONTROL (EXAMPLE-1)
SLOW START
At the beginning of a connection, TCP sets the congestion window size to one segment. After each acknowledgement, the window size is doubled. The process continues until one half of the maximum window size is reached; this point is called the threshold. The name is somewhat misleading, because the growth is exponential and does not seem slow at all.
ADDITIVE INCREASE
This phase is introduced to avoid congestion before it happens. It starts when the window size reaches one half of the maximum window size (the threshold); from there, the window size is increased by one segment for each acknowledgement.
This process continues until one of the following happens:
1. No acknowledgement is received and the time-out is reached.
2. The congestion window reaches the receiver window size.
182
CONGESTION CONTROL (continued)
MULTIPLICATIVE DECREASE
If congestion occurs, the window size is immediately decreased. Congestion is sensed through the time-out of an acknowledgement. With developments in transmission media (nearly noise-free), a missing packet is far more likely to have been lost than corrupted. When a time-out occurs, the threshold is set to one half of the last congestion window size, and the congestion window starts from one again (slow-start phase).
[Figure: congestion window size versus number of acknowledgements]
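The slow start / additive increase / multiplicative decrease cycle described in these slides can be simulated in a few lines (Python; the receiver window of 32 segments and the time-out round are illustrative assumptions):

```python
# Toy TCP congestion-window trace: slow start (exponential) up to the
# threshold, then additive increase, then multiplicative decrease on time-out.
def cwnd_trace(rwnd=32, timeout_at=8):
    cwnd, ssthresh = 1, rwnd // 2
    trace = []
    for rnd in range(timeout_at + 3):
        trace.append(cwnd)
        if rnd == timeout_at:            # time-out: congestion detected
            ssthresh = cwnd // 2         # threshold = half of last cwnd
            cwnd = 1                     # back to slow start
        elif cwnd < ssthresh:
            cwnd *= 2                    # slow start: double per ACK round
        else:
            cwnd = min(cwnd + 1, rwnd)   # additive increase, capped by rwnd

    return trace

print(cwnd_trace())   # [1, 2, 4, 8, 16, 17, 18, 19, 20, 1, 2]
```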
183
CONGESTION CONTROL (EXAMPLE-2)
FRAME RELAY
Congestion in Frame Relay decreases throughput and increases delay, whereas the goals of Frame Relay are the reverse. Frame Relay has no flow control and allows the user to send bursty data; thus it has the potential to face congestion.
CONGESTION CONTROL in Frame Relay is done through two bits that explicitly warn the source and destination of the congestion:
1- FECN (Forward Explicit Congestion Notification)
2- BECN (Backward Explicit Congestion Notification)
184
(EXAMPLE-2) Continued
BECN warns the sender of congestion in the network. How is the sender warned?
1- The switch uses a response frame from the destination.
2- The switch uses a predefined connection (DLCI 1023) to send a special frame for this purpose.
In response, the sender reduces its data rate.
185
(EXAMPLE-2) Continued
FECN warns the destination (receiver) of congestion in the network. What can the receiver do? Frame Relay assumes that the sender and receiver use some flow control at the higher layers, like acknowledgements at the TCP layer. When the receiver sees the FECN bit set, it starts delaying its acknowledgements, forcing the sender to slow down.
186
(EXAMPLE-2) Continued
FRAME RELAY IN FULL DUPLEX
Four situations regarding congestion can occur in Frame Relay; the FECN and BECN values are used as follows.
QUALITY OF SERVICE
QoS DEFINITION
Something a FLOW seeks to attain.
The allocation of appropriate and sufficient resources to the data of different applications running through the various links of a network.
Satisfactory fulfilment of customers' demands.
FLOW CHARACTERISTICS
FLOW CHARACTERISTICS Continued..
1- RELIABILITY
The basic characteristic that a flow requires; low reliability means loss of packets.
Sensitivity to reliability varies from application to application: e.g. file transfer or Internet access requires more reliability than telephony or audio conferencing.
FLOW CHARACTERISTICS Continued..
2- DELAY
Measured from source to destination, including NICs, propagation, and the devices in between.
The tolerance level differs for different applications: real-time traffic cannot afford delay, but e-mail, file transfer, browsing etc. can tolerate it.
FLOW CHARACTERISTICS Continued..
3- JITTER
The variation in delay among packets belonging to the same flow.
Real-time audio/video cannot tolerate jitter: if the first 3 packets face a delay of 1 ms and the 4th packet faces a delay of 60 ms, it is unacceptable.
For applications that can afford delay and jitter, the transport layer waits and rearranges packets before delivering them to the upper layers.
FLOW CHARACTERISTICS Continued..
4- BANDWIDTH
Varies among applications; high bandwidth is required for real-time applications.
Throughput is the measure of practical bandwidth.
EXAMPLE
Consider the example of a routing protocol and examine how it calculates the best flow path, through which traffic maintains a steady flow. A routing protocol configured on the router selects the best path to the destination and routes the packet to the appropriate interface.
There are various routing protocols:
1- RIP (Routing Information Protocol)
2- OSPF (Open Shortest Path First)
3- IGRP (Interior Gateway Routing Protocol)
4- EIGRP (Enhanced Interior Gateway Routing Protocol)
5- IS-IS (Intermediate System to Intermediate System)
EXAMPLE Continued…
A routing protocol can find the best flow path on the basis of various attributes:
HOP COUNT, BANDWIDTH, DELAY, LOAD, RELIABILITY
EXAMPLE Continued…
In this example, let us consider EIGRP (Cisco proprietary) as the routing protocol. It can consider bandwidth, delay, load and reliability to find the best traffic route; by default, it considers only bandwidth and delay. It keeps three types of tables in the router's RAM:
1- Neighbor table (information from neighbors)
2- Topology table (all known routes to every destination)
3- Routing table (best route to every destination)
The best route to any destination is the one with the lowest cost metric.
Metric Calculation (Review)
EIGRP K-values:
- k1 for bandwidth
- k2 for load
- k3 for delay
- k4 and k5 for reliability
Router(config-router)# metric weights tos k1 k2 k3 k4 k5
(bandwidth is in kbps)
Displaying Interface Values
Reliability is shown as a fraction of 255 (higher is better), for example:
rely 190/255 (or 74% reliability), rely 234/255 (or 92% reliability), rely 255/255 (or 100% reliability)
Load is shown as a fraction of 255 (lower is better), for example:
load 10/255 (or 3% loaded link), load 40/255 (or 16% loaded link), load 255/255 (or 100% loaded link)

Router> show interface s0/0
Serial0/0 is up, line protocol is up
  Hardware is QUICC Serial
  Description: Out to VERIO
  Internet address is 207.21.113.186/30
  MTU 1500 bytes, BW 1544 Kbit, DLY 20000 usec, rely 255/255, load 246/255
  Encapsulation PPP, loopback not set
  Keepalive set (10 sec)
<output omitted>
(The BW, DLY, rely and load fields show the bandwidth, delay, reliability and load values used by EIGRP.)
Media                 Bandwidth (K = kilobits)   BWEIGRP = (10,000,000/Bandwidth) x 256   Delay (µs)   DLYEIGRP = (Delay/10) x 256
100M ATM              100,000K                   25,600                                   100          2,560
Fast Ethernet         100,000K                   25,600                                   100          2,560
FDDI                  100,000K                   25,600                                   100          2,560
HSSI                  45,045K                    56,832                                   20,000       512,000
16M Token Ring        16,000K                    160,000                                  630          16,128
Ethernet              10,000K                    256,000                                  1,000        25,600
T1 (Serial Default)   1,544K                     1,657,856                                20,000       512,000
512K                  512K                       4,999,936                                20,000       512,000
DS0                   64K                        40,000,000                               20,000       512,000
56K                   56K                        45,714,176                               20,000       512,000

BWEIGRP and DLYEIGRP display the values as sent in EIGRP updates and used in calculating the EIGRP metric.
EIGRP Metrics: the values displayed in show interface commands are sent in routing updates; the calculated (cumulative) values are displayed in the routing table (show ip route).
The Routing Table
Administrative Distance / Metric
How does SanJose2 calculate the cost for this route?

SanJose2# show ip route
D 192.168.72.0/24 [90/2172416] via 192.168.64.6, 00:28:26, Serial0
[Figure: EIGRP AS 100 topology — Westasman, SanJose1 and SanJose2 connected by serial links (192.168.64.0/30 subnets) and FastEthernet LANs (192.168.72.1/24, 192.168.1.1/24, 192.168.1.2/24)]
Determining the costs
Bandwidth = (10,000,000 / bandwidth in kbps) x 256
Fast Ethernet: (10,000,000 / 100,000) x 256 = 25,600
T1: (10,000,000 / 1544) x 256 = 1,657,856
(On the Westasman-SanJose1-SanJose2 path: T1 link Bandwidth = 1,657,856, Delay = 512,000; Fast Ethernet link Bandwidth = 25,600, Delay = 2,560)
Determining the costs
Delay = (delay in µs / 10) x 256
Fast Ethernet: (100 / 10) x 256 = 2,560
T1: (20,000 / 10) x 256 = 512,000
What is the cost (metric) for 192.168.72.0/24 from SanJose2?
Cost = slowest bandwidth along the path + sum of the delays:
  Slowest bandwidth (the T1 link):  1,657,856
  Delay (T1):                         512,000
  Delay (Fast Ethernet):                2,560
  -------------------------------------------
  Metric:                           2,172,416
where bandwidth = (10,000,000 / bandwidth in kbps) x 256 and delay = (delay / 10) x 256.

The Routing Table — Administrative Distance / Metric
SanJose2# show ip route
D 192.168.72.0/24 [90/2172416] via 192.168.64.6, 00:28:26, Serial0
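The default EIGRP composite metric computed above can be sketched in a few lines (Python; only the default K-values k1 = k3 = 1 are modeled, and the link figures are the T1 and Fast Ethernet values from the example):

```python
# Default EIGRP metric: slowest-link bandwidth term + cumulative delay term.
def eigrp_bw(bandwidth_kbps):
    return (10_000_000 // bandwidth_kbps) * 256

def eigrp_delay(delay_usec):
    return (delay_usec // 10) * 256

# Path SanJose2 -> Westasman: one T1 hop plus one Fast Ethernet segment.
slowest_bw = max(eigrp_bw(1544), eigrp_bw(100_000))  # larger value = slower link
total_delay = eigrp_delay(20_000) + eigrp_delay(100)
metric = slowest_bw + total_delay

print(metric)   # 2172416 — matching [90/2172416] in the routing table
```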
QoS IMPROVEMENT
Various techniques are used to improve QoS:
1- Scheduling
2- Traffic shaping
3- Admission control
4- Resource reservation
QoS IMPROVEMENT
1- SCHEDULING
Packets arrive from various flows at a switch/router for processing. Scheduling treats the packets according to some rule or technique that improves QoS. Common scheduling techniques are:
a) FIFO queuing
b) Priority queuing
c) Weighted fair queuing
a) FIFO QUEUING: packets are treated on a first-come, first-served basis. If the packet arrival rate is greater than the packet processing rate, the queue fills up and soon packets start being discarded.
b) PRIORITY QUEUING: better than FIFO; packets are assigned priority classes, each priority class has its own queue, and the highest-priority queue is served first; the system moves on only when that queue is empty. This can lead to STARVATION: with a continuous arrival of packets in the high-priority queue, a low-priority queue may never be served, resulting in discarded packets.
QoS IMPROVEMENT
c) WEIGHTED FAIR QUEUING: still class-based, but the classes are assigned weights, and packets are served from each queue according to its weight.
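A toy weighted-fair-queuing round can be sketched as weighted round-robin over per-class queues (Python; the class names, weights and packet labels are illustrative):

```python
from collections import deque

# One queue per class; higher weight = more packets served per round.
queues = {
    "gold":   deque(["g1", "g2", "g3", "g4"]),
    "silver": deque(["s1", "s2"]),
    "bronze": deque(["b1", "b2"]),
}
weights = {"gold": 3, "silver": 2, "bronze": 1}

served = []
while any(queues.values()):
    for cls, q in queues.items():
        for _ in range(weights[cls]):   # up to `weight` packets per round
            if q:
                served.append(q.popleft())

print(served)   # ['g1', 'g2', 'g3', 's1', 's2', 'b1', 'g4', 'b2']
```

Note how the low-weight class still gets served every round, avoiding the starvation that strict priority queuing can cause.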
QoS IMPROVEMENT
2- TRAFFIC SHAPING
Control over the amount and rate of data sent by a source into the network: modify traffic at the entrance points of the network, modify traffic in the routers, and enforce policies on "flows".
Two techniques for reshaping traffic are:
a) Leaky bucket
b) Token bucket
QoS IMPROVEMENT
Leaky Bucket
Across a single link, packets are only allowed through at a constant rate. Packets may be generated in a bursty manner, but after they pass through the leaky bucket they enter the network evenly spaced. If all inputs enforce a leaky bucket, it is easy to reason about the total resource demand on the rest of the system.
[Figure: leaky-bucket analogy — bursty packets from the input leave the bucket as a smooth, constant-rate output]
QoS IMPROVEMENT
A leaky bucket algorithm shapes bursty traffic into fixed-rate traffic by averaging the data rate. The leaky bucket is a "traffic shaper": it changes the characteristics of a packet stream. Traffic shaping makes the network more manageable and predictable. Usually the network tells the leaky bucket, when a connection is established, the rate at which it may send packets.
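A minimal leaky-bucket sketch (Python; the arrival pattern, drain rate and bucket capacity are illustrative):

```python
from collections import deque

def leaky_bucket(arrivals, rate, capacity):
    """Drain at most `rate` packets per tick, however bursty the arrivals."""
    bucket, out, dropped = deque(), [], 0
    for burst in arrivals:                  # packets arriving in each tick
        for pkt in range(burst):
            if len(bucket) < capacity:
                bucket.append(pkt)
            else:
                dropped += 1                # bucket overflow: packet lost
        sent = min(rate, len(bucket))       # constant drain rate
        for _ in range(sent):
            bucket.popleft()
        out.append(sent)
    return out, dropped

out, dropped = leaky_bucket(arrivals=[5, 0, 0, 4, 0], rate=2, capacity=4)
print(out, dropped)   # [2, 2, 0, 2, 2] 1 — a bursty input becomes a smooth output
```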
QoS IMPROVEMENT
Token Bucket
The leaky bucket does not allow bursty transmissions. In some cases, we may want to allow short bursts of packets to enter the network without smoothing them out. For this purpose we use a token bucket, a modified leaky bucket.
The bucket holds logical tokens instead of packets. Tokens are generated and placed into the token bucket at a constant rate. When a packet arrives at the token bucket, it is transmitted if a token is available; otherwise it is buffered until a token becomes available.
The token bucket holds a fixed number of tokens, so when it is full, subsequently generated tokens are discarded. We can still reason about the total possible demand.
[Figure: token bucket — a token generator adds one token every T seconds; packets from the input consume tokens to be transmitted]
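The token-bucket behaviour described above can be sketched in a few lines (Python; the token rate, bucket capacity and arrival pattern are illustrative); note how tokens saved during idle ticks let a short burst pass unsmoothed:

```python
def token_bucket(arrivals, token_rate, capacity):
    """One token admits one packet; tokens accrue up to `capacity`."""
    tokens, backlog, out = 0, 0, []
    for burst in arrivals:
        tokens = min(capacity, tokens + token_rate)  # overflow tokens discarded
        backlog += burst
        sent = min(backlog, tokens)     # a burst passes if tokens were saved up
        tokens -= sent
        backlog -= sent                 # unsent packets wait for future tokens
        out.append(sent)
    return out

print(token_bucket(arrivals=[0, 0, 5, 0, 2], token_rate=1, capacity=3))
# [0, 0, 3, 1, 1] — two idle ticks bank 3 tokens, so a burst of 3 goes out at once
```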
QoS IMPROVEMENT
3- ADMISSION CONTROL
Applied by a router or switch: the device accepts or rejects a connection request based on predefined parameters (flow specifications). For example, the device first checks its buffers, link bandwidth, CPU speed and previous commitments to other flows, and then decides whether to accept or reject the connection. Requests concerning priority/urgency are also checked and considered.
QoS IMPROVEMENT
4- RESOURCE RESERVATION
A data flow needs resources, e.g. buffers, CPU time, and protocols to run the appropriate applications. The proper allocation of these resources is called resource reservation.
QoS IN FRAME RELAY
Four attributes are related to QoS in Frame Relay:
1- ACCESS RATE
2- COMMITTED BURST SIZE
3- CIR
4- EXCESS BURST SIZE
ACCESS RATE: measured in bits per second; it depends on the capacity of the user's channel connected to the network, e.g. T1, E1.
COMMITTED BURST SIZE (Bc): the maximum number of bits in a predefined period of time that the network commits to carry without discarding any frame. If a Bc of 4 Mb is committed over a period of 4 seconds, the user can send 4 Mb of data within the 4-second interval with guaranteed delivery; note that the data rate during the interval can vary.
QoS IN FRAME RELAY Continued…
COMMITTED INFORMATION RATE (CIR): the average committed bit rate, CIR = Bc / T. If the user continuously sends at this rate, the network is committed to delivering the frames without any loss. The actual rate can be higher or lower than the CIR at different times; if the average sending bit rate is equal to or less than the CIR, no frame will be discarded.
EXCESS BURST SIZE (Be): the maximum number of bits in excess of Bc that the user can send during the predefined period of time. The network is committed to transferring this flow only if there is no CONGESTION; thus this rate defines a conditional commitment.
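A quick check of the CIR relation using the lecture's Bc example (Bc = 4 Mb over T = 4 s):

```python
# CIR = Bc / T, with the committed burst size from the example above.
bc_bits = 4_000_000    # committed burst size Bc, in bits
t_seconds = 4          # measurement interval T
cir_bps = bc_bits / t_seconds

print(cir_bps)         # 1000000.0 -> a CIR of 1 Mbps
```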
QoS IN FRAME RELAY Continued…
QoS IN FRAME RELAY Continued…
USER DATA TREATED BY FRAME RELAY
Traffic Optimisation In Cellular Networks
Some Cellular Bands

Standard            Access           Spectrum (MHz)              Channel Spacing   Peak Power (W)
AMPS                FDD              825-845 t / 870-890 r       30 kHz            3
GSM                 FDMA/TDMA        890-915 t / 935-960 r       200 kHz           0.8, 2, 5, 8
EGSM                FDMA/TDMA/FDD    880-915 t / 925-960 r       200 kHz           0.8, 2, 5, 8
DAMPS (IS-136)      FDMA/TDMA/FDD    824-849 t / 869-894 r       30 kHz            0.8, 1, 2, 3
cdmaOne/CDMA2000    CDMA             824-849 t / 869-894 r       1.25 MHz          0.125, 0.2, 0.5, 2
WCDMA               CDMA             1920-1980 t / 2110-2170 r   5 MHz             0.125, 0.25, 0.5, 2

(t = transmit/uplink, r = receive/downlink)
FDMA
The frequency spectrum is divided into channels and shared.
Each channel is used by a single user.
Least spectrally efficient.
[Figure: frequency vs. time, channels separated in frequency]
TDMA
Channels occupy cyclically repeating time intervals or time slots.
DAMPS is 6 times more spectrally efficient than FDMA, and GSM is 8 times more so.
[Figure: frequency vs. time, channels separated in time slots]
CDMA
Each channel is assigned a unique code and occupies the same frequency and time as other users.
Most prone to interference; maximum spectral efficiency.
[Figure: frequency vs. time, same frequency, same time, different code]
Dynamics of wireless communications: 1G, 2G and 3G technologies
Access has evolved from FDMA in 1G to FDMA/TDMA in 2G. For 3G, CDMA is the buzzword.
Data speeds can be as high as 2 Mb/s (stationary MS) in 3G systems.
Evolution Paths To 3G
IS-41 Core Network
GSM Core Network
2G, 2.5G, 2.75G, 3G
Techniques for traffic optimisation: Frequency Reuse
7-cell (cluster) reuse for voice channels
21-cell (3 clusters) reuse for control channels
Base stations in adjacent cells are assigned completely different channels.
Frequency reuse is done to increase capacity.
Co-channel and adjacent-channel interference are the unwanted consequences of excessive reuse.
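The capacity gain from reuse can be sketched numerically. The channel and cell counts below are illustrative assumptions, not figures from the slides:

```python
# Sketch: how cluster size trades capacity for interference.
# With S total duplex channels reused in clusters of N cells,
# each cell gets k = S // N channels, and a system of M cells
# can carry C = M * k simultaneous calls.

def channels_per_cell(total_channels: int, cluster_size: int) -> int:
    return total_channels // cluster_size

def system_capacity(num_cells: int, total_channels: int,
                    cluster_size: int) -> int:
    return num_cells * channels_per_cell(total_channels, cluster_size)

# Assumed example: 490 channels, 7-cell reuse, a 100-cell system
print(channels_per_cell(490, 7))      # 70 channels per cell
print(system_capacity(100, 490, 7))   # 7000 simultaneous calls
```

A smaller cluster size N yields more channels per cell and hence more capacity, at the cost of more co-channel interference, exactly the trade-off described below.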
Types of Interference_Co-channel Interference
Interference between signals from co-channel cells is called co-channel interference.
Unlike thermal noise, which can be overcome by increasing the signal-to-noise ratio (SNR), co-channel interference cannot be combated by simply increasing the carrier power of a transmitter. This is because an increase in carrier transmit power increases the interference to neighbouring co-channel cells.
Co-channel interference:
Control channel interference: the reuse distance is 21 cells, hence less interference.
Voice channel interference: more adjacent-channel interference.
To reduce co-channel interference, co-channel cells must be physically separated by a minimum distance so that propagation provides sufficient isolation.
Types of Interference_Co-channel Interference
Interference is reduced by improved isolation of RF energy from the co-channel cell.
The parameter Q, called the co-channel reuse ratio, is related to the cluster size.
Types of Interference_Co-channel Interference
To limit interference between co-channel cells, they must have a minimum separation:
Q = D/R = (3N)^(1/2)
where D is the distance between co-channel cell centres, R is the cell radius, and N is the cluster size. D/R is the co-channel reuse factor; frequency planning is necessary.
More channels per cell means more system capacity but more co-channel interference.
Fewer channels per cell means less system capacity but less co-channel interference.
A small value of Q provides larger capacity since the cluster size N is small, whereas a large value of Q improves transmission quality due to a smaller level of co-channel interference.
A trade-off must be made between these two objectives in actual cellular design.
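The reuse ratio Q = D/R = (3N)^(1/2) can be tabulated for common cluster sizes; a minimal sketch:

```python
import math

# Co-channel reuse ratio for hexagonal cell geometry: Q = D/R = sqrt(3N)
def reuse_ratio(cluster_size: int) -> float:
    return math.sqrt(3 * cluster_size)

for n in (3, 4, 7, 12):
    print(n, round(reuse_ratio(n), 2))
# N = 7 gives Q ≈ 4.58: co-channel cells sit about 4.6 cell radii apart
```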
Types of Interference_Adjacent Channel Interference
Interference from signals adjacent to the desired signal.
Caused by imperfections in receiver filter design that allow nearby frequencies to leak into the passband.
Can be serious if an adjacent-channel user transmits very close to an MS trying to receive a base station signal (the near-far effect).
The near-far effect can also be caused by a nearby transmitter not necessarily belonging to the cellular system.
Techniques for traffic optimisation_Sectoring
A technique whereby the cell radius is kept the same, but the cell is divided into smaller directional sectors.
Traffic-carrying capacity is increased by bringing in more channels.
Co-channel interference is reduced.
The D/R ratio (co-channel reuse factor) is decreased.
SIR increases.
Techniques for traffic optimisation_Sectoring
Sectorisation usually uses a 120° or 60° transmission pattern, with down-tilting of the sector antennas.
[Figure: depiction of how interference is reduced by sectoring, with 120° and 60° sector patterns]
Techniques for traffic optimisation_Sectoring
• Subdivide cells into sectors, usually 3 (each sector is 120°) or 6 (each sector is 60°)
• Less Tx power, as a smaller area is covered
• Each sector is served by a directional antenna and different frequencies
• Directional antennas reduce co-channel interference, allowing smaller clusters and higher capacity
[Figure: cells divided into 3 sectors (1-3) and 6 sectors (1-6)]
Techniques for traffic optimisation_Cell splitting
Congested cells are divided into smaller cells (microcells).
Each smaller cell becomes an independent cell with its own base station.
Capacity is increased through greater channel reuse; transmit power is reduced to avoid interference.
[Figure: cell splitting of cell 4 while preserving the frequency reuse plan]
Traffic carrying capability
Radio spectrum is limited, yet a large number of users can be accommodated within it.
GSM-900 has 125 and GSM-1800 has 375 physical channels (carriers).
CDMA IS-95 has 10 channels: a 12.5 MHz band with channels of 1.25 MHz bandwidth each.
Trunking, i.e. use of a channel pool as needed, accounts for efficient spectrum utilisation.
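The channel counts quoted above follow directly from band width divided by channel spacing. The band widths assumed below (25 MHz for GSM-900, 75 MHz for GSM-1800) come from the spectrum table earlier and common allocations, so treat this as a sketch:

```python
# Sketch: deriving the carrier counts quoted above from band width / spacing.
def carriers(band_khz: float, spacing_khz: float) -> int:
    """Number of carriers that fit in a band of the given width."""
    return int(band_khz // spacing_khz)

print(carriers(25_000, 200))     # GSM-900:  25 MHz / 200 kHz  = 125 carriers
print(carriers(75_000, 200))     # GSM-1800: 75 MHz / 200 kHz  = 375 carriers
print(carriers(12_500, 1_250))   # IS-95:    12.5 MHz / 1.25 MHz = 10 channels
```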
Traffic carrying capability in GSM
TDMA allows one carrier to transmit data for multiple users.
Each carrier has a multiframe of 120 ms; a multiframe contains 26 frames (4.615 ms each).
Frame 13 is used for signalling (SACCH), carrying information about signal strength in neighbouring cells. Frame 26 is unused. The 24 remaining frames are used for voice and data.
A frame consists of 8 bursts (0.577 ms each).
Conversation or data is carried in bursts.
Multiframe: 120 ms (frames 1, 2, 3, 4 … 26)
Frame: 4.615 ms (bursts 1, 2 … 8)
Burst: 0.577 ms
Normal burst format (bit periods):
Tail 3 | Data 57 | Flag 1 | Training 26 | Flag 1 | Data 57 | Tail 3 | Guard 8.25
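The GSM timing figures above are mutually consistent, which a few lines of arithmetic confirm:

```python
# Consistency check of the GSM frame-structure numbers on this slide.
frames_per_multiframe = 26
frame_ms = 4.615
bursts_per_frame = 8
burst_ms = frame_ms / bursts_per_frame   # one time slot

# Normal burst: 3 tail + 57 data + 1 flag + 26 training + 1 flag
# + 57 data + 3 tail + 8.25 guard = 156.25 bit periods
burst_bits = 3 + 57 + 1 + 26 + 1 + 57 + 3 + 8.25

print(round(frames_per_multiframe * frame_ms, 1))  # ≈ 120.0 ms per multiframe
print(round(burst_ms, 3))                          # ≈ 0.577 ms per burst
print(burst_bits)                                  # 156.25 bit periods
```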
Traffic carrying capability in GSM
GSM uses FDMA and TDMA to offer greater compression than DAMPS. Up to 8 simultaneous conversations may be carried per carrier.
CDMA is the most spectrally efficient technology. IS-95 CDMA uses a single channel of 1.25 MHz to carry the entire traffic load for one or more base stations.
The same channel may be used in adjacent cells, and in split and sectorised cells, to increase traffic-handling capacity.
Soft handoff is employed whenever neighbouring cells use the same frequency as the reference cell.
Traffic carrying capability in CDMA
CDMA is not bandwidth limited, but interference limited.
Increasing the number of simultaneous conversations within a cell increases interference and decreases channel throughput.
At the busy hour, QoS is at its minimum, whereas at non-busy hours there is enhanced service quality.
CDMA is the most prone to interference, but there are ways to counter interference problems, such as precise power control on the control and voice channels.
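Why interference, not bandwidth, caps CDMA capacity can be illustrated with the standard textbook first-order estimate N ≈ (W/R) / (Eb/I0). This formula and the example values are not from the slides; they are a hedged sketch using typical IS-95-like numbers:

```python
# Hedged sketch: first-order CDMA capacity estimate N ≈ (W/R) / (Eb/I0).
# Each extra user adds interference, so capacity is set by the tolerable
# per-user Eb/I0, not by bandwidth alone (assumed illustrative numbers).

def cdma_users(chip_rate_hz: float, bit_rate_hz: float,
               eb_i0_linear: float) -> int:
    """Rough number of simultaneous users per CDMA carrier."""
    processing_gain = chip_rate_hz / bit_rate_hz  # W/R
    return int(processing_gain / eb_i0_linear)

# IS-95-like values: 1.2288 Mcps chip rate, 9.6 kbps voice,
# required Eb/I0 of about 6 dB (~4x linear)
print(cdma_users(1_228_800, 9_600, 10 ** (6 / 10)))  # ≈ 32 users
```

Lowering the required Eb/I0 (e.g. through tighter power control, as the slide notes) directly raises the user count.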
247
CDMA (CDMA2000 1x) as WLL
Most widely deployed WLL solution in the world
High spectral efficiency to handle wireline-like traffic
Data capability inherent in the system (up to 144 kbps)
Backward and forward compatibility
Available in 450, 800 and 1900 MHz bands
248
CDMA Channel (or CDMA Carrier or CDMA Frequency)
A duplex channel made of two 1.25 MHz-wide bands of electromagnetic spectrum: one for Base Station to Mobile Station communication (called the FORWARD LINK or the DOWNLINK) and another for Mobile Station to Base Station communication (called the REVERSE LINK or the UPLINK).
In 800 MHz Cellular these two simplex 1.25 MHz bands are 45 MHz apart; in the 1900 MHz band they are 80 MHz apart.
CDMA Forward Channel: 1.25 MHz forward link
CDMA Reverse Channel: 1.25 MHz reverse link
Duplex separation: 45 or 80 MHz
[Figure: a CDMA channel, 1.25 MHz reverse channel and 1.25 MHz forward channel]
249
CDMA 2000
250
CDMA 2000 Platforms
CDMA2000-1x(1xRTT)
CDMA2000-1xEV-DO
CDMA2000-1xEV-DV
CDMA2000-3X(3xRTT)
251
CDMA 2000 1x (1x RTT)
252
CDMA 2000 1xEV-DO
253
CDMA 2000 1xEV-DO
254
CDMA 2000 1xEV-DV
255
CDMA 2000 3xRTT
256
CDMA2000 Radio Configurations
257
Rate Sets
A Rate Set is a set of Traffic Channel frame formats. A Rate Set may carry voice, user data, or signalling.
Two Rate Sets are defined for use in cdmaOne systems; all services provided over the air interface must conform to one of them:
Rate Set 1 supports a maximum of 8550 bps, with an additional 1050 bps of overhead for a total maximum rate of 9600 bps.
Rate Set 2 supports a maximum of 13,300 bps, with additional overhead bringing the total transmission rate to 14,400 bps maximum.
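The Rate Set arithmetic above can be checked directly. The slide gives only Rate Set 2's totals, so its overhead value below (1,100 bps) is inferred from 14,400 − 13,300, not quoted from the source:

```python
# Check of the cdmaOne Rate Set arithmetic: payload + overhead = total rate.
rate_set_1 = {"payload_bps": 8_550, "overhead_bps": 1_050}
rate_set_2 = {"payload_bps": 13_300, "overhead_bps": 1_100}  # overhead inferred

for name, rs in (("Rate Set 1", rate_set_1), ("Rate Set 2", rate_set_2)):
    print(name, rs["payload_bps"] + rs["overhead_bps"], "bps")
# Rate Set 1 -> 9600 bps, Rate Set 2 -> 14400 bps
```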
258
Radio Configurations- Forward Link
Orthogonal Transmit Diversity splits the transmitted symbols into two streams, each stream being transmitted on a separate antenna.
259
Radio Configurations- Forward Link
260
Radio Configurations- Reverse Link
261
Spreading Rate (SR1) & Spreading Rate (SR3)
262
Spreading Rate (SR1) And Spreading Rate (SR3)
263
Spreading Rate (SR1) also called 1x
264
Spreading Rate (SR3), also called 3x or MC (Multi-Carrier)