7/31/2019 QoSonHSPAReasearch
QoS Load Differentiation Application in a UTRAN Live Network

Beatriz Garriga, Francisco Dominguez, Clara Serrano, Santiago Tenorio, Elia Asensio (1)
(1) Access NW Group Competence Centre, Radio and Tx Engineering Spain, Vodafone Technology
Madrid, Spain
Abstract— Quality of Service (QoS) Load Differentiation mechanisms allow a network to allocate resources to users in situations of capacity constraint depending on their priority. The need to apply QoS mechanisms is driven by the increased demand for finite resources, and in the case of cellular networks this has been exacerbated largely by HSPA data traffic growth. This paper describes the results of a QoS investigation on a real live UTRAN network with a significant number of real users. The results presented show improvements in terms of throughput and delay. The trial led to the identification and confirmation of an optimum proportion of highest-class users in a network under various traffic conditions, initially via an HSPA traffic model and then via real traffic profiles in the live network. This exercise has led to the identification and proposal of two new algorithms, both outlined in this paper: Iub congestion control is improved to prioritize users per Node B (inter-cell), and a new scheduler algorithm is proposed that takes into account the user/service delay depending on priority.

Keywords— QoS; HSPA; ARP; THP; TC; RRP; SPI; Throughput; Delay; Traffic Model
I. BACKGROUND
The combination of 3G data growth and technology evolution offering high peak rates has brought about the need for mechanisms that guarantee efficient resource handling whilst managing the resultant increasing data traffic.

Where network capacity is limited, QoS (Quality of Service) mechanisms permit the differentiation of users and their subsequent resource allocation such that all customers receive a minimum performance in terms of throughput and delay according to their QoS profile (subscription).

Traffic profiles across the internet are largely heterogeneous: with respect to volumes of data per customer there is a very uneven balance between light/normal users (web browsing, e-mail, picture upload and download, iTunes) and heavy users (video streaming and peer-to-peer). Left unchecked, this typical situation could result in disproportionate usage of resources by a small number of customers to the detriment of the remainder; a mechanism to prevent heavy users cannibalising available resources is therefore required to ensure all customers' needs are met.
The study and results presented in this document focus on two types of HSPA traffic, Interactive and Background; and whilst QoS is an end-to-end concept, only QoS mechanisms in the UTRAN and their application in a real network are studied.
II. QOS PARAMETERS
The QoS concept and architecture for UMTS networks is specified in [1]. QoS priorities to segment classes of customer are provisioned per user and APN in the HLR. The key QoS parameters used are summarized in Table 1.

Traffic Class         Conversational   Streaming   Interactive   Background
Transfer delay (ms)   100-max          280-max     -             -
GBR (bps)             specified        specified   -             -

Table 1: Key QoS parameters per Traffic Class (extract)
The numbers representing RRP are such that lower numbers correspond to users with higher priority. The range of RRP within UTRAN is restricted to 1-15 in order to correspond with and facilitate mapping with the QoS parameters as specified by 3GPP (see Table 1). Grouping of traffic classes over a range of ARP and THP values is permitted.

Best-effort services, i.e. the Interactive and Background classes, were the primary focus of the studies in this paper.
III. QOS ALGORITHMS
The introduction of HSPA in UMTS, where significantly higher peak rates and lower latency are possible, has brought about the situation where increasingly the bulk of the data traffic within UMTS systems is delivered via HSPA.
A. Radio Scheduler and Flow Control

1) HSDPA
One of the main characteristics of HSDPA is the usage of a downlink channel shared amongst several users. Access to this shared channel is controlled by the HSDPA scheduler, located in the Node B; based on QoS demands, radio channel quality and data requests, the scheduler selects the UE and assigns transmission resources accordingly.

There are various schedulers, e.g. maximum SIR, round-robin, proportional fair and other variants which take into account additional factors such as delay constraints. All of them aim to perform a trade-off between maximum cell throughput, application quality and user fairness.

Within all schedulers, queuing of data packets is performed, and 3GPP [3] defines a Scheduling Priority Indication (SPI) to allocate priorities within the scheduler. Within each SPI allocation, a weight factor provides relative priority between the different users, i.e. the services in use. These packets are classified prior to onward transmission via what is called a Weighted Queuing Factor. The SPI values are obtained in a similar way to the RRP parameters given above in Section II.
For example, the most used scheduler mechanism today is the proportional fair, which is defined by the following formula:

M(k) = R(k) / λ(k)     (1)

where M(k) is the priority of user k, R(k) is the instantaneous rate, and λ(k) is the average data rate for user k.
With QoS differentiation, a weighting factor is applied for every user depending on its priority:

M(k) = SPI_Weight(SPI(k)) · R(k) / λ(k)     (2)

In this way, it is possible to utilise the weighting factors and thus manage the throughput allocated to every user.
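As an illustrative sketch of equation (2) (not any vendor's implementation; the user names, rates and SPI weights below are hypothetical), a weighted proportional-fair selection can be written in a few lines of Python:

```python
def schedule(users, spi_weight):
    """Pick the user with the highest weighted proportional-fair metric.

    users: dict user_id -> (inst_rate, avg_rate, spi)
    spi_weight: dict spi -> weight
    Metric per equation (2): M(k) = SPI_Weight(SPI(k)) * R(k) / lambda(k).
    """
    def metric(uid):
        inst_rate, avg_rate, spi = users[uid]
        return spi_weight[spi] * inst_rate / avg_rate
    return max(users, key=metric)

# Two users with identical radio conditions: the higher SPI weight wins.
users = {
    "gold":   (2.0e6, 1.0e6, 10),   # (instantaneous rate, average rate, SPI)
    "bronze": (2.0e6, 1.0e6, 1),
}
weights = {10: 4.0, 1: 1.0}
print(schedule(users, weights))   # gold
```

Note that a low-priority user with a much better channel can still be scheduled, preserving the multi-user diversity gain of proportional fair.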
Figure 1. Flow control algorithm in HSDPA (Capacity Request from the SRNC to the Node B; Capacity Allocation in response).
HSDPA was defined by 3GPP such that the scheduler is located in the Node B; therefore any flow control mechanism over the Iub interface between the RNC and the Node B requires the RNC to signal the Node B in order to ascertain the capacity status/constraints of that interface.

The RNC sends the Capacity Request message to the Node B, and the Node B then allocates the capacity per user via the Capacity Allocation response message; see [4]. Figure 1 illustrates how the algorithm performs.
As the radio scheduler in the Node B allocates more traffic to the users with more weight, this translates into the Iub flow control giving more data to transmit over the radio interface to the highest-priority users.
2) HSUPA

Like HSDPA in the downlink, HSUPA (or Enhanced Dedicated Channel, E-DCH) was defined by 3GPP to provide high bit rates in the uplink in UMTS.
The HSUPA scheduler, also located in the Node B, decides when and at which bit rate every UE is permitted to transmit to the Node B in that cell. Each received MAC-e PDU (Protocol Data Unit) is placed in a frame protocol data frame and sent to the SRNC (Serving Radio Network Controller); in some cases several PDUs are bundled into the same data frame. For each data frame, the Node B inserts the following information in the corresponding frame header:

- A reference time, which gives an indication of when the frame was sent.

- A sequence number, which indicates where this frame sits in relation to other data frames.
At the reception of the data frames the SRNC can do the following:

- Using the reference time, the SRNC can compare the relative reception time with the relative transmission time (the reference time included in the data frame). With that information the SRNC can detect whether there is a delay build-up in the transmission path. A delay build-up is an indication that frames are being queued due to overload in the transport network.

- Using the sequence number, the SRNC can detect a frame loss. A frame loss is an indication that packets have been lost in the transport network due to overload.
Figure 2 shows the procedure used by the SRNC to signal that a transport network congestion situation on Iub/Iur has been detected. This is described in detail in [4].
Figure 2. Flow control algorithm in HSUPA (TNL Congestion Indication from the SRNC to the Node B).
B. Traffic separation at transport layer

The Iub interface was initially defined to use ATM as the transport mechanism, and IP was introduced later. In case of (transport) network congestion, lost or delayed packets will result. Where no differentiation is applied, a congested network will impact all PS traffic; consequently both ATM and IP were defined to include some traffic management handling.
1) ATM case

Where ATM transport is employed, traffic handling is performed either via the ATM protocol itself or via a related protocol, the ATM Adaptation Layer 2 (AAL2), to separate the traffic.

If AAL2 is used in this manner, prioritisation and weighting are permitted and can be used in conjunction with the scheduler to permit flow control. In this way, it is possible to have consistent behaviour in all parts of the network.

If ATM is used in this manner, various (operator-configurable) classes of service are available: CBR, rtVBR, nrtVBR, UBR or UBR+.
2) IP case

Where IP transport is employed, two possibilities for traffic management exist: differentiation at IP Layer 3, or at Layer 2.

In the case of traffic management at Layer 2, three priority bits were defined by IEEE (802.1p) for traffic differentiation in VLANs. In this case, every RLC/MAC packet of UMTS is marked at the Ethernet layer by the RNC/Node B to be prioritised in the transport network. In the live network, the transport equipment managing Ethernet will prioritise the flows according to the IEEE standard and the current state of the art.
Priority Level   Traffic Type
0 (lowest)       Best Effort
1                Background
2                Standard (Spare)
3                Excellent Load
4                Controlled Load (Streaming Multimedia)
5                Video [less than 100 ms latency and jitter]
6                Voice Traffic [less than 10 ms latency and jitter]
7 (highest)      Reserved Traffic [lowest latency and jitter]

Table 3: Priority levels of 802.1p
In the case of traffic management at Layer 3, the IETF standardisation body defines DiffServ (Differentiated Services) to provide quality of service and service differentiation within IP networks.

DiffServ offers hop-by-hop (PHB, Per Hop Behaviour) differentiated treatment of the packets based on the information contained in every packet, thus avoiding per-flow state and signalling at every hop. Similarly to the Layer 2 mechanism outlined above, the mapping between the input parameters and the DiffServ code is operator configurable.
C. Call Admission Control algorithms

When a user is admitted to the network, i.e. granted service, it is important to allocate the user a priority corresponding to the constraints of the service it requires. With respect to PS services, this usually means the RNC reserves the necessary bandwidth for the service requested. If no resources are available, admission is not permitted, i.e. the request may be rejected, unless a pre-emption is triggered, i.e. dropping another user with lower priority.
1) Bandwidth reservation algorithm

Apart from the services with a guaranteed bit rate (GBR), which are already allocated and therefore given high priority, the remaining PS traffic does not need such a guarantee and is described as best-effort.

Even in this case, it is necessary to reserve some amount of bandwidth to ensure that the number of users entering the system does not exceed the resources available. This bandwidth reservation depends upon the priority of the user/application, and therefore a mapping of input parameters (services requested) to a bandwidth reservation is performed.

This bandwidth reservation can be applied in different parts of the 3G network: codes of the OVSF code tree, Iub transport resources, (Node B) power reserved in the cell, and baseband processing, i.e. hardware reservation in the RNC and the Node B.
2) Pre-emption possibilities

When a new call is received, if the required resources are not available then the call will be rejected. However, if the priority of the new user is higher than that of some of the already admitted calls, then a pre-emption algorithm may be employed to ensure the higher-priority user is admitted.

The pre-emption capabilities are based on the ARP parameter as defined in the 3GPP standard (see [2]).
Priority Level              spare (0), highest (1), ..., lowest (14), no priority (15)
Pre-emption Capability      shall not trigger pre-emption / may trigger pre-emption
Pre-emption Vulnerability   not pre-emptable / pre-emptable
Queuing Allowed             queuing not allowed / queuing allowed

Table 4: ARP definition in 3GPP
If the Radio Access Bearer (RAB) has a pre-emption capability of "may trigger pre-emption", then the algorithm may correspondingly be initiated, depending upon operator preference. The system checks for another RAB with lower priority whose Pre-emption Vulnerability is set to "pre-emptable". In the case that there is no lower-priority RAB available for pre-emption, the RAB may instead be put in a queue, i.e. waiting for resources to be released in order to proceed. In this case it is necessary to signal "Queuing Allowed" within the ARP parameter in the RAB description.
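The admission, pre-emption and queuing behaviour described above can be sketched as follows. This is a simplified model under assumed semantics (a single shared capacity pool, ARP priority 1 = highest), not the RNC implementation; all identifiers and figures are invented:

```python
from collections import deque
from dataclasses import dataclass

@dataclass
class Rab:
    rab_id: str
    arp_priority: int        # 1 = highest ... 14 = lowest (3GPP ARP levels)
    bw: float                # bandwidth to reserve
    may_preempt: bool = False
    preemptable: bool = False
    queuing_allowed: bool = False

class AdmissionControl:
    def __init__(self, capacity):
        self.capacity = capacity
        self.admitted = []
        self.queue = deque()

    def free_bw(self):
        return self.capacity - sum(r.bw for r in self.admitted)

    def admit(self, rab):
        if rab.bw <= self.free_bw():
            self.admitted.append(rab)
            return "admitted"
        if rab.may_preempt:
            # Candidate victims: pre-emptable RABs of strictly lower priority
            # (larger ARP number), lowest priority first.
            victims = sorted((r for r in self.admitted
                              if r.preemptable and r.arp_priority > rab.arp_priority),
                             key=lambda r: -r.arp_priority)
            reclaim = []
            for victim in victims:
                reclaim.append(victim)
                if rab.bw <= self.free_bw() + sum(v.bw for v in reclaim):
                    for v in reclaim:          # drop victims only if the new RAB fits
                        self.admitted.remove(v)
                    self.admitted.append(rab)
                    return "admitted_after_preemption"
        if rab.queuing_allowed:
            self.queue.append(rab)             # wait for resources to be released
            return "queued"
        return "rejected"

ac = AdmissionControl(capacity=10.0)
ac.admit(Rab("bronze-1", arp_priority=12, bw=8.0, preemptable=True))
print(ac.admit(Rab("gold-1", arp_priority=2, bw=5.0, may_preempt=True)))
# admitted_after_preemption
```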
IV. LIVE NETWORK RESULTS
In this section the assessment of QoS algorithms in a real network (Vodafone Spain's live 3G network, with more than 6 million 3G terminals) is presented. About one thousand Friendly Users, all real customers of the operator, were sourced around Madrid and Malaga. The trial lasted for 4 weeks, during which QoS Differentiation was alternately applied and not applied; 32% of the Friendly Users downloaded more than 1 GB.

Three kinds of friendly users were defined: Gold users with very high priority, Silver users with normal priority and Bronze users with low priority.
A. Throughput results

Where QoS Differentiation was applied, an improvement occurred in average user throughput for Gold and Silver users (27% and 10% respectively), at the expense of the Bronze class users (-28%); see Figure 3.

These statistics were obtained by averaging the 4 maximum Busy Hours in each week of the trial where the friendly users were located. The performance differentiation refers to the periods where congestion occurred; at all other times, i.e. non-congestion periods, all QoS profiles get all the bandwidth required without limitation.
[Chart: average throughput variation per user class under QoS differentiation: Gold +26.67%, Silver +9.94%, Bronze -27.50%]

Figure 3. Average throughput variation per kind of user.
Extracting the network statistics for one day within the trial period (Figure 4), the throughput distribution of a congested cell was obtained. During the period from 12:00 to 19:00 it can be seen that the bulk of the traffic comes from the Gold users (yellow line), and that during this period the maximum capacity of the Iub is reached: approximately 4.3 Mbps at application layer with 3 E1s for the Gold users (Bronze users got less than 500 kbps, and for Silver users it varied between 500 kbps and 1.5 Mbps).
[Chart: per-class HSDPA throughput (kbps) from 05:00 to 19:00, plotted against % HSDPA codes used and % Iub utilisation; counters: VS.HSDPA.MeanGoldenBeChThroughput, VS.HSDPA.MeanSilverBeChThroughput, VS.HSDPA.MeanCopperBeChThroughput]

Figure 4. Throughput per type of user during one day.
B. Delay results

QoS Load Differentiation provides high-priority users with a better relative performance also in terms of delay, and not just in throughput: an important improvement for delay-sensitive interactive applications such as HTTP, mail, gaming, etc.
[Chart: time for a web page to download (seconds) per user class, with and without load; values shown: approximately 2.4 s without load, and 3.4 s, 5.7 s and 9.3 s under load]

Figure 5. Impact of QoS activation on web browsing (seconds to download).
The exercise verified a smart management of the queues in all the congested interfaces, such that packets from high-priority users are placed in the higher-priority queue, resulting in better responsiveness of the system, and correspondingly for Silver and Bronze users.

In Figure 5, the time to download a typical web page (approx. 221 KB) during a period of congestion can be seen to be, for Gold users, similar to the same exercise when the network was not in a congested state. For Silver and in particular for Bronze users the congestion is more apparent.

Measurements were taken on the most loaded Node B in the trial scenario during instants of full Iub occupancy, with a constant web page size of 221 KB.
V. ANALYSIS OF OPTIMUM PROPORTION OF HIGH CLASS USERS IN THE NETWORK

If all users in a cell are defined with the highest priority then no user differentiation is possible. Therefore, the proportions of users allocated to the Gold, Silver and Bronze categories respectively need to be studied in order to obtain the optimum proportion of high-class users in the network, over a range of traffic/load conditions. To do this, a model of the HSPA traffic is constructed; see below.
A. HSPA Traffic Model

Taking into account the self-similarity of HSPA traffic, due to the high variability and burstiness of multiservice traffic, the chosen model for HSPA traffic is M/Pareto [5], [6].

The M/Pareto model is basically a Poisson process of rate λ of Pareto-distributed overlapping bursts [7]. Each burst, from the time of its arrival, continues for a random period.
During that random period, the data rate of the burst is constant. Letting r be that rate, the burst length has a Pareto distribution. The probability that a Pareto-distributed random variable X exceeds a threshold x is

Pr{X > x} = (δ/x)^γ  if x ≥ δ;  1 otherwise     (3)

where X is the burst duration, 1 < γ < 2 is the shape parameter and δ > 0 is the location parameter (the minimum x value, i.e. the minimum burst duration). The mean of X is

E[X] = γδ/(γ-1),     Var[X] = ∞     (4)

i.e. its variance is infinite. Thus, the mean number of bytes within one burst, BS, is

BS = r·γδ/(γ-1)     (5)
The mean amount of traffic arriving within an interval of length t, A(t), in the M/Pareto traffic model is

E[A(t)] = λ·t·r·γδ/(γ-1)     (6)

The variance of the traffic arriving in an interval of length t, Var[A(t)], has a closed-form piecewise expression (7), with one branch for t ≤ δ and another for t > δ, in terms of λ, r, γ and δ; the full expression is given in [7].
With the M/Pareto mean E[A(t)] and variance Var[A(t)], HSPA traffic can be modelled and contention studied.
B. Contention in the HSPA network

As described previously, M/Pareto depends on four parameters:

- γ, δ: characterize the period that represents the length of the burst (Pareto distribution).
- λ: characterizes the burst arrival process (Poisson distribution).
- r: mean data rate during a burst.
To characterize these four parameters for HSDPA traffic, an Application Data Traffic tool was used across the Iu interface within the Vodafone Spain network during June 2008, for HSPA connections only.

First, the distribution of the burst sizes typically observed on the HSPA network was collected. Using this data, we assign the shape (γ) and location (δ) parameters in order to obtain the minimum burst duration and mean burst duration from the data traffic collected (with an estimated mean data rate r of 600 kbps). Pareto-distributed bursts are generated using the formula below:

X = δ / U^(1/γ)     (8)

where U is a random variable drawn from the uniform distribution on the unit interval (0,1).
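Equation (8) is straightforward to exercise. The sketch below uses illustrative parameter values (γ = 1.5, δ = 0.1 s), not the values fitted from the trial data:

```python
import random

def pareto_burst(gamma, delta, u=None):
    """Inverse-CDF sampling of a Pareto burst duration, eq. (8): X = delta / U**(1/gamma)."""
    if u is None:
        u = 1.0 - random.random()    # uniform on (0, 1], avoids U == 0
    return delta / u ** (1.0 / gamma)

# Deterministic check: with U = 0.25 and gamma = 2, X = delta / sqrt(0.25) = 2*delta.
print(pareto_burst(gamma=2.0, delta=1.0, u=0.25))   # 2.0

# Every sample is at least delta (the minimum burst duration); for 1 < gamma < 2
# the empirical mean approaches gamma*delta/(gamma-1) only slowly, since the
# distribution is heavy-tailed with infinite variance (eq. 4).
samples = [pareto_burst(1.5, 0.1) for _ in range(10_000)]
assert min(samples) >= 0.1
```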
Figure 6 shows the real HSPA network burst size distribution and the Pareto burst size distribution characterized with the previous parameters. Both distributions are similar, such that the burst lengths of the real HSPA network align with the Pareto distribution.
[Chart: burst size distribution over 1 KB / 10 KB / 100 KB / 1 MB and more; Pareto distribution: 72%, 21%, 7%, 0.05%; real NW distribution: 75%, 13%, 7%, 4%]

Figure 6. Burst size distribution in the HSPA network.
λ controls the regularity with which new sessions commence (the burst arrivals per second), i.e. the mean cell rate. As λ increases, the traffic in the cell will also increase (see Figure 7). Increasing λ may be considered to represent either
an increase in the level of activity of individual sources, or an increase in the number of sources contributing to a stream [8].
λ (bursts/s):            100   500   1000   1500   2000   2500
Mean cell rate (GB/h):   0.3   1.3   2.7    4.0    5.4    6.7

Figure 7. Mean Cell Rate versus Burst Arrival Rate λ (bursts/s).
Having obtained the M/Pareto parameters, the time that users spend in contention can be studied. Taking into account that M/Pareto traffic is described by a cumulative distribution function (CDF) F:

- the probability of no contention, pNC (the probability of having no traffic, or only one user transmitting with data rate r), is

pNC = F(A(t) ≤ 1·r; γ, δ, λ)     (9)

- the probability of contention, pc(nc) (the probability of having nc users transmitting at the same time, each with data rate r), is

pc(nc) = F(A(t) ≤ nc·r; γ, δ, λ) - F(A(t) ≤ (nc-1)·r; γ, δ, λ),  for nc ≥ 2     (10)
Figure 8. Percentage of time of users in contention in the HSPA network: a) M/Pareto model, b) real network.
Figure 8 shows the probability of having nc users in contention, pc(nc), for a given traffic level. Figure 8 a) shows the contention in the M/Pareto traffic model and Figure 8 b) that in the real HSPA traffic collected. The contention in both graphs is very similar; thus it can be concluded that the HSPA traffic and its contention have been modelled consistently.
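As a cross-check of pc(nc) that does not require the closed-form CDF, the M/Pareto process can also be simulated directly: Poisson burst arrivals of rate λ, each lasting a Pareto(γ, δ) time, with the fraction of time spent at each overlap level measured. A sketch, with illustrative parameter values rather than the fitted network ones:

```python
import random

def contention_probs(lam, gamma, delta, horizon, max_n, seed=0):
    """Estimate pc(n): the fraction of time with exactly n overlapping bursts.

    Bursts arrive as a Poisson process of rate lam (bursts/s); each burst
    lasts a Pareto(gamma, delta) duration. A sweep over start/end events
    accumulates the time spent at each overlap level, capped at max_n.
    """
    rng = random.Random(seed)
    events = []                       # (time, +1 for burst start / -1 for end)
    t = rng.expovariate(lam)
    while t < horizon:
        duration = delta / (1.0 - rng.random()) ** (1.0 / gamma)   # eq. (8)
        events.append((t, +1))
        events.append((t + duration, -1))
        t += rng.expovariate(lam)
    events.sort()
    time_at = [0.0] * (max_n + 1)
    active, prev = 0, 0.0
    for when, change in events:
        clipped = min(when, horizon)
        if clipped > prev:
            time_at[min(active, max_n)] += clipped - prev
            prev = clipped
        active += change
    if prev < horizon:                # trailing idle (or busy) time
        time_at[min(active, max_n)] += horizon - prev
    return [x / horizon for x in time_at]

# With a low arrival rate the channel is almost always uncontended.
probs = contention_probs(lam=0.01, gamma=1.5, delta=0.05, horizon=1000.0, max_n=5)
print(probs[0] > 0.9)   # True
```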
C. Contention between High Class users

The concept of QoS Load Differentiation is based on an intelligent allocation of resources in congested conditions depending on user class. However, the premium for high-class users (called Gold users in this paper) won't fully materialize if their relative proportion is high enough to make Gold-to-Gold contention prevalent; i.e. too many Gold users brings about the situation where they themselves become competitors to each other for resources. Calculating the optimum proportion of Gold, Silver and Bronze users in the HSPA network requires identifying when competition between Gold users occurs, i.e. when being a Gold user is no longer advantageous. For a given traffic level and proportion of Gold users (%Gold), the probability of two or more Gold users in contention is
p(Gold-Gold contention) = Σ_{nc≥2} pc(nc) · p(ng ≥ 2; nc, %Gold)     (11)

where pc(nc) is calculated from the M/Pareto traffic model as seen in the previous section, and the probability of having two or more Gold users is

p(ng ≥ 2; nc, %Gold) = 1 - Σ_{ng=0}^{1} p(ng; nc, %Gold)     (12)

where

p(ng; nc, %Gold) = C(nc, ng) · (%Gold)^ng · (1 - %Gold)^(nc-ng)     (13)

with nc the number of users in contention and ng the number of Gold users.
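Equations (11)-(13) combine into a short calculation. The sketch below uses an invented contention distribution pc(nc) purely for illustration, not values measured in the trial:

```python
from math import comb

def p_gold_gold(pc, gold_share):
    """Probability of two or more Gold users in contention, eqs. (11)-(13).

    pc: dict mapping n_c (>= 2) -> probability of n_c users in contention
    gold_share: proportion of Gold users, as a fraction in (0, 1)
    """
    total = 0.0
    for n_c, p in pc.items():
        # P(n_g >= 2 | n_c): binomial complement of 0 or 1 Gold users, eqs. (12)-(13)
        p_ge2 = 1.0 - sum(comb(n_c, n_g) * gold_share**n_g * (1 - gold_share)**(n_c - n_g)
                          for n_g in (0, 1))
        total += p * p_ge2             # eq. (11)
    return total

pc = {2: 0.20, 3: 0.05}                # illustrative, not measured, values
print(round(p_gold_gold(pc, 0.10), 4))   # 0.0034
```

Consistent with the trend reported below, tripling the Gold share from 10% to 30% raises the Gold-Gold contention probability by roughly an order of magnitude in this toy example.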
Figure 9 shows the probability of Gold-Gold contention for different cell loads (mean cell rate per hour). A number of M/Pareto models were analysed, i.e. varying the cell load by increasing the level of multiplexing (represented by the burst arrival rate λ). In the figure, the blue line is the probability when the proportion of Gold users is 10%, and the red one for a proportion of 30%.
Figure 9. Time of Gold-Gold contention for different cell loads (Mbyte/hour per cell), for 10% and 30% Gold user proportions; the typical working area and the maximum sustainable Gold-Gold contention rate are indicated.
Results show that the maximum Gold population must be below ~10% for this premium service not to deteriorate beyond 25% on a typically loaded network (between 500 kbyte/hour and 4 Gbyte/hour).

If the proportion of Gold users reaches 30%, Gold-to-Gold contention will happen more than 50% of the time on a loaded network, thereby negating the premium service.
VI. NEW ALGORITHMS PROPOSED IN THIS PAPER TO IMPROVE QOS BEHAVIOUR

Iub performance in congestion situations, particularly with reference to high-priority users, can be improved with the introduction of new QoS algorithms. Delay-sensitive applications (e.g. gaming) require fast interaction; user differentiation should provide high-priority users with better performance in terms of delay.
A. Iub congestion prioritization

As resource visibility between the Radio Network Layer (RNL) and the Transport Network Layer (TNL) is not always possible, this leads to difficulties in applying QoS techniques: whilst the scheduler in the Node B is able to allocate different priorities, that same Node B normally manages 3 cells, with each cell using a different radio scheduler instance but all transported over the same shared Iub. Given that the differentiation is done per cell, there will be no fair QoS differentiation between users of different cells.

For example: consider 3 users in a cell, with high, medium and low priorities. In that cell, the scheduler will give more priority to the high-priority user than to the remaining medium- and low-priority users. Under the same radio conditions, the high-priority user will have more throughput than the rest in that cell. But if a new (low-priority) user arrives in another cell of the same Node B, the scheduler will give it the maximum possible bit rate as it is the only user in that cell, which may be more than the high-priority user of the other cell gets, thereby again negating the value of being a high-priority user.
To solve this problem, a new algorithm is proposed which introduces QoS differentiation parameters and algorithms into the Flow Control procedure as a unique entity in the Node B, managing all users regardless of which cell each user is in and the resources available to that cell.
In case of congestion in the Iub, the Capacity Allocation procedure can be used to manage the bit rate of the users. When the congestion situation has been detected, it is possible to apply the same weighting factor as used in the scheduler, in order to give the same QoS differentiation behaviour in case of radio or Iub congestion. In the Capacity Allocation message, every user has a number of credits calculated by the Node B. The credits of each user j, CreditsUser_j, are calculated according to the following formula:
CreditsUser_j = ( SPI_Weight_User_j / Σ_K (SPI_Weight_K · NumberUsers_K) ) · MaxBW     (14)

where SPI_Weight_User_j is the priority weight for user j, the denominator is the total of the weights of all users, and

MaxBW = CurrentUsedBW - MarginDecreasedBW     (15)

where CurrentUsedBW is the Iub occupation level at which the congestion was detected and MarginDecreasedBW is a configurable parameter to reduce the maximum bandwidth used in case of congestion.
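Equations (14) and (15) reduce to a proportional share computation. A sketch with hypothetical weights and bandwidth figures; listing each user individually makes the per-class sum of eq. (14) implicit:

```python
def credits_per_user(spi_weights, current_used_bw, margin_decreased_bw):
    """Iub credits per user under congestion, per equations (14) and (15).

    spi_weights: dict user_id -> SPI weight. Summing over the users listed
    here is equivalent to summing weight * number-of-users per SPI class.
    """
    max_bw = current_used_bw - margin_decreased_bw          # eq. (15)
    total_weight = sum(spi_weights.values())
    return {uid: w / total_weight * max_bw                  # eq. (14)
            for uid, w in spi_weights.items()}

credits = credits_per_user({"gold": 4.0, "silver": 2.0, "bronze": 2.0},
                           current_used_bw=8.0, margin_decreased_bw=0.0)
print(credits["gold"])   # 4.0 (half the total weight -> half of MaxBW)
```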
B. HSDPA scheduler: QoS differentiation based on delay

The algorithm used in current HSDPA networks is the weighted proportional fair. In this algorithm the scheduling priority of a user is calculated with equation (2).

The problem with current schedulers is that they do not take into account the delay of the arriving packets for subsequent prioritisation. Some applications, e.g. online gaming, require fast interaction and very low RTT, whilst other applications, e.g. FTP, do not place such reactive demands upon the system.
Previously proposed algorithms that take the delay into account (for example the one proposed in [9]) are difficult and complex to implement efficiently on current hardware platforms. The proposal here modifies the SPI_Weight to take into account the delay of the packets at the scheduler. It is fully configurable by the operator, but done in a simple way so as to retain control of the scheduler behaviour and remain easy to implement.

So, we define the SPI_Weight as:
SPI_Weight = SPI_Weight0[i]   if packetsDelay < T0[i]
             SPI_Weight1[i]   if T0[i] ≤ packetsDelay < T1[i]
             SPI_Weight2[i]   if T1[i] ≤ packetsDelay
     (16)
The different SPI_Weight values are integers defining the relative priority between users. The Tj[i] are defined in ms, and i takes 16 values, as defined in the standards for the 16 possible SPI values.
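The piecewise definition (16) amounts to a small lookup. The table below is a hypothetical configuration: the SPI indices, thresholds and weights are invented for illustration, not taken from the trial:

```python
# Hypothetical per-SPI configuration: delay thresholds T0/T1 (ms) and the
# three weights used below T0, between T0 and T1, and at or above T1.
SPI_TABLE = {
    5: {"t0": 10.0, "t1": 20.0, "w": (10, 30, 70)},   # e.g. a Gold SPI
    1: {"t0": 15.0, "t1": 30.0, "w": (1, 5, 50)},     # e.g. a Bronze SPI
}

def spi_weight(spi, packet_delay_ms):
    """Delay-dependent SPI weight, per the piecewise definition in eq. (16)."""
    cfg = SPI_TABLE[spi]
    if packet_delay_ms < cfg["t0"]:
        return cfg["w"][0]
    if packet_delay_ms < cfg["t1"]:
        return cfg["w"][1]
    return cfg["w"][2]

print(spi_weight(1, 5.0))    # 1  (fresh packet from a low-priority user)
print(spi_weight(1, 40.0))   # 50 (past T1: boosted so the packet drains quickly)
```

Note that with this configuration a stale Bronze packet can temporarily outrank a fresh Gold one (50 > 10), which is exactly the delay-bounding behaviour the proposal aims for.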
Figure 10 shows a simple example with 3 users of different priorities.
[Chart: SPI weight versus packet delay (0-35 ms) for Bronze, Silver and Gold users]

Figure 10. Example of SPI_weight parameterisation.
In this way, the initial weight of a user can be defined for when there is no delay (i.e. when the packets arrive at the scheduler), and then the weight after T0 ms. This T0 serves to control the minimum acceptable delay for this kind of packet. The T1 parameter is useful to define a maximum acceptable delay, beyond which the priority becomes very high so that the packet is sent as soon as possible. Note that it would also be possible to assign weight 0 to a very low priority service until its packets have been in the queue for some ms, thereby freeing up resources for the most important services in the network.
VII. CONCLUSIONS
This paper presents the results of a QoS Load Differentiation
trial performed in a real network environment with up to a
thousand customers using a mix of real life mobile broadband
applications.
Technical analysis shows a significant improvement in average throughput for Gold and Silver users during instants of contention (27% and 10% respectively), at the cost of a decrease for Bronze class users (-28%).
A significant reduction in latency was also obtained for Gold customers under contention, in the range of ~32-44% with respect to QoS differentiation not being applied. This can be a very significant improvement for the customer when using delay-sensitive applications.
In addition, the sensitivity of the attained differentiation to the customer class mix was analyzed, leading to the practical recommendation that the proportion of Gold users be kept below ~10% in order to preserve the premium service within given limits.
Finally, two enhanced algorithms were presented, aiming to better guarantee QoS Load Differentiation in scenarios of congested Iub and to further reduce high-priority users' delay. The foundations for the further improvement are based on the following main principles:

- Inter-cell prioritization on top of current intra-cell mechanisms.
- Bit rate decrease according to user priority when congestion occurs.
- Bit rate increase according to user priority when congestion is relieved.
- Configurable priority weight based on delay requirements and user priority.
In summary, under the current trend for broadband applications to go mobile and the consequent HSPA traffic growth, the use of advanced QoS Load Differentiation mechanisms in mobile networks becomes a significant opportunity for better and innovative customer propositions. This paper presents significant positive results from one of the first applications of such mechanisms, together with some practical conclusions as well as proposals for further improvements.
REFERENCES
[1] 3GPP TS 23.107 v5.13.0, "QoS Concept and Architecture".
[2] 3GPP TS 25.413, "UTRAN Iu interface RANAP signalling".
[3] 3GPP TS 25.433, "UTRAN Iub interface Node B Application Part (NBAP) signalling".
[4] 3GPP TS 25.427, "UTRAN Iur/Iub interface user plane protocol for DCH data streams".
[5] W. E. Leland, M. S. Taqqu, W. Willinger, and D. V. Wilson, "On the Self-Similar Nature of Ethernet Traffic (Extended Version)", IEEE/ACM Transactions on Networking, Vol. 2, No. 1, 1994, pp. 1-15.
[6] V. Paxson and S. Floyd, "Wide area traffic: The failure of Poisson modeling", IEEE/ACM Transactions on Networking, Vol. 3, No. 3, pp. 226-244, June 1995.
[7] R. G. Addie, M. Zukerman, and T. D. Neame, "Broadband traffic modeling: simple solutions to hard problems", IEEE Communications Magazine, Vol. 36, No. 8, 1998.
[8] R. G. Addie, M. Zukerman, and T. D. Neame, "Modeling superposition of many sources generating self similar traffic", Proc. ICC'99, Vancouver, Canada, pp. 387-391, June 1999.
[9] M. Andrews, K. Kumaran, K. Ramanan, A. Stolyar, P. Whiting, and R. Vijayakumar, "Providing quality of service over a shared wireless link", IEEE Communications Magazine, Vol. 39, No. 2, Feb 2001, pp. 150-154.