
INFSO-ICT-216284 SOCRATES D5.9

Final Report on Self-Organisation and its Implications in Wireless Access Networks

Contractual Date of Delivery to the CEC: 31.12.2010

Actual Date of Delivery to the CEC: 17.01.2010

Authors: Thomas Kürner (Editor), Mehdi Amirijoo, Irina Balan, Hans van den Berg, Andreas Eisenblätter, Thomas Jansen, Ljupco Jorguseski, Remco Litjens, Ove Linnell, Andreas Lobinger, Michaela Neuland, Frank Phillipson, Lars Christoph Schmelz, Bart Sas, Neil Scully, Kathleen Spaey, Szymon Stefanski, John Turk, Ulrich Türke, Kristina Zetterberg

Reviewers: Hans van den Berg, Chris Blondia, David Lister

Participants: ATE – IBBT – TNO – EAB – TUBS – VOD – NSN-D – NSN-PL

Workpackage: WP5 – Integration, demonstration and dissemination

Estimated person months: 14 PM

Security: PU

Nature: R

Version: 1.0

Total number of pages: 135

Abstract: This document summarises the outcome and achievements of the work on self-organisation and its implications in wireless access networks carried out by the SOCRATES project. After starting with the motivation and identifying the main drivers for introducing self-organisation in cellular LTE networks, the document elaborates on SOCRATES’ approach for developing SON solutions. Next, results achieved for a selected subset of individual SON functions covering self-configuration, self-optimisation and self-healing considered in the project are provided. In order to manage the simultaneous operation of SON functions, the concept of SON coordination is described and applied to selected pairs of use cases. Finally, findings on the implications of the introduction of self-organisation in LTE networks are described.

Keyword list: Network Planning Optimisation and Operations Process, Measurement-based Optimisation, Network Architecture, Self-Organisation


Executive Summary

Self-organisation is among the hottest topics in telecommunication network research and development, eagerly awaited by network operators. The three commonly distinguished domains of self-organisation are self-configuration, self-optimisation, and self-healing. The idea is to equip a network with the capability to automatically adapt its configuration / parameter settings to (changes in) the environment. The outcome is a self-organising network (SON). Self-organisation is believed to improve the efficiency of network operations, that is, to bring down the cost of operations while retaining or even improving network quality. SON functionalities are therefore receiving prominent attention in the standardisation of LTE and LTE Advanced by 3GPP [13] [36].

The SOCRATES project focuses on self-organisation in LTE radio networks. This document, the final deliverable of the project, gives an introduction to the project and a survey of its results. It also contains detailed descriptions of the individual SON functions developed throughout the project, as well as the findings from (computational) studies of the performance of SON functionalities both in isolation and when executed in parallel. The project’s results clearly demonstrate significant performance gains from self-organisation, but also show that the scale of the gains varies among SON functions. SON will obviously not be able to bypass physical constraints imposed by the network design. Introducing SON will have a big impact on network operations, and operators will control the SON activities by means of operator-specific policies.

A total of twenty-four applications of self-organisation (use cases) were determined in such a way that each of the three domains was well represented [11]. The cases were taken from 3GPP documents, NGMN recommendations or SON-related activities by partners from the project consortium. SON functionalities for the following selected use cases have been developed:

Self-configuration: Automatic Generation of Initial Parameters for NE insertion

Self-optimisation: Admission Control Parameter Optimisation, Packet Scheduling, Handover Optimisation, Load Balancing, Interference Coordination, Home eNodeB

Self-healing: Cell Outage Management

These eight use cases have been elaborated extensively: control parameters and performance metrics were determined, existing measurements and interfaces as well as reasonable extensions were collected, and sensible operator policies as well as assessment criteria for the impact of SON were defined. In addition, the cross-sectional use case X-Map Estimation has been elaborated, which makes it possible to produce raster-like data (e.g., path-loss data) from user equipment measurements, thereby reducing the need for drive tests.

For each of the topics, specific algorithmic solutions have been designed, implemented, and extensively analysed in simulation studies. The simulation studies used small-scale artificial as well as large-scale realistic scenarios. The mobility models range from simple random waypoint models over correlated group movements to detailed in-car mobility models that include traffic lights, acceleration and deceleration. The GPS positioning for the creation of X-Maps is based, among other things, on satellite visibility models for realistic urban environments.

SON functions will often operate concurrently. This may result in conflicts if SON functions pursue different goals when optimising control parameters (a control parameter conflict), or if the modification of a parameter by one SON function impairs the operation of other SON functions (an observability dependency). Such conflicts may cause sub-optimal performance and reduced user satisfaction. The avoidance and/or resolution of conflicts and dependencies is thus beneficial, and a SON Coordinator framework has therefore been developed. The coordinator shall ensure that the individual SON functions jointly work towards the same goal, formulated by the operator’s high-level objectives. This can be achieved by effectively and appropriately harmonising policies and control actions. Moreover, the coordinator is responsible for detecting and correcting undesired network behaviour in response to SON activities (guard functionality).

Over the coming years, SON functionalities will be gradually introduced into LTE networks. This will affect network planning and operations and requires further standardisation. Both topics are covered in the document as well.

The present line of development has produced noteworthy results. But with self-organisation about to be added to LTE radio networks on a large scale, many challenges remain in implementing a largely autonomous and truly dependable SON system in live networks. Moreover, three pressing examples of challenges beyond the immediate scope of the project are: (i) SON needs to be further developed and understood with respect to its impact on the customer, that is, service accessibility and service quality; (ii) operator policies need to be expressible and enforceable at that level; and (iii) the scope has to be enlarged from LTE alone to current multi-technology deployments and future heterogeneous network architectures.


Authors

TNO
Hans van den Berg: Phone +31 15 285 7031, Fax +31 15 285 7375, E-mail [email protected]
Ljupco Jorguseski: Phone +31 15 285 7154, Fax +31 15 285 7375, E-mail [email protected]
Remco Litjens: Phone +31 6 5191 6092, Fax +31 15 285 7375, E-mail [email protected]
Frank Phillipson: Phone +31 15 285 72 32, Fax +31 15 285 7375, E-mail [email protected]

ATE
Andreas Eisenblätter: Phone +49 30 609882222, Fax +49 30 609882299, E-mail [email protected]
Ulrich Türke: Phone +49 30 609882226, Fax +49 30 609882299, E-mail [email protected]

EAB
Kristina Zetterberg: Phone +46 10 7114854, Fax +46 10 7114990, E-mail [email protected]
Ove Linnell: Phone +46 10 7115136, Fax +46 10 7114990, E-mail [email protected]
Mehdi Amirijoo: Phone +46 10 7115290, Fax +46 10 7114990, E-mail [email protected]

TUBS
Thomas Kürner: Phone +49 (531) 391 2416, Fax +49 (531) 391 5192, E-mail [email protected]
Michaela Neuland: Phone +49 (0)531 391 2411, Fax +49 (0)531 391 5192, E-mail [email protected]
Thomas Jansen: Phone +49 (0)531 391 2486, Fax +49 (0)531 391 5192, E-mail [email protected]

IBBT
Kathleen Spaey: Phone +32 3 265.38.80, Fax +32 3 265.37.77, E-mail [email protected]
Bart Sas: Phone +32 3 265.38.80, Fax +32 3 265.37.77, E-mail [email protected]
Irina Balan: Phone +32 (0) 9 33 14975, Fax +32 (0) 9 33 14899, E-mail [email protected]

VOD
Neil Scully: Phone +44 1635 682380, Fax +44 1635 676147, E-mail [email protected]
John Turk: Phone +44 1635 676254, Fax +44 1635 676147, E-mail [email protected]

NSN-D
Lars Christoph Schmelz: Phone +49 89 5159-29585, Fax +49 89 5159-44-29585, E-mail [email protected]
Andreas Lobinger: Phone +49 89 5159-21920, Fax +49 89 5159-44-21920, E-mail [email protected]

NSN-PL
Szymon Stefanski: Phone +48 728 361 372, Fax +48 71 777 3873, E-mail [email protected]


List of Acronyms and Abbreviations

2G      2nd Generation
3G      3rd Generation
3GPP    3rd Generation Partnership Project
AGP     Automatic Generation of Default Parameters
ARP     Admission and Retention Priority
BSC     Base Station Controller
BTS     Base Transceiver Station
CAPEX   Capital Expenditure
CDR     Call Drop Ratio
CIO     Cell Individual Offset
COC     Cell Outage Compensation
COD     Cell Outage Detection
COM     Cell Outage Management
CSG     Closed Subscriber Group
DM      Domain Manager
EM      Element Manager
eNB / eNodeB  LTE NodeB (Radio Base Station)
E-UTRAN Evolved UMTS Terrestrial Radio Access Network
FTP     File Transfer Protocol
GoS     Grade of Service
GPS     Global Positioning System
GSM     Global System for Mobile Communications
HeNB    Home eNodeB
HetNets Heterogeneous Networks
HO      Handover
HOoff   Handover Offset
HPI     Handover Performance Indicator
HPO     Handover Performance Optimisation
HSBHOA  HPI-Sum Based Handover Optimisation Algorithm
Hys     Hysteresis
ICO     Interference Coordination
IPTV    Internet Protocol Television
IRAT    Inter Radio Access Technology
KPI     Key Performance Indicator
LB      Load Balancing
LTE     Long Term Evolution
MCS     Modulation and Coding Scheme
MSC     Mobile Switching Center
NE      Network Element
NGMN    Next Generation Mobile Networks
NM      Network Manager
O&M     Operations and Maintenance
OFDMA   Orthogonal Frequency Division Multiple Access
OMC     Operations and Maintenance Center
OPEX    Operational Expenditure
OTDOA   Observed Time Difference of Arrival
OTT     Over-the-Top
PPHR    Ping-Pong Handover Ratio
PRB     Physical Resource Block
QoS     Quality of Service
RACH    Random Access Channel
RAN     Radio Access Network
RAT     Radio Access Technology
RB      Resource Block
RET     Remote Electrical Tilt
RLF     Radio Link Failure
RMU     RAN Measurement Unit
RRM     Radio Resource Management
RSRP    Reference Signal Received Power
RSRQ    Reference Signal Received Quality
RSSI    Received Signal Strength Indication
SeNB    Source eNodeB
SINR    Signal-to-Interference-and-Noise Ratio
SO-HO   Self-Organised Handover Optimisation
SON     Self-Organising Network
TBHOA   Trend-Based Handover Algorithm
TeNB    Target eNodeB
TTT     Time-to-Trigger
UE      User Equipment
UMTS    Universal Mobile Telecommunications System
UTRAN   UMTS Terrestrial Radio Access Network


Table of Contents

1  Introduction .... 11
1.1  Drivers for Self-Organisation in Wireless Access Networks .... 11
1.2  Setting SON into the Context of Mobile Network Operations .... 13
1.3  Organisation of the Document .... 16
2  SOCRATES Approach for Developing SON Solutions .... 17
2.1  The SOCRATES project .... 17
2.2  SON requirements .... 18
2.3  SON assessment approach .... 20
2.4  SON development guidelines .... 22
3  Individual SON Functions .... 25
3.1  Use Cases – Situation in Today’s Networks .... 25
3.2  SON Solutions Developed in SOCRATES .... 27
3.3  Conclusions on Individual SON Functions .... 34
4  Simultaneous Operation of SON Functions .... 37
4.1  Integrated Use Cases .... 37
4.2  SON Coordination .... 40
4.3  Discussion and Open Issues .... 46
4.4  Conclusions on Integrated SON Functions .... 47
5  Implications of the Introduction of SON .... 48
5.1  Impact on Radio Network Planning and Operations .... 48
5.2  Impact on OPEX and CAPEX .... 51
5.3  Impact on Network Architecture .... 53
5.4  Impact on Standardisation .... 55
5.5  Impact on Regulation .... 55
5.6  Conclusions on Implications .... 55
6  Conclusions .... 57
7  Future Work .... 60
8  Detailed Description of Individual SON Functions .... 63
8.1  Admission Control Parameter Optimisation .... 63
8.2  Packet Scheduling Parameter Optimisation .... 69
8.3  Handover Parameter Optimisation .... 74
8.4  Load Balancing .... 83
8.5  Self-Optimisation of Home eNodeBs .... 90
8.6  Cell Outage Management .... 98
8.7  X-Map Estimation .... 103
8.8  Automatic Generation of Initial Parameters for eNodeB Insertion .... 109
9  Detailed Description of Integrated SON Functions .... 115
9.1  Handover Parameter Optimisation and Load Balancing .... 115
9.2  Macro and Home eNodeB Handover Parameter Optimisation .... 121
9.3  Admission Control and Handover Parameter Optimisation .... 127
References .... 133


1 Introduction

Self-organisation, which has been discussed in telecommunications and computer networks for several years, has become one of the hot topics in wireless networks. The introduction of SON in wireless networks is driven by the idea of enhancing the network by automatically adapting its configuration and parameter settings to (changes in) the environment, based on measurements inherently available in the system. This is a feature eagerly awaited by network operators and promoted through the operators’ organisation NGMN (Next Generation Mobile Networks). The FP7 project ICT-SOCRATES has taken up the task of studying concepts, methods and algorithms for self-organisation (SON) of wireless access networks, focusing on LTE (Long Term Evolution) radio networks. The three domains of self-organisation are self-configuration, self-optimisation, and self-healing. All three domains have been addressed in SOCRATES in various use cases selected from twenty-five use cases defined by 3GPP and NGMN. This document is the publicly available final report of the project. It summarises the main project results and elaborates on the implications of SON in wireless access networks. The first chapter starts with the motivation to develop and implement self-organisation features in cellular wireless networks, sets SON into the context of wireless access network engineering, and informs the reader about the structure of the remaining document.

1.1 Drivers for Self-Organisation in Wireless Access Networks

The appearance of SON methods in cellular wireless networks can be explained from both a technically driven and a market-driven perspective. The technically driven view considers SON as a quite “natural” evolution of the degree of automation in cellular networks. The market-driven perspective starts from “hard” economic and operational facts and represents more the view of an operator. The first view is presented in sub-section 1.1.1, whereas sub-section 1.1.2 deals with the second view and lists the main drivers for the introduction of SON.

1.1.1 Evolution of the Degree of Automation in Cellular Networks – A Technically Driven Perspective

With the introduction of second-generation mobile systems like GSM, numerous radio planning tools were developed in the early 1990s, which assisted the mobile operators in planning their networks. In those days most networks consisted of several hundred (or, as in Germany, several thousand) base stations. The only service to be considered in radio planning was speech, and as a consequence the complexity of the networks was low compared to the current situation. The most important task to be handled at that time was coverage prediction, see, e.g., [1]. Subsequently, increasing subscriber numbers led to higher capacity demands, and capacity became an additional target of radio planning. At that time sophisticated frequency planning algorithms appeared as a first step towards automatic radio planning [2]. The feedback of measurement data into the planning process was largely restricted to received power levels for the calibration of prediction models.

The introduction of UMTS increased the number of services and the complexity in such a way that automatic planning methods became indispensable in radio planning tools [3][4]. However, the algorithms and tools are still based on simulations using network data in conjunction with a geographic database. All radio and network configuration parameter settings are determined and optimised based on these algorithms and tools, and finally implemented in the network. However, subsequent network optimisation steps based on measurements are typically required. Either the measurements or the changed parameter settings or both have to be fed back into the planning process, see Figure 1. Note that network optimisation tasks are frequently split between network planning and operations, which at many operators also belong to different departments in the organisation.


Figure 1: Typical planning and operation loop of a cellular GSM radio network [5]

In the whole network planning, roll-out, optimisation and operations feedback loop, various tools, interfaces, databases and even different departments within the network operator’s organisation are involved. This makes the implementation of a proper feedback loop a non-trivial task.

In addition, some planning and optimisation tasks, involving for example traffic and mobility modelling, also require measurements of performance statistics from the Operation and Maintenance Centre (OMC). Other optimisation tasks, like interference minimisation, could certainly benefit from access to available measurements in the network. Such ideas led to the concept of measurement-based optimisation, which is currently the subject of a dedicated sub-working group in COST Action 2100 [6]. Still, solutions to guarantee a consistent set of data in the whole planning and optimisation loop are required [5][7].

The development of self-organising methods in future cellular systems like LTE has the potential to overcome some of the drawbacks observed in current systems, like data inconsistency and sub-optimal use of, or even limited access to, measurement data, by integrating the whole feedback loop directly into the system. This avoids the involvement of too many different interfaces, databases and tools. Key achievements of introducing self-organising methods are a performance increase and the reduction of human involvement, which is always a source of inconsistency.

1.1.2 Main Drivers for the Introduction of SON – A Market-Driven Perspective

Whereas the previous view on SON is an evolutionary one, describing the development of planning, optimisation and operations of radio networks, the view in the current sub-section is derived from the operational needs of the network operators. There are four main drivers demanding the introduction of SON techniques: (i) mastering complexity, (ii) effectuating substantial OPEX reductions, (iii) optimising network efficiency and service quality, and (iv) enhancing robustness/resilience in case of failures.

The need to master complexity originates from the appearance of new advanced technologies. With GSM, UMTS and LTE, three different generations of radio access technologies (RATs), all of them interoperating, will soon co-exist at most network operators. In addition, in each single RAT hundreds of parameters have to be configured, making it impossible to find optimum settings by manual or semi-manual adjustment. With the increased complexity, even suboptimal settings require increasing operational effort. The introduction of femto cells brings a new dimension to network maintenance: the installation of femto cells is completely out of the control of the operator, although femto cells interact strongly, via interference, with the rest of the network. Hence, the introduction of femto cells turns the availability of SON into a pre-requisite.

Due to the establishment of flat rates and the increasing traffic demands and data rates, operators are urged to effectuate substantial OPEX reductions. This is mainly because the increase in revenue does not follow the increase in total data traffic [12], an effect also known as the “too much data paradox”. The only possibility to stay profitable is to “produce the bits/s” at much lower cost, i.e., to reduce OPEX substantially. To do so, minimising human involvement is one means, for which SON again is a pre-requisite.

The increase in data traffic brings up the necessity to optimise network efficiency and service quality. In the light of the limited spectrum resources, and with the boundary condition of minimising CAPEX, the only way to increase network capacity while maintaining or even improving service quality is to apply advanced optimisation techniques. Such techniques critically rely on the availability of input data. A vast amount of as yet unexplored measurement data inherently present in the system, e.g., collected in the OMC, can be made available as input to the optimisation process. Heterogeneous planning, optimisation and configuration tools, frequently spread over different organisational units at the operator, increase the complexity of feeding these data back to the optimisation algorithms. SON seems to be an excellent opportunity to enable this feedback in a smart way.

System failures and misconfigurations are a source of unsatisfied customers, loss of turnover and additional OPEX to fix the problems. Therefore, enhancing robustness and resilience in case of failures is another area where SON is a clear enabler. These tasks can be addressed by applying self-configuration and self-healing concepts. For example, in self-healing the base stations surrounding a failed site can be automatically reconfigured in order to carry at least part of the traffic of this site. This keeps both the number of unsatisfied users and the loss of turnover at a lower level.

The need for the introduction of SON was articulated by the Next Generation Mobile Networks (NGMN) alliance as early as May 2007 [13] and December 2008 [14], where use cases were defined. The purpose of these documents was to reflect a common understanding among NGMN operators of SON and the required use cases.

1.2 Setting SON into the Context of Mobile Network Operations

Self-organisation of mobile access networks finds its application in various engineering stages of mobile network operations, namely the planning, deployment, optimisation and maintenance stages, as indicated in Figure 2.

Figure 2: Self-organisation affects all aspects of mobile network operations

This section briefly describes how SON can specifically assist in these engineering stages and concludes with a more general and unified view of SON in the context of mobile network operations.

1.2.1 Planning and Deployment

Planning and deployment can be done more efficiently by the introduction of self-configuration features. To this end, different self-configuration features can either assist in the network planning process by contributing valuable input information, or automate the configuration of cells, sites and backhaul links. One example of the former is the automated measurement-based tuning of propagation, traffic or user distribution models, while another advanced SON function is the intelligent selection of new site locations. Based on automated performance measurements (coverage, accessibility, service quality), the foreseen future SON system is not only able to optimise radio parameters to best serve the offered traffic, but also to recognise when the optimisation potential has been fully exploited and the addition of new hardware (sites, antennas, channel elements) is required to handle the offered traffic in accordance with the operator-specified performance targets. By accompanying performance measurements with some degree of location information, an intelligent suggestion can be made on what type of additional hardware needs to be deployed and where (new site deployment) or at what site (addition of antennas, channel elements, …) this is most effective. Another related use case where SON will contribute is the initial configuration of base station parameters, including, e.g., tilt and transmit powers. Upon deployment of a new site, the SON system can determine the initial settings of such parameters as well as govern the soft integration of new elements into the existing network, to ensure smooth introduction and prevent disruptive effects.
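To make the idea of automatically generated initial parameters more concrete, the following minimal Python sketch derives a first-guess neighbour list and transmit power for a newly inserted eNodeB from the positions of surrounding sites. All names, thresholds and the simple distance-based heuristic are illustrative assumptions and do not represent the actual SOCRATES algorithm (described in Section 8.8).

```python
import math

def initial_parameters(new_site, existing_sites, max_neighbours=8,
                       p_min_dbm=37.0, p_max_dbm=46.0):
    """Hypothetical sketch: derive initial settings for a newly inserted eNodeB.

    new_site / existing_sites: dicts with 'id', 'x', 'y' in metres.
    Returns an initial neighbour list and a transmit power that scales with the
    distance to the closest neighbour (a sparser area gets more power).
    """
    def dist(a, b):
        return math.hypot(a['x'] - b['x'], a['y'] - b['y'])

    ranked = sorted(existing_sites, key=lambda s: dist(new_site, s))
    neighbours = [s['id'] for s in ranked[:max_neighbours]]

    # Simple assumed heuristic: interpolate power between p_min and p_max for
    # inter-site distances between 200 m and 2000 m.
    d_closest = dist(new_site, ranked[0]) if ranked else 2000.0
    frac = min(max((d_closest - 200.0) / 1800.0, 0.0), 1.0)
    tx_power_dbm = p_min_dbm + frac * (p_max_dbm - p_min_dbm)

    return {'neighbour_list': neighbours, 'tx_power_dbm': round(tx_power_dbm, 1)}

# Example usage with fictitious coordinates:
sites = [{'id': f'eNB{i}', 'x': 1000.0 * i, 'y': 0.0} for i in range(1, 5)]
print(initial_parameters({'id': 'eNB_new', 'x': 500.0, 'y': 300.0}, sites))
```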

Besides these examples, within SOCRATES and NGMN, other SON use cases have been defined in the planning/deployment category, including automated installation of hard- and software, self-testing, automatic network authentication and transport network parameter setup.



1.2.2 Optimisation

A significant share of SON attention is devoted to network optimisation (regarding service quality and network efficiency), most of which is closely related to radio resource management. Self-optimisation typically places an automatic optimisation layer on top of ‘fixed’ radio parameters that were traditionally tuned manually, either on a regular basis or based on specific triggers. These parameters may be fundamental radio parameters such as antenna tilt or azimuth, or they may be parameters that specify an existing RRM mechanism. Examples of the latter type are the self-optimisation of admission/congestion/handover control parameters or the optimisation of packet scheduling. Two important mechanisms that are generally considered self-optimisation (but may also be categorised as new radio resource management mechanisms) are inter-cell interference coordination and load balancing. In an advanced form, the former effectively extends the packet scheduling and power control functions from the intra- to the inter-cell domain: decisions regarding which user is assigned which resource blocks (and the corresponding transmit powers) are then coordinated among neighbour cells instead of being made within one cell only.

Load balancing can be implemented in different ways, either by means of directed handovers of sessions from one cell to another (~ radio resource management) or by adapting handover control parameters and hence indirectly moving traffic between cells (~ self-optimisation).
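As a rough illustration of the second variant, the sketch below adapts the cell individual offset (CIO) of a cell pair based on their load imbalance, so that handovers towards the less loaded neighbour are triggered earlier. The step size, limits and load measure are illustrative assumptions, not the load-balancing algorithm developed in the project (see Section 8.4).

```python
def update_cio(load_serving, load_target, cio_db, step_db=0.5,
               cio_min_db=-6.0, cio_max_db=6.0, imbalance_threshold=0.2):
    """Hypothetical load-balancing step for one cell pair.

    load_*: cell loads in [0, 1] (e.g. PRB utilisation).
    cio_db: current cell individual offset applied for handovers from the
            serving cell towards the target cell.
    If the serving cell is clearly more loaded, the offset is raised so that
    handovers towards the target trigger earlier (traffic is pushed away);
    if the target is clearly more loaded, the offset is lowered.
    """
    imbalance = load_serving - load_target
    if imbalance > imbalance_threshold:
        cio_db += step_db
    elif imbalance < -imbalance_threshold:
        cio_db -= step_db
    return min(max(cio_db, cio_min_db), cio_max_db)

# Example: serving cell at 90% load, neighbour at 40% -> offset increases to 0.5 dB.
print(update_cio(load_serving=0.9, load_target=0.4, cio_db=0.0))
```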

1.2.3 Maintenance

In terms of enhancing network maintenance, SON contributes by, e.g., increasing the ease of hard- and software upgrades, by automatically monitoring the proper working of network elements and by providing failure recovery and/or compensation mechanisms. An interesting use case in this context is the self-healing mechanism of cell outage compensation. Upon detecting the outage of a site or cell (a valuable SON mechanism in itself), cell outage compensation automatically adjusts radio parameters in neighbouring cells in order to mitigate the outage-induced coverage and performance loss. This way, the effects on the customer experience are minimised in the period until the underlying hardware/software problems are (possibly manually) resolved.
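A minimal sketch of the compensation idea, under purely illustrative assumptions about which parameters are adjusted and by how much, is given below: the neighbours of a cell detected to be in outage receive a small power boost and tilt reduction, and their original settings are kept so that they can be restored after the repair.

```python
def compensate_outage(cells, failed_cell_id, power_boost_db=2.0, tilt_reduction_deg=1.0):
    """Hypothetical cell outage compensation step.

    cells: dict mapping cell id -> {'tx_power_dbm', 'tilt_deg', 'neighbours'}.
    Returns a backup of the neighbours' original settings so they can be
    restored once the outage has been repaired.
    """
    backup = {}
    for nb_id in cells[failed_cell_id]['neighbours']:
        nb = cells[nb_id]
        backup[nb_id] = {'tx_power_dbm': nb['tx_power_dbm'], 'tilt_deg': nb['tilt_deg']}
        nb['tx_power_dbm'] += power_boost_db                      # extend neighbour coverage
        nb['tilt_deg'] = max(0.0, nb['tilt_deg'] - tilt_reduction_deg)
    return backup

def restore_after_repair(cells, backup):
    """Restore the original parameters once the failed cell is back in service."""
    for nb_id, params in backup.items():
        cells[nb_id].update(params)

# Example with fictitious cells:
cells = {
    'A': {'tx_power_dbm': 43.0, 'tilt_deg': 6.0, 'neighbours': ['B', 'C']},
    'B': {'tx_power_dbm': 43.0, 'tilt_deg': 6.0, 'neighbours': ['A', 'C']},
    'C': {'tx_power_dbm': 43.0, 'tilt_deg': 6.0, 'neighbours': ['A', 'B']},
}
backup = compensate_outage(cells, failed_cell_id='A')   # boost B and C
restore_after_repair(cells, backup)                      # undo once A is repaired
```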

In the above discussion, SON has been considered in relation to the different stages of network operations, viz. planning, deployment, optimisation and maintenance. In the next subsection, we describe a unified view on the future role and working of the three key elements of SON: self-configuration, self-optimisation and self-healing.

1.2.4 Unified View

In traditional network operation, the various engineering stages described in the previous section are largely treated as sequentially executed tasks. At many operators they are handled as more or less isolated tasks, with responsibilities frequently split among different organisations (see Section 1.1.1). With the introduction of SON these tasks will become much more interrelated. To describe this interrelation, this section provides a unified view of SON.

Future networks will require minimal human involvement in the network planning and optimisation tasks. Newly added base stations are self-configured in a ‘plug-and-play’ fashion, while existing base stations continuously self-optimise their operational algorithms and parameters in response to changes in network, traffic and environmental conditions. The adaptations are performed in order to provide the targeted service availability and quality as efficiently as possible. In the event of a cell or site failure, self-healing methods are triggered to resolve the resulting coverage/capacity gap to the extent possible. A unified picture of these distinct components of self-organisation is given in Figure 3.


Figure 3: Self-optimisation, -configuration and -healing processes in future mobile networks

Consider a fully configured and operational radio access network and, somewhat arbitrarily, start at the depicted ‘measurements’ phase. This phase indicates a continuous activity where a multitude of measurements are collected via various sources, including network counters and probes. These raw measurements of, e.g., radio channel characteristics, traffic and user mobility aspects, are processed in order to extract relevant information for the various related self-optimisation tasks. The required format, accuracy and periodicity of the delivered information depend on the specific mechanism that is to be self-optimised. In the ‘self-optimisation’ phase, intelligent methods apply the processed measurements to derive an updated set of radio (resource management) parameters, including, e.g., antenna parameters (tilt, azimuth), power settings (incl. pilot, control and traffic channels), neighbour lists (cell IDs and associated weights), and a range of radio resource management parameters (admission/congestion/handover control and packet scheduling). In case the self-optimisation methods appear to be incapable of meeting the performance objectives, capacity expansion is indispensable, and timely triggers with accompanying suggestions for human intervention are delivered, e.g., in terms of a recommended location for a new site.
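The skeleton below illustrates in Python how such a continuous loop of measurement collection, KPI evaluation, parameter updates and escalation to human intervention might be organised. The function names, interfaces and the retry limit are assumptions for illustration only; they do not describe the architecture developed in the project.

```python
def self_optimisation_cycle(collect_measurements, evaluate_kpis, optimisers,
                            apply_updates, kpi_targets, max_attempts=3):
    """Hypothetical skeleton of the continuous self-optimisation loop.

    collect_measurements(): returns processed measurement data.
    evaluate_kpis(data): returns a dict of current KPI values.
    optimisers: functions mapping measurement data -> dict of parameter updates.
    apply_updates(updates): pushes the new parameter values to the network.
    Returns 'targets met', or a capacity-expansion trigger when parameter
    tuning alone cannot reach the operator's KPI targets.
    """
    for _ in range(max_attempts):
        data = collect_measurements()
        kpis = evaluate_kpis(data)
        if all(kpis[name] >= target for name, target in kpi_targets.items()):
            return {'status': 'targets met'}
        updates = {}
        for optimiser in optimisers:
            updates.update(optimiser(data))
        apply_updates(updates)
    return {'status': 'capacity expansion required',
            'hint': 'suggest new site / additional hardware to the operator'}
```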

The ‘self-configuration’ phase, depicted as an external arm reaching into the continuous self-optimisation cycle, is triggered by ‘incidental events’ of an ‘intentional nature’. Examples are the addition of a new site and the introduction of a service or a new network feature. These upgrades generally require an initial (re)configuration of a number of radio parameters or resource management algorithms, e.g., pilot powers and neighbour lists. These have to be set prior to operations and before they can be optimised as part of the continuous self-optimisation process. Triggered by ‘incidental events’ of a ‘non-intentional nature’, such as the failure of a cell or site, ‘self-healing’ methods aim to resolve the loss of coverage/capacity induced by such events to the extent possible. This is done by appropriately adjusting the parameters and algorithms in surrounding cells. Once the actual failure has been repaired, all parameters are restored to their original settings.

The degree of self-organisation that is deployed determines the residual tasks that remain for network operators. In an ideal case, the operator merely needs to feed the self-organisation methods with a number of policy aspects, e.g., its desired balance in the apparent trade-offs that exist between the conflicting coverage, capacity, quality and cost targets. The self-organisation methods then feed the operator with (i) timely triggers for capacity expansion in the form of new sites, intelligently suggesting a good location, or other hardware issues, e.g., new channel boards, a more powerful amplifier, or a change in mechanical tilt (note that electrical tilt can be adjusted automatically); and (ii) immediate alarms in case of network element failures. Until this ideal setting is achieved, we foresee a gradual introduction of self-organisation in radio access networks, characterised by incremental upgrades which are implemented and monitored. This way, the implemented measures can be adequately assessed, the impact of potential ‘teething troubles’ can be limited, and the operator’s confidence in handing over control to automated algorithms increases.
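As an illustration of what such operator policy input could look like, the snippet below encodes a hypothetical policy with performance targets, a trade-off priority order and escalation rules, together with a small helper that checks measured KPIs against the targets. All names and values are assumptions for illustration; they do not follow a format defined by SOCRATES or 3GPP.

```python
# Hypothetical operator policy fed to the SON system (illustrative only).
operator_policy = {
    'targets': {
        'coverage_probability':      {'min': 0.95},
        'cell_edge_throughput_mbps': {'min': 1.0},
        'call_drop_ratio':           {'max': 0.02},
    },
    'trade_off_priority': ['quality', 'coverage', 'capacity', 'cost'],
    'escalation': {
        'suggest_capacity_expansion_after_days': 7,
        'alarm_on_cell_outage': True,
    },
}

def targets_met(kpis, policy=operator_policy):
    """Check measured KPIs against the policy targets (illustrative helper)."""
    for name, bound in policy['targets'].items():
        value = kpis[name]
        if 'min' in bound and value < bound['min']:
            return False
        if 'max' in bound and value > bound['max']:
            return False
    return True

print(targets_met({'coverage_probability': 0.97,
                   'cell_edge_throughput_mbps': 1.4,
                   'call_drop_ratio': 0.01}))   # -> True
```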


1.3 Organisation of the Document

The document consists of nine chapters and is practically split into two parts. Chapters 1 to 7 form the first part, intended to provide the reader with an overview of the major principles, approaches and methodologies applied, as well as the main results achieved in SOCRATES, including brief descriptions of the use cases. For readers interested in more detailed results for the various use cases, the second part of the document, consisting of Chapters 8 and 9, provides an extended abstract for each of them.

The detailed organisation of the document is as follows: After the overall introduction to self-organisation in mobile access networks in the current chapter, the document elaborates on SOCRATES’ approach for developing SON solutions in Chapter 2. In Chapter 3, results achieved for a selected subset of individual SON functions covering self-configuration, self-optimisation and self-healing are provided. Simultaneous operation of SON functions is covered in Chapter 4, whereas the findings on the implications of the introduction of self-organisation in LTE networks are described in Chapter 5. In Chapters 6 and 7, respectively, conclusions are drawn and an outlook on future work is given. Detailed descriptions of individual SON functions are presented in Chapter 8, followed by descriptions of integrated SON functions in Chapter 9.


2 SOCRATES Approach for Developing SON Solutions

In this chapter, we describe the approach followed in the SOCRATES project for the development of SON solutions. First, in Section 2.1, a high-level overview of the work done in the project is given. Next, Section 2.2 addresses the design requirements for self-organisation methods and algorithms, and Section 2.3 discusses the quantitative assessment and comparison of SON solutions. Finally, in Section 2.4, concrete, step-wise guidelines for the development of SON solutions are given.

2.1 The SOCRATES project

The main goal of the SOCRATES project is the development, evaluation and demonstration of concepts, methods and algorithms for self-configuration, self-optimisation and self-healing in LTE networks. The SOCRATES consortium is well suited to achieve this goal. It consists of seven partners from six European countries, see Figure 4, well balanced between industry and academia. On the industry side Ericsson (Sweden) and Nokia Siemens Networks (Germany, Poland), as infrastructure manufacturers, and Vodafone (UK), as mobile network operator, were involved. They have been complemented by the SME atesio (Germany), which brought in its expertise on algorithms for planning and optimisation of large-scale radio networks. Representatives of the academic side and research institutes are TNO (The Netherlands), IBBT (Belgium) – participating through its member universities in Antwerp and Ghent – and TU Braunschweig (Germany).

Figure 4: Partners in the SOCRATES project

The project was split into five work packages (WPs), see Figure 5. Besides the project management (WP1), these work packages are “Use cases, requirements and framework for self-organisation” (WP2), “Development of methods for self-optimisation” (WP3), “Development of methods for self-configuration and self-healing” (WP4), and “Integration, demonstration and dissemination” (WP5).

As a starting point of the project activities, the self-organisation use cases defined by NGMN and 3GPP were taken up in WP2. These use cases were described in further detail and complemented by a number of additional self-organisation use cases, see deliverable D2.1 [11]. Next, the requirements for self-organisation solutions were analysed in detail (both from a technical and a business point of view), and criteria and methodologies to assess the solutions for self-organisation were described, together with the reference scenarios to be used for assessing the developed solutions. In addition, initial investigations were made regarding the network architectural implications. Altogether, this work led to a framework for the development of self-organisation methods and algorithms. The requirements for self-organisation solutions and the assessment approach are briefly addressed in Section 2.2 and Section 2.3, respectively; for details we refer to deliverables D2.2 [20] and D2.3 [49].


Figure 5: Schematic view of the overall SOCRATES project structure and organisation of work

The next phase in the project, mainly covered by WP3 and WP4, comprised the actual development of methods and algorithms for a selection of the use cases defined in WP2. For quantitative investigations of these use cases, e.g., validation, assessment and tuning of the self-organisation methods and algorithms, dedicated simulation tools developed in the project were used. Initially, individual (‘stand-alone’) use cases were considered; at a later stage, building on the obtained results and experiences, also integrated use cases (with multiple interacting SON functionalities) were studied. Related to the latter a general framework for the coordination of multiple self-organised functionalities has been developed (“SON Coordinator”). The main results of the studies in WP3 and WP4 are reported in Chapters 3, 4, 8 and 9 of the present deliverable; details are provided in internal deliverables. In the last phase of the project, mainly covered by WP5, results obtained in WP3 and WP4 were further integrated, simulation tools were ‘upgraded’ for demonstration of the developed self-organisation methods, and the implications of self-organisation on network planning and operations were investigated. These implications are addressed in Chapter 5 of the present deliverable.

During the three years of the project, WP5 also coordinated the exchange of information with related activities in 3GPP and NGMN, and took care of the dissemination of the project results. An overview of SOCRATES’ impact on standardisation and its dissemination activities, including the demonstrations, can be found in deliverable D5.8 [52].

2.2 SON requirements

In this section the main requirements on solutions for self-organisation are briefly addressed. These requirements give guidance to the development of self-organisation methods and algorithms. We distinguish between technical requirements and business requirements. The technical requirements are categorised according to criteria like performance and complexity, stability, robustness, time scale of operation, interaction with other functionalities, architecture and scalability, and the required inputs (counters, measurements). The business requirements include requirements related to cost efficiency, and requirements for ensuring optimal benefit when applying solutions for self-organisation during the deployment of LTE networks.

Here we restrict ourselves to a brief overview of the requirements for self-organisation solutions. For more details and further discussions we refer to SOCRATES deliverable D2.2 [20].

2.2.1 Technical requirements

For each use case considered in the SOCRATES project a number of technical requirements have been specified in detail, see Chapter 2 and Appendix A of deliverable D2.2 [20].

On a more general level these technical requirements are as follows:

Performance and complexity requirements: For all use cases (SON functions), a key requirement is that an appropriate balance should exist between the network performance gains established by adding self-optimisation, self-configuration or self-healing, and the implementation complexity; this is a clear trade-off. Implementation complexity can be measured by at least the following indicators:

o Signalling overhead caused by self-organisation algorithms and required measurements

o Calculation effort for algorithms, required computing power and memory
o Storage requirements
o Load on the radio link, especially for user equipment measurements

Stability requirements: For all SON functions, it is required that they converge to a stable state or solution within the given timing requirements.

Robustness requirements: Missing, wrong or corrupted input (measurements) should be detected and corrected (or omitted) by the algorithms, so that they still provide satisfactory output parameters; a minimal input-sanitising sketch is given after this list.

Timing requirements: Per use case two different timing requirements should be considered:

o Time scale of operation, i.e., the time frame that is regarded for the analysis of the measurements.

o Speed of adjustment: required time scale for an algorithm to converge to a solution with a new parameter set after having been triggered

No general timing requirements can be given; the use cases have individual requirements. Regarding the time scale of operation, for example, the requirements may range from milliseconds to seconds, hours or even days.

Interaction requirements: Especially for self-optimisation, it is necessary to establish coordination between SON functions that modify the same parameter settings as output of their algorithms, or that base parameter adjustments on the same ‘observables’ (e.g., particular performance parameters); a conflict-detection sketch is given after this list.

Architectural and scalability requirements: Will the algorithms be centralised or decentralised, and what requirements will that bring in terms of architecture, interfaces, and scalability? E.g., for some use cases (usually with a scope of several network elements or the entire network), centralised monitoring, data storage, and data analysis entities are required; for other use cases (usually with a scope of single or few network elements), appropriate computing power, memory and storage capacity at the network elements are required.

Required inputs (performance counters and measurements): The requirements regarding the inputs for the algorithms (e.g., traffic and performance measurements) are, obviously, strongly use case dependent.
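The sketch below illustrates two of the requirements above with minimal, purely hypothetical Python examples: an input-sanitising guard for the robustness requirement, and a pairwise conflict/dependency check of the kind a SON coordinator could perform for the interaction requirement. Function names, thresholds and the example SON functions are assumptions for illustration, not SOCRATES specifications.

```python
from statistics import median

def sanitise_measurements(samples, last_valid=None,
                          plausible_range=(-140.0, -40.0), outlier_factor=3.0):
    """Robustness: discard missing or implausible inputs (e.g. RSRP in dBm)
    and reject outliers around the median; fall back to the last valid value."""
    lo, hi = plausible_range
    valid = [s for s in samples if s is not None and lo <= s <= hi]
    if not valid:
        return [last_valid] if last_valid is not None else []
    med = median(valid)
    spread = median(abs(s - med) for s in valid) or 1.0
    return [s for s in valid if abs(s - med) <= outlier_factor * spread]

def find_conflicts(son_functions):
    """Interaction: flag SON function pairs that write the same parameter
    (control parameter conflict) or whose output parameters feed another
    function's observables (observability dependency)."""
    names, conflicts = list(son_functions), []
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            fa, fb = son_functions[a], son_functions[b]
            if fa['controls'] & fb['controls']:
                conflicts.append((a, b, 'control parameter conflict'))
            elif fa['controls'] & fb['observes'] or fb['controls'] & fa['observes']:
                conflicts.append((a, b, 'observability dependency'))
    return conflicts

# Example: a missing value and an implausible +20 dBm sample are dropped ...
print(sanitise_measurements([-95.0, None, -93.5, 20.0, -96.2]))
# ... and two illustrative SON functions clash on the 'hysteresis' parameter.
functions = {
    'handover_optimisation': {'controls': {'hysteresis', 'time_to_trigger'},
                              'observes': {'ping_pong_ratio', 'radio_link_failure_rate'}},
    'load_balancing':        {'controls': {'cell_individual_offset', 'hysteresis'},
                              'observes': {'cell_load'}},
}
print(find_conflicts(functions))
```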

2.2.2 Business requirements

In addition to considering the technical requirements on the introduction of self-organisation, it is also important to consider the business requirements. The business requirements can be divided roughly into two categories: cost efficiency and network deployment. Both are briefly discussed below. For more details we refer to Chapter 3 of SOCRATES deliverable D2.2 [20].

Cost efficiency
Regarding cost efficiency, the following requirements for self-organisation solutions are the most important ones to be considered:

OPEX reduction: As one of the main drivers for the introduction of SON, self-organisation solutions should significantly contribute to the reduction of manual effort in the roll-out and operation of a network.

CAPEX reduction: By more efficiently using radio resources, for example fewer base stations may be required.

It is expected that OPEX reductions will be more significant than CAPEX reductions. Moreover, adding SON functionality will potentially increase the cost of equipment, due to the need for additional hardware and/or software. Therefore, SON impact on costs of all parts of the mobile network should be considered, including terminals (functionality to support SON could increase hardware or software costs, and potentially decrease battery life), base stations (additional hardware or software could increase costs) and O&M systems (additional hardware or software could increase costs).

Network deployment
The introduction of self-organisation should also improve (or at least not degrade) network and service deployment. The resulting requirements in this category are often closely related to OPEX reduction. For example, requirements regarding the following deployment aspects should be considered:


Speed up of the roll-out of networks: When a new network is first rolled out, there are usually many problems to solve before satisfactory performance is achieved. SON solutions should reduce these problems, and reduce the time required before cells go live.

Easy deployment of new services: New services may, e.g., have new QoS requirements, and SON solutions should be able to support these, with minimum network (re-)configuration required.

End user benefits: Users should experience (sufficiently) high Grade-of-Service and Quality-of-Service.

Availability of SON functionality should also match the requirements of the stage of roll-out. In the initial stage, the networks will have a low load; at that point, easy deployment of the network is most important. As the load on the network increases, it becomes more important to have functionality that efficiently handles high load. Certain deployment trends should also be considered. For example, network sharing (between different operators) is becoming increasingly common, and this should be taken into account when developing SON solutions.

2.3 SON assessment approach

This section focuses on the assessment and comparison of self-organisation solutions. We will first briefly address the metrics to be used in the evaluation and comparison of self-organisation algorithms. Next, the use of benchmarking as an assessment approach for self-organisation solutions is discussed. For a more detailed treatment of the assessment of SON solutions we refer to SOCRATES deliverable D2.3 [49].

2.3.1 Metrics

Three categories of metrics are considered that aid in the evaluation and comparison of self-organisation algorithms:

Network performance metrics – This includes metrics quantifying GoS/QoS (e.g. call dropping ratio, packet loss ratio, throughput, fairness), coverage and capacity.

Business level metrics – High level models for determining cost- and revenue-related metrics have been developed, in particular for CAPEX, OPEX and revenue. A revenue model is needed in addition to the OPEX and CAPEX models in order to assess the monetary advantage of ‘otherwise missed value’ when applying, e.g., self-healing to enhance service availability in case of cell outages.

Other metrics – This includes e.g. metrics expressing stability, complexity and algorithm convergence time. Most of these metrics were also considered as technical requirements for SON solutions, see the previous section. They are subordinate to the network performance metrics in the sense that e.g., slow algorithm convergence will have a negative impact on the GoS/QoS, but an important part of their assessment will be whether the technical requirements are met by the developed algorithms.

Elaborations and discussions on these metrics used for assessment of self-organisation solutions are provided in Section 2.1 of deliverable D2.3 [49].

2.3.2 Benchmarking

To enable an actual assessment and comparison of SON solutions, the evaluation metrics addressed in the previous subsection should be estimated for situations with and without SON. An approach to such benchmarking is described in Section 2.2 of deliverable D2.3 [49]. Below we will describe this approach from the perspective of a self-optimisation use case.

The starting point is a specific situation in terms of, e.g., the propagation environment, service mix, traffic characteristics and spatial traffic distribution. These characteristics may, however, fluctuate over time, leading to algorithm-specific radio parameter adjustments. For such a scenario, the achievements of the different developed self-optimisation algorithms comprise measures of, e.g., performance, complexity and CAPEX, and are further characterised by the estimated residual OPEX, reflecting the operational effort still required after the self-optimisation solution is in place. Regarding the latter, it is noted that a more advanced self-optimisation algorithm is typically likely to require less human involvement, whereas a relatively simple self-optimisation algorithm may require, e.g., regular tuning, re-parameterisation or performance verification by operational experts. Regarding the achievable performance and the required CAPEX, it is noted that a performance enhancement brought about by a self-optimisation algorithm may be exploited in terms of a reduced investment in network resources (e.g. sites, spectrum, channel elements), i.e. a reduced CAPEX. In that sense, performance and CAPEX are ‘interchangeable gains’.


In Figure 6, example values of the performance metrics (e.g., obtained through simulations) are shown. Observe that, e.g., self-optimisation algorithm SOA achieves the highest performance, which can be exploited to achieve the lowest CAPEX, but in order to achieve this it requires a lot of measurements (~ complexity). In contrast, algorithm SOD is significantly less complex, but consequently achieves worse performance and CAPEX.

Figure 6: Example values of performance metrics for different self-optimisation methods

In general, it is hard to compare SOA through SOD given the conflicting performance objectives, e.g., SOA outperforms SOD in terms of CAPEX, but SOD outperforms SOA in terms of complexity. One approach to enforce a strict overall ranking is to weigh and combine the different measures into a utility function and rank the algorithms based on the obtained utility values. Another approach is to select a single target measure, place constraints on the other measures, and rank only those algorithms that meet the constraints according to the target measure.
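To make the two ranking approaches concrete, the following Python sketch ranks four hypothetical algorithms once via a weighted utility function and once via a constrained single-target ranking. All metric scores and weights are illustrative assumptions (higher scores are assumed to be better) and are not results from the SOCRATES studies.

```python
# Illustrative sketch of the two ranking approaches described above.
# All numbers are made-up example scores (higher = better), not SOCRATES results.

algorithms = {
    #        GoS/QoS   complexity   OPEX   CAPEX  (lower cost/complexity -> higher score)
    "SOA": {"qos": 0.9, "complexity": 0.2, "opex": 0.8, "capex": 0.9},
    "SOB": {"qos": 0.8, "complexity": 0.4, "opex": 0.7, "capex": 0.7},
    "SOC": {"qos": 0.6, "complexity": 0.7, "opex": 0.6, "capex": 0.5},
    "SOD": {"qos": 0.5, "complexity": 0.9, "opex": 0.5, "capex": 0.3},
}

# Approach 1: combine all measures into a single utility via (assumed) weights.
weights = {"qos": 0.4, "complexity": 0.1, "opex": 0.2, "capex": 0.3}

def utility(metrics):
    return sum(weights[k] * metrics[k] for k in weights)

ranking_by_utility = sorted(algorithms, key=lambda a: utility(algorithms[a]), reverse=True)

# Approach 2: pick one target measure (here CAPEX) and constrain the others.
constraints = {"qos": 0.55, "complexity": 0.3, "opex": 0.5}   # minimum acceptable scores

def feasible(metrics):
    return all(metrics[k] >= v for k, v in constraints.items())

ranking_by_target = sorted(
    (a for a in algorithms if feasible(algorithms[a])),
    key=lambda a: algorithms[a]["capex"],
    reverse=True,
)

print("utility ranking:    ", ranking_by_utility)
print("constrained ranking:", ranking_by_target)
```

In practice the weights and constraints would encode the operator policy, and the metric values would come from the simulation studies discussed above.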

Whereas the above discussion outlines an approach to compare different self-optimisation algorithms, a more difficult challenge is to compare a self-optimisation algorithm with a case of manual optimisation. For such a comparison one needs to specify the manual optimisation method, in order to derive estimates for the different performance, capacity, etc., measures.

In an extreme case, we could assume that a ‘manual operator’ freezes its radio parameters once and for all. In that case it would make sense to do an off-line optimisation of the parameter set such that the overall performance is optimised, considering the given scenario with varying traffic, mobility, and/or propagation characteristics.

In practice, however, a network operator will monitor such characteristics as well as the achieved performance level, and occasionally reset the radio parameters. Depending on the operator’s policy this may happen more or less frequently: a quality-oriented operator is likely to do more frequent adjustments than a cost-oriented operator. In order to model this in a reasonable way, we propose to define ‘manual optimisation algorithms’ MOA through MOD (continuing the above example) such that they manually adjust radio parameters at the same time and to the same values¹ as the corresponding self-optimisation algorithms with the same label. An example comparison of SOA with MOD is visualised in Figure 7, concentrating on (for example) CAPEX and OPEX-related measures. In this figure, the residual OPEX and required CAPEX for solution SOA, as well as the required CAPEX associated with approach MOD, are repeated from Figure 6, while the example in the figure further introduces the required OPEX² associated with manual optimisation approach MOD.

¹ The assumption that the manual optimisation algorithm makes the same adjustments at the same time is easily relaxed in an actual quantitative study, e.g., by introducing randomised discrepancies in the (timing of) parameter adjustments.

² Such an OPEX value is effectively a non-trivial conversion of the estimated effort involved in e.g. radio parameter adjustments into a monetary value. In this light, we note that it may be wise to distinguish between different types of parameter adjustments that involve different degrees of manual effort.



Figure 7: Comparison of SOA with regard to benchmark MOD

Figure 7 indicates the OPEX and CAPEX gains that can be achieved if an operator that traditionally optimised its network according to approach MOD introduces self-optimisation solution SOA. Assuming that the CAPEX gains include any potential ‘monetarised’ performance gains, as discussed above, the example should be interpreted such that the considered network operator, by switching from manual to self-optimisation, can achieve the indicated OPEX and CAPEX gains while delivering the same network performance.

Following this approach for different combinations of SOX and MOY we could generate tables such as Table 1, where the ‘+’, ‘-’ and ‘0’ entries are merely qualitative indicators; actual numerical values should be determined via the simulation studies. Observe that introducing self-optimisation in the network of a quality-oriented operator is likely to establish the highest OPEX gains, but the lowest CAPEX gains.

Table 1: Qualitative indicators for CAPEX and OPEX gains when introducing self-optimisation

CAPEX gains:                        SOA    SOB    SOC    SOD
  MOA (quality-oriented operator)   0      -      --     ---
  MOB                               +      0      -      --
  MOC                               ++     +      0      -
  MOD (cost-oriented operator)      +++    ++     +      0

OPEX gains:
  MOA (quality-oriented operator)   ++++
  MOB                               +++
  MOC                               ++
  MOD (cost-oriented operator)      +
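As a purely illustrative aid, the sketch below shows how such qualitative indicators could be derived once absolute CAPEX estimates per self-optimisation solution and per manual benchmark are available from the simulation studies; all CAPEX figures in the sketch are invented.

```python
# Toy sketch of deriving qualitative gain indicators (as in Table 1) from assumed
# absolute CAPEX figures. The figures below are invented and only illustrative.

capex = {"SOA": 80, "SOB": 90, "SOC": 100, "SOD": 110,     # self-optimisation solutions
         "MOA": 80, "MOB": 90, "MOC": 100, "MOD": 110}     # matching manual benchmarks

def indicator(gain_steps):
    """Map a (signed) gain, in steps of 10 CAPEX units, to a '+'/'-'/'0' indicator."""
    if gain_steps == 0:
        return "0"
    return ("+" if gain_steps > 0 else "-") * min(abs(gain_steps), 3)

print("CAPEX gains (rows: manual benchmark, columns: SON solution)")
for mo in ["MOA", "MOB", "MOC", "MOD"]:
    row = [indicator((capex[mo] - capex[so]) // 10) for so in ["SOA", "SOB", "SOC", "SOD"]]
    print(mo, row)
```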

2.4 SON development guidelines

While the issues addressed in the previous two sections are important prerequisites, the key challenges lie in the actual development of algorithms and methods for self-organisation. To this end, in the SOCRATES project we have formulated and followed concrete development guidelines consisting of roughly five consecutive steps.

1. The first step is to compile an overview of all radio parameters, either directly affecting radio network performance or characterising radio resource management algorithms, and of all foreseen use cases (self-organisation solutions) that are (mostly) centred around these radio parameters. Here, we refer to SOCRATES deliverable D2.1 [11].

2. Next, a thorough qualitative understanding of these radio parameters, the RRM mechanisms and the envisioned SON solutions should be generated, in order to establish so-called functional parameter groups, whose elements (radio parameters) are strongly related in the sense that their optimised tuning contributes to the same objective (e.g., coverage, fairness, energy efficiency, …). Additionally, the typical or anticipated operational timescales of the different SON solutions need to be compared, such that, along with the functional parameter group categorisations, an educated estimate can be made regarding which SON solutions need to be developed in an integrated setting,
and which (groups of) SON solutions operate rather independently and can hence be developed separately. See the SOCRATES deliverables D2.4 [21], D2.5 [51] and D2.6 [28].

3. Regardless of the outcome of this exercise, we recommend starting with the development of stand-alone, use case-specific solutions and ignoring the inter-use case relations until these stand-alone developments have been completed. In the SOCRATES development framework, this process comprises several substeps. See D2.4 [21].

The first substep is to make a reasonable choice for an operator policy and derive from this a suitable use case-specific objective function. This could, e.g., take the form of maximising cell capacity while providing some minimum degree of fairness. Associated with this objective function, a set of relevant performance metrics should be defined.

Next, based on a thorough qualitative understanding of the use case, formulate a range of potentially relevant scenarios in which the targeted solution should perform well. For instance, if a given use case targets the optimisation of admission control parameters, it makes sense to consider scenarios with gradually or abruptly varying traffic loads, as well as varying degrees of user mobility.

Then, an evaluation tool needs to be built to carry out controllability and algorithm assessment studies. It is important that the type of evaluation tool, which will typically be a dynamic or Monte Carlo-based simulator, as well as the incorporated system, traffic, propagation and mobility models, are well suited to the considered use case.

Next, an extensive controllability and observability study needs to be carried out. Its aim is to determine which radio parameters are (most) effective in different scenarios, in the sense of their contribution to optimising the objective function, and which scenario aspects are most important to respond to with suitable parameter changes, in the sense that changes in those aspects (e.g., a change in user mobility) require parameter adjustments in order to avoid unnecessary performance degradations. In the observability study, it is examined to what extent and how scenario aspects (e.g., (changes in) traffic characteristics, service mix, degree of mobility) can be monitored, and hence responded to by suitable SON functions, as well as to what extent and how the desired performance aspects can be automatically monitored in a live network. As an example, some use cases will require feedback on coverage, which is known to be hard to monitor. All evaluation studies are typically conducted with simulations. In the simulator development it is important to model all relevant scenario aspects at the appropriate time scale, i.e., the time scale corresponding to the occurrence of the events that trigger self-organisation actions, e.g., changes in the degree of mobility or the service mix.

Based on input from the earlier substeps, develop one or several implementable algorithms for the considered use case. Assess the performance of each in the pre-specified scenarios and in terms of the metrics that matter for the objective function. In order to determine the effectiveness of a developed solution, compare its performance with a reasonable reference case, e.g., one without self-optimisation but with a reasonable (fixed) radio parameter setting. Take care to test the developed algorithm in different scenarios in order to determine its robustness. This key step may typically require several iterations of algorithm development, performance assessment, algorithm adjustment, etc.

Compare the different developed algorithms (if several have been developed) for a given SON function in terms of the attained performance, robustness and implementation complexity, where the latter refers to aspects such as the required changes to standards, the required signalling load and the computational complexity. Select the ‘best’ solution.

4. Having ‘finalised’ the stand-alone development of SON solutions, and qualitatively understanding the strength of the inter-use case relations (typically due to an overlap in either applied control, parameters or targeted performance aspects), the next step is to consider the most closely related use cases in an integrated setting. This approach largely follows the same substeps as outlined above for the stand-alone development/assessment phase. Regarding the scenarios, however, it is stressed that particular attention is to be paid to those scenarios where potential conflicts may be expected to arise between the different SON solutions. In the performance assessment, the scenarios should be considered in the absence of SON solutions, applying only individual stand-alone SON solutions, applying all (or pairs of) stand-alone SON solutions in parallel but without any form of coordination, and, lastly, applying all SON solutions in parallel with an appropriate form of coordination. The need for such coordination should arise from the assessments and, if this need indeed exists, different means of coordination should be developed and compared to determine the most suitable approach.


The selected approach forms input for the SON coordinator (see Chapter 4), which is defined to comprise additional roles beyond the mere alignment of SON-requested radio parameter adjustments.

5. Eventually, once the most suitable SON-specific solutions are developed, which may be tightly integrated, loosely coordinated or uncoordinated versions of stand-alone solutions, the impact on protocols, measurements, signalling, interfaces, etc., is to be derived in order to influence the standardisation development such that the developed solutions can indeed be implemented in actual products.


3 Individual SON Functions

A key contribution of the SOCRATES project has been the development of algorithms and methods for self-organisation. In this chapter, the focus is on a selection of use cases with stand-alone SON functions.

At the beginning of the project, a total of twenty-four SON use cases were defined and classified into three categories of self-organisation, i.e., self-configuration, self-optimisation and self-healing. The use cases were selected from NGMN recommendations, 3GPP documents and SON-related activities at the partners of the consortium. These use cases have been described extensively in [11] and the requirements put on solutions for self-organisation for these use cases have been analysed in detail in [20]. In order to structure the further work, nine use cases were selected, based on criteria such as the expected gains and feasibility of the use cases and their general importance for the partners of SOCRATES [21]. These nine use cases, listed in Table 2, have been the subject of extensive research and development activities.

Table 2: SON use cases selected by the SOCRATES project

Self-optimisation: Admission control parameter optimisation; Packet scheduling parameter optimisation; Handover parameter optimisation; Load balancing; Interference coordination; Self-optimisation of home eNodeB

Self-healing: Cell outage management; X-map estimation

Self-configuration: Automatic generation of initial parameters for eNodeB insertion

Note that within the SOCRATES project, the use case X-map estimation has been classified in the self-healing category. Strictly speaking, X-map estimation is however not a typical SON use case, but rather a support function for different SON use cases.

In Section 3.1, the situation in today's networks is first described for each of the selected use cases, as this situation defines the starting point for the development of SON solutions for the specific use cases. Then, in Section 3.2, a summary of the results obtained for each of the use cases is given. A more detailed description of the studies performed for each of the considered use cases can be found in Chapter 8. Concluding remarks are made in Section 3.3.

3.1 Use Cases – Situation in Today’s Networks

In this section, the situation in today's networks is described for the nine selected stand-alone use cases. This situation defined the starting point for the development of SON solutions for these use cases.

3.1.1 Admission Control Parameter Optimisation

The admission control algorithm decides whether a call request will be admitted to the network, based on the availability of the resources needed to guarantee the required quality of service (QoS) of the new call while maintaining the QoS of the already accepted calls. Admission control algorithms are not standardised, so at network roll-out the operator may choose from a number of algorithms offered by the vendor.

Several admission control algorithms have been defined in the literature. A review of some of the admission control approaches used is given in [8]. The diverse QoS requirements for delay-sensitive and delay-tolerant applications, the different traffic pattern of calls originating from different applications, the difference in Grade-of-Service (GoS) requirements for fresh and handover calls, etc. pose challenges in designing efficient admission control algorithms for future wireless networks. As a result, admission control algorithms can be complex, primarily due to a multitude of tuneable parameters. So the current approach is to configure the parameters at network roll-out, and only change them manually from time to time if some conditions change drastically (e.g., introduction of a new service). These manual changes will be made on a long-term basis (weeks, months), while changes in the traffic or mobility conditions of a cell can happen on a much shorter time-scale (minutes, hours).


3.1.2 Packet Scheduling Parameter Optimisation

The packet scheduling algorithm coordinates the sharing of the radio resources in both the time and frequency domains. The general scheduling objective is to support the quality requirements of the different services as resource efficiently as possible, thereby considering, e.g., delay constraints for real-time services and fairness objectives, while exploiting multi-user and frequency diversity.

Packet scheduling algorithms are not standardised. At network roll-out the operator chooses a scheduling algorithm, and the values for its parameters are selected and manually configured based on off-line simulation studies. The choices are periodically evaluated to verify whether they still match the actual traffic characteristics, the service mix and the operator policies. User quality feedback from the network (measurements) is also used as input for the periodic evaluation. In a future situation in which QoS differentiation is implemented in the network, the operator would, besides the normal scheduling parameters, also have to make choices for the QoS differentiation parameters. This choice would probably also be based on off-line studies. The resulting parameters would be manually configured in the network, and the operator would revise from time to time the match of the chosen parameters with the typical system conditions (traffic characteristics, service mix, subscription mix, and operator policies).

3.1.3 Handover Parameter Optimisation

Handover parameter optimisation aims at minimising the occurrence of undesirable effects related to the handover procedure, such as call drops and ping-pong effects between two cells. This is done by adjusting parameters steering the handover, such as neighbour-specific threshold and hysteresis parameters. The configuration of handover parameters to find an appropriate trade-off between the handover failure rate and the number of ping-pong handovers is one example where operators today spend a lot of time manually tuning parameters. Reconfigurations are usually triggered upon detection of handover problems or upon installation of a new base station in the network.

3.1.4 Load Balancing

Load imbalance is a common problem in communication networks where a large number of independent users are served. Users in highly loaded cells may not achieve the required quality and would benefit from being handed over to less loaded neighbouring cells. The load can be moved by, for example, changing the handover offset or antenna parameters. As load balancing depends on the handover and handover-related settings, the function is strongly related to handover optimisation. Today, load imbalance is addressed manually. For UMTS the load is transferred smoothly by the soft handover functionality (which has to be set up manually), while for GSM the load is controlled manually by adjusting control parameters for several BTSs simultaneously from the BSC. The disadvantage of performing the control from the BSC is that the BSC does not have accurate information regarding BTS characteristics, such as, for example, the link performance.

3.1.5 Interference Coordination

Interference coordination aims at minimising the interference experienced by base stations and mobile stations in the network. The techniques differ between radio access technologies. For GSM, interference coordination is performed in the planning process, by avoiding the use of the same frequency in neighbouring cells. A planning tool is often used, but manual work is also needed, such as providing input to the planning tool and possibly verifying the results by performing drive tests. In addition, avoiding the use of the same frequency in neighbouring cells implies a low frequency reuse and hence a relatively low capacity. For UMTS, interference coordination is performed on a short time scale by the power control functionality, which assigns each user just enough power to satisfy that user's service request. Further, the antenna tilt can be adjusted on a long time scale in case high interference between two cells is detected. This is a manual task and might even require a site visit unless electronic tilt is available. As can be seen, interference coordination in today's networks operates on a long time scale and requires time-consuming manual work.

3.1.6 Self-Optimisation of Home eNodeB

In E-UTRAN an extensive use of home eNodeBs is foreseen. Home eNodeBs will be used to improve or create coverage and/or capacity in limited areas. The deployment of home eNodeBs offers new challenges. Customers installing home eNodeBs may not have the knowledge to perform configuration and management themselves, and manual configuration and management performed by the operator will not be possible due to the expected large number of home eNodeBs. A wide deployment of home base stations without introducing automatic configuration and optimisation may result in high interference in the network, especially in the case of closed access home eNodeBs, i.e., home eNodeBs open only for certain users. The small coverage area of a home eNodeB also requires adjustments of handover parameters in order to provide seamless handover.


Some operators, like for example Vodafone UK, are already today deploying UMTS home base stations that can be installed by the customer thanks to SON functionality. In some cases the home base station is deployed on a separate carrier. It is unknown how well these home base stations would perform if deployed on the same carrier as the macro network or if the home base station density would become high.

3.1.7 Cell Outage Management

Cell outages are today detected by the operations & management (O&M) system through performance counters and/or alarms. In some cases there are so-called “sleeping cells”, i.e., cells that do not raise an alarm but for various reasons perform poorly (e.g., they do not carry any traffic). These sleeping cells may not be detected for hours or even days, since such detection nowadays relies heavily on long-term performance analyses and/or subscriber complaints. Further, detection and the corresponding root cause analysis are in some cases done manually, which is not only time-consuming but also costly. It is not fully clear to what degree every operator actively compensates for the cell(s) in outage. If such compensation is carried out, it is most likely a manual process and few or no automated means are used. To verify the success of the compensation, manual drive testing is necessary.

3.1.8 X-Map Estimation

Today operators typically resort to planning tools to dimension and plan their networks. Current radio network planning tools are based on digital maps with topographic and clutter information as well as on tuned path loss prediction models. The approach based on these planning tools and predictions is, however, not very accurate. Reasons for the inaccuracies are imperfections in the used geographic data, simplifications or approximations in the applied propagation model as well as changes in the environment. Furthermore, changes in traffic distribution and user profiles are a source for inaccurate prediction results. These shortcomings force operators to continuously optimise their networks using measurements and statistics, and to perform drive/walk tests. Drive/walk tests provide a picture of end user perception in the field and enable the operator to identify locations causing poor performance and their corresponding cause. Drive/walk tests are, however, not ideal since only a limited part of the network can be analysed due to access restrictions and the cost and time involved. Furthermore, only a snapshot in time of the conditions in the network is captured.

3.1.9 Automatic Generation of Initial Parameters for eNodeB Insertion

The automatic generation of initial parameters for eNodeB insertion (AGP) is a use case in the domain of self-configuration of the network. The goal of AGP is to provide situation- and location-specific parameter sets for network elements at the time of their insertion into the network. This is to ease and accelerate network expansion, to improve the quality of the network, and to reduce the need for subsequent (manual) network optimisation. Within the SOCRATES project, the focus of AGP is on initial radio configuration parameters for eNodeBs such as cell parameters, transmission power, neighbour relationships, X2 interface configuration, and antenna parameters. In present 2G and 3G networks, basically no self-configuration of radio parameters is available. The corresponding base station parameters mentioned above are determined in an elaborate planning process. Depending on the planning processes, the initial parameters for the antenna configuration may be determined as early as at the time of first introducing a new site into the planning environment. This may be several months prior to the actual deployment. Moreover, the initial parameters are often taken from a template, and the actual adaptation of the new base station to its environment is carried out as part of (manual) network optimisation.

In present systems the process of the initial re-configuration/optimisation of a newly integrated base station may require several weeks, starting with the collection of performance information, through a (largely manual) analysis of the shortcomings and the development of an optimised target configuration, to the final deployment of the new configuration and the subsequent quality assurance steps. Furthermore, it is common practice that only the parameters of the new base station are updated after its deployment, while the surrounding base stations are only adapted with respect to the neighbourhood configuration.

3.2 SON Solutions Developed in SOCRATES

In this section, the main contributions from the nine use case studies are summarised. New self-organisation algorithms have been defined and developed for several of these use cases. For other use cases thorough investigations have been performed, forming key steps towards the development of actual self-organisation algorithms or illustrating that developing a self-organisation algorithm for these use cases is not justified. The focus in this section is on stand-alone functionalities, meaning that dedicated studies were performed on the respective use cases. The simultaneous operation of the developed SON solutions is considered in Chapter 4.


Needless to say, the conclusions drawn for the use cases are always based on the scenarios, conditions, environments and other assumptions under which the algorithms were tested, and on the choice made for the (static) reference algorithm in cases where this algorithm is not standardised but vendor-specific. Although it is likely that most of the obtained conclusions are valid more generally than under the circumstances in which they were tested, some reservations have to be made, and additional validation is recommended before applying the developed SON solutions in operational networks.

A more elaborate overview of the use cases, discussing for example also the relevant KPIs and control parameters and showing numerical results, can be found in Chapter 8.

3.2.1 Admission Control Parameter Optimisation

In the admission control parameter optimisation use case, a ‘traditional’ (static) reference algorithm which prioritises the admission of handover calls over fresh calls using a parameter ThHO has been defined. In a sensitivity analysis, the performance of the algorithm for various settings of the ThHO parameter has been examined for a variety of call arrival rates and fractions of handover calls. The results of this analysis show that changes in the measured performance might require opposite adaptations of ThHO, depending on which target performance measure is considered (see Figure 8). To handle this trade-off, an operator policy has been defined that prioritises the performance measures, i.e., ordered from highest to lowest priority: (i) good QoS for the already accepted calls; (ii) low rejection ratio of the handover calls; (iii) low rejection ratio of the fresh calls.

Relying on the outcomes of the sensitivity analysis and taking the defined operator policy into account, a SON algorithm has been developed with the objective of adapting the resources allocated to handover calls on a cell-by-cell basis, based on measured performance. The SON algorithm auto-tunes the ThHO parameter, where every minute it is reconsidered whether an update to ThHO needs to be made. The algorithm has been evaluated in simulation scenarios where at a certain moment in time a change in the call arrival rate and/or the fraction of handover calls is introduced. Comparison of the results obtained with the self-optimisation algorithm and the results obtained with the static algorithm for various fixed settings of ThHO shows that the proposed admission control optimisation algorithm complies better with the defined operator policy, both before and after the change in the incoming traffic.
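The flavour of such an auto-tuning loop is sketched below. The interpretation of ThHO as a reserved capacity share, the KPI thresholds and the step size are simplifying assumptions made for this sketch only; the actual algorithm and its handling of the operator policy are specified in Section 8.1.

```python
# Illustrative auto-tuning of ThHO, here interpreted as the share of cell capacity
# reserved for handover calls. Priorities follow the operator policy quoted above;
# thresholds, step size and KPI values are assumptions for the sketch.

def update_thho(thho, ho_reject_ratio, fresh_reject_ratio,
                ho_target=0.01, fresh_target=0.05, step=0.02,
                thho_min=0.0, thho_max=0.3):
    if ho_reject_ratio > ho_target:
        thho += step          # too many handover rejections: reserve more capacity
    elif fresh_reject_ratio > fresh_target:
        thho -= step          # handovers fine, fresh calls suffer: release reservation
    return min(max(thho, thho_min), thho_max)

thho = 0.10
measured = [(0.03, 0.02), (0.02, 0.02), (0.00, 0.09), (0.00, 0.08)]  # per-minute KPIs
for minute, (ho_rej, fresh_rej) in enumerate(measured):
    thho = update_thho(thho, ho_rej, fresh_rej)
    print(f"minute {minute}: ThHO = {thho:.2f}")
```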

Details and results for the admission control use case can be found in Section 8.1.

Figure 8: Schematic view of cell capacity sharing by fresh and handover calls. The reference algorithm prioritises the admission of handover calls over fresh calls using a parameter ThHO. Depending on which performance measure is considered, opposite adaptations of ThHO might be required.

3.2.2 Packet Scheduling Parameter Optimisation

The packet scheduling use case concentrated on the scheduling of data in the downlink direction, i.e., from base station to the user terminals. The key reason for this focus is the typical up-/downlink traffic asymmetry, which makes the downlink direction the typical performance and capacity bottleneck and therefore the most relevant transmission direction for the implementation of self-optimising solutions.

A ‘reference’ (downlink) packet scheduling algorithm has been developed which contains elements of proportional fairness and packet urgency, and which supports mixes of real-time and non-real-time traffic. A thorough sensitivity analysis has been performed to investigate how packet scheduling performance depends on various user, traffic and environment characteristics. The aim of this analysis is to answer the particularly relevant question of how the ‘optimal’ setting of the scheduling parameters has to be adapted when one or more of these system or traffic conditions change over time.
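The flavour of a scheduling metric combining proportional fairness with packet urgency can be sketched as follows; the exact combination, the delay budget and the numbers used are illustrative assumptions and not the metric of the SOCRATES reference scheduler described in Section 8.2.

```python
# Sketch of a downlink scheduling metric combining proportional fairness with
# packet urgency. Numbers and the exact combination are illustrative assumptions.

def scheduling_metric(inst_rate, avg_rate, head_of_line_delay=None, delay_budget=0.1):
    pf = inst_rate / max(avg_rate, 1e-6)           # proportional fair term
    if head_of_line_delay is None:
        return pf                                   # non-real-time bearer
    urgency = head_of_line_delay / delay_budget     # grows as the delay budget is consumed
    return pf * (1.0 + urgency)                     # real-time bearer gets an urgency boost

users = {
    "web user":   scheduling_metric(inst_rate=8.0, avg_rate=2.0),
    "voip user":  scheduling_metric(inst_rate=1.0, avg_rate=0.5, head_of_line_delay=0.08),
    "video user": scheduling_metric(inst_rate=4.0, avg_rate=4.0, head_of_line_delay=0.02),
}
# Allocate the next resource block to the user with the highest metric.
print(max(users, key=users.get))
```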


The sensitivity analysis shows that the optimal parameter settings of the scheduler are not very sensitive to changes in the data traffic characteristics, the multipath fading environment, the differences in average signal strength among calls and the service mix. Rather, it is observed that a single, robust setting of the scheduling parameters exists, which provides near optimal trade-offs under almost all practically relevant conditions. It is concluded that the estimated potential gain of applying self-optimisation to the packet scheduling parameters is not very large. This is largely due to the inherently adaptive nature of the considered reference scheduler, in the sense that it automatically responds to variations in, e.g., channel quality or traffic load, while also, e.g., the resource split between different services is automatically adapted to the current mix of, e.g., real-time and non-real-time calls.

As a consequence of this obtained insight, no development of an algorithm for self-optimisation of the packet scheduling parameters has been pursued in SOCRATES.

Details on the packet scheduling use case and results of the performed sensitivity analysis can be found in Section 8.2.

3.2.3 Handover Parameter Optimisation

In today's networks, handover parameters are rarely optimised; existing optimisation strategies are based on error reports. In practice each cell experiences different handover statistics, depending on the environment and the user distribution in the handover area (see Figure 9). A handover parameter optimisation SON algorithm has the objective of continuously adapting the handover parameters on a cell basis, based on observations of the handover events. The algorithms adapt over a period of minutes. The outcome of the SON algorithms has been demonstrated to deliver higher system performance.

The handover parameter optimisation use case considers the optimisation of the handover parameters hysteresis and time-to-trigger (TTT). Three different optimisation algorithms have been developed in this use case: (i) the trend-based handover optimisation algorithm, (ii) the simplified trend-based handover optimisation algorithm, and (iii) the HPI (handover performance indicator) sum based handover optimisation algorithm. All three algorithms aim at finding a good handover operating point (combination of hysteresis and TTT values) for the current network conditions. They all analyse the current handover performance in the network based on measurements of three HPIs: the call drop ratio (ratio of existing calls that are dropped before they are finished), the ping-pong handover ratio (ratio of handed over calls that are handed back in less than a critical time) and the handover failure ratio (ratio of failed handovers). Figure 9 gives an overview of the handover parameters and shows the handover area that is influenced by the parameters. User 1 (UE1) moves from the serving eNodeB (SeNB) to the target eNodeB (TeNB) as indicated by the yellow arrow. The success of the handover attempt depends on the radio link quality in the handover area.

Figure 9: A handover event and the handover parameters

The trend-based handover optimisation algorithms assume that a handover operating point region of good (low) HPI performance exists, where the HPIs can be levelled out according to preset weighting
parameters. Simulations show that the algorithm works well in scenarios where such a region exists. If such a region does not exist, however, the optimisation may fail, resulting in extreme handover parameter settings.

The HPI-sum based handover optimisation algorithm (HSBHOA) bases its optimisation actions on a combined HPI statistic. This allows the optimisation algorithm to optimise the handover performance even in the case that high HPI values are observed in all handover operating points. The algorithm has been tested in several simulation scenarios and simulation environments. The simulations included hexagonal, heterogeneous and realistic network layouts as well as equal user distribution, user concentration and speed changing simulation scenarios. It has been shown that the algorithm improves the handover performance significantly in all these simulations. A drawback of the algorithm is that the handover operating point is changed continuously during the optimisation, even if the best-performing operating point has already been selected. In this case the selected handover operating point would ping-pong between the best and neighbouring handover operating points. However, changing the handover operating point is necessary anyway if the HPIs for another handover operating point are to be investigated.
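The selection step of an HPI-sum based optimisation can be sketched as follows; the HPI weights, the candidate operating points and the measured values are illustrative assumptions, not SOCRATES results.

```python
# Sketch of an HPI-sum based choice of handover operating point (hysteresis, TTT).
# The weights and the measured HPI values per operating point are made-up examples.

weights = {"drop": 2.0, "pingpong": 1.0, "failure": 1.5}   # assumed operator weighting

# Measured HPIs per (hysteresis [dB], TTT [ms]) operating point -- illustrative only.
hpi = {
    (2.0, 160): {"drop": 0.010, "pingpong": 0.060, "failure": 0.020},
    (4.0, 160): {"drop": 0.008, "pingpong": 0.030, "failure": 0.025},
    (4.0, 320): {"drop": 0.012, "pingpong": 0.015, "failure": 0.030},
    (6.0, 480): {"drop": 0.025, "pingpong": 0.005, "failure": 0.045},
}

def hpi_sum(metrics):
    return sum(weights[k] * metrics[k] for k in weights)

best = min(hpi, key=lambda op: hpi_sum(hpi[op]))
print(f"selected operating point: hysteresis {best[0]} dB, TTT {best[1]} ms")
```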

Details and results for the handover parameter optimisation use case can be found in Section 8.3.

3.2.4 Load Balancing

In the load balancing use case, an algorithm for load balancing based on handover offset modification has been proposed and evaluated. The load balancing algorithm aims to spread the traffic across neighbouring cells such that constraints in the system arising from baseband processing or limited transmission resources (i.e., resource blocks) are reduced. This balancing is performed whilst there is a sufficient RF margin (power and/or interference rise) and transmission resources are available. To achieve load balancing, coverage modifications may be needed. Additional artificial cell overlaying may be achieved by increasing the handover offset between cells. A larger handover offset virtually shrinks the overloaded cell and extends the coverage area of the less loaded neighbour cell.

Based on knowledge of the load situation at the target eNodeB (TeNB), the serving eNodeB (SeNB) needs to estimate the appropriate number of users that can be handed over to the target eNodeB (TeNB). Since users will generate a different load in the TeNB than they did in the SeNB before the handover (see Figure 10), a method based on SINR estimation has been proposed to predict the uplink and downlink load at the TeNB. This proposed method only needs measurements available at the UE or at the eNodeB.
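The following sketch illustrates the kind of load prediction involved: the resources a candidate user would consume in the target cell are estimated from the SINR it would experience there. The Shannon-type rate mapping and all numbers are simplifying assumptions; the actual method, based only on measurements available at the UE or at the eNodeB, is described in Section 8.4.

```python
# Sketch: predict the downlink load a user would create in the target cell from the
# SINR it would experience there. The Shannon-style rate mapping, the 180 kHz resource
# block bandwidth and all numbers are simplifying assumptions for illustration.
import math

RB_BANDWIDTH_HZ = 180e3   # LTE resource block bandwidth

def rbs_needed(required_bitrate_bps, sinr_linear):
    """Resource blocks needed to sustain a bit rate at a given (linear) SINR."""
    rate_per_rb = RB_BANDWIDTH_HZ * math.log2(1.0 + sinr_linear)
    return required_bitrate_bps / rate_per_rb

def predicted_target_load(users, total_rbs=50):
    """Fraction of target-cell resource blocks consumed by the handed-over users."""
    needed = sum(rbs_needed(rate, sinr) for rate, sinr in users)
    return needed / total_rbs

# (required bit rate [bit/s], estimated SINR at the target cell) per candidate user
candidates = [(1e6, 2.0), (0.5e6, 1.2)]
print(f"predicted extra load at TeNB: {predicted_target_load(candidates):.2%}")
```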

Further, the load balancing algorithm needs to decide which neighbour cell to choose as TeNB, and by how much the handover offset should be adjusted to achieve the highest network performance gain. After investigating an approach which focuses on transferring load to a low number of target cells and an approach which redistributes load to a higher number of target cells but with lower handover offsets, the latter approach has been chosen and applied in the developed load balancing algorithm.

Simulations which consider the impact of traffic load, user mobility, service type and network layout have been performed, to assess the performance of the developed algorithm. The results show that in most of the simulated scenarios, the load balancing algorithm improves the network capacity by reducing the number of unsatisfied users in the network.

Details and exemplary results for the load balancing use case can be found in Section 8.4.


Figure 10: Signals received at the eNBs, a) before handover, b) after handover. Before the handover, S1 is the received signal at the SeNB, and S2 contributes to the interference at the TeNB. After the handover, S1 contributes to the interference at the former SeNB, and S2 is the received signal at the new SeNB.

3.2.5 Interference Coordination

The interference coordination use case aims at decreasing the interference in the network and thereby increasing the system performance in terms of higher data throughput and higher network capacity. This is because each user will need less bandwidth when the signal to interference and noise ratio increases.

In this use case, in order to investigate the implications of interference, a number of statically deployed soft frequency reuse schemes have been considered under three scheduling strategies, i.e., giving the boosted sub-band resources to (i) the UEs with the worst SINR, (ii) the UEs with the best SINR, or (iii) a random selection of UEs. Simulation experiments which examined the total cell throughput and the 5th percentile throughput confirmed that a trade-off exists between these two metrics. The results demonstrated that, for a given scheduler bias, static soft reuse gives a better 5th percentile throughput than reuse one, at the cost of a reduced total throughput relative to reuse one. It cannot be said, however, that soft reuse alone offers this trade-off, since it has not been examined whether giving even more resources to the cell-edge UEs under reuse one would lead to the same trade-off.

Only statically applied soft-reuse schemes were examined, as the current results do not indicate that any additional benefit could be obtained from applying SON.

As the results from this use case were limited, no further details are included in this report. However, further details and results for the interference coordination use case can be found in [54].

3.2.6 Self-Optimisation of Home eNodeB

In the self-optimisation of home eNodeB (HeNB) use case, two sub-use cases have been considered: (i) interference and coverage optimisation, and (ii) handover to and from home eNodeBs.

The aim of the HeNB interference and coverage self-optimisation is to provide HeNB coverage while minimising the interference on the macro network. In a controllability study, the effect of varying transmit powers is examined for different macro site-to-site distances and different macro to HeNB distances. It is seen that a major problem with closed access home eNodeBs deployed using the same frequency as the macro eNodeB is the existence of so-called deadzones (see Figure 11), i.e., areas where non-closed subscriber group (non-CSG) users cannot access the macro network due to interference caused by the HeNBs. The HeNB transmit power is identified as a suitable parameter to control the trade-off between the HeNB coverage and the size of these deadzones. Based on the results of the controllability study, a self-optimisation algorithm for the HeNB maximum transmit power has been developed and evaluated. The algorithm estimates a desired HeNB maximum transmit power such that CSG UEs have coverage within the considered house, and such that non-CSG UEs have macro coverage in the house to as large an extent as possible. The estimate is based on achievable SINR, calculated using downlink RSRP and RSSI measurements performed in the HeNB and estimates of the maximum and minimum pathloss from the HeNB within the house. The HeNB maximum transmit power is then changed in steps towards
the desired power, and the desired power is recalculated for every step, so that any changes in traffic and/or the radio environment are taken into account. The assessment of this algorithm shows that with the HeNB deployed relatively close to a macro eNodeB, the highest allowed maximum transmit power must be used in order to provide HeNB coverage in that particular location. In situations where the HeNB is placed further away from the closest macro eNodeB, it is possible to set the HeNB maximum transmit power lower and still provide coverage in the HeNB house. In such a scenario, the size of the deadzone can be decreased, while the HeNB still provides coverage for the CSG users in the house.
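A minimal sketch of the stepwise power adaptation is given below. The way the desired power is derived here (a target SINR at the worst indoor location) and all numerical values are simplifying assumptions; the actual estimation uses downlink RSRP and RSSI measurements and indoor pathloss estimates as described above.

```python
# Sketch of the stepwise HeNB maximum transmit power adaptation. The derivation of
# the desired power and all numbers are simplifying assumptions for illustration.

def desired_power_dbm(interference_dbm, max_indoor_pathloss_db,
                      target_sinr_db=3.0, p_min=-10.0, p_max=20.0):
    # Power needed so that the worst-placed CSG UE still reaches the target SINR.
    p = interference_dbm + max_indoor_pathloss_db + target_sinr_db
    return min(max(p, p_min), p_max)

def step_towards(current_dbm, desired_dbm, step_db=1.0):
    if abs(desired_dbm - current_dbm) <= step_db:
        return desired_dbm
    return current_dbm + step_db if desired_dbm > current_dbm else current_dbm - step_db

power = 20.0                       # start at the highest allowed maximum power
for _ in range(5):                 # the desired power is recomputed at every step
    desired = desired_power_dbm(interference_dbm=-80.0, max_indoor_pathloss_db=70.0)
    power = step_towards(power, desired)
    print(f"HeNB max transmit power: {power:.1f} dBm (desired {desired:.1f} dBm)")
```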

For the HeNB handover self-optimisation, the aim is to provide seamless handover between HeNBs and from a HeNB to a macro eNodeB and vice versa. In a controllability study considering open access HeNBs, the effect of varying the hysteresis and time-to-trigger values is considered. Simulation results show that the impact of these handover parameters depends on UE speed as well as the distance from the macro eNodeB and macro network load. No detailed self-optimisation algorithms were developed for the handover sub-use case, but concepts for an algorithm have been considered.

Details and exemplary results for the home eNodeB use case can be found in Section 8.5.

Figure 11: Illustration of a deadzone

3.2.7 Cell Outage Management

The goal of cell outage management (COM) is to minimise the network performance degradation when a cell is in outage, through quick detection and compensation measures. Cell outage management consists of several functions, namely cell outage detection, compensation, and X-map estimation (see Figure 12). Cell outage detection is used to timely detect outages that are not alarmed. Cell outage compensation is carried out by automatic adjustment of network parameters in surrounding cells in order to meet the operator's performance requirements, based on coverage and other quality indicators, e.g., throughput, to the largest possible extent. X-map estimation finally aims at making manual drive testing dispensable by automatically calculating the changed coverage after compensation. For this purpose simulation results as well as geo-location based measurements from user equipment can be used (see Section 3.2.8).

Figure 12: Cell outage management consists of the functions cell outage detection, compensation and X-map estimation. In this figure, the centre site is in outage; the red area indicates the previous coverage of the outage cell.

The cell outage management use case has focussed on the development of a set of cell outage compensation algorithms. In an extensive controllability study, the compensation potential of several control parameters has first been assessed in different scenarios. It has been shown that COM is able to compensate coverage holes. However, this comes at the cost of a degradation in quality. To trade off
coverage and quality, an operator policy has been developed. It has been observed that the antenna tilt and P0, the uplink target received power level, are the most effective control parameters in case of a primarily coverage-oriented policy. When the policy is predominantly quality-oriented, optimisation of P0 is the most effective.

The insights of the controllability study have formed the basis for the development and assessment of on-line algorithms for cell outage compensation. The general approach followed by the algorithms is that the DL and UL quality is continuously monitored and fed back along with other measurements that enable tuning of P0 and tilt, to a control parameter tuning algorithm. That algorithm computes appropriate P0 and tilt such that DL and UL quality is not smaller than the outage quality targets given by the operator policy.
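The shape of such a tuning loop for one compensating cell is sketched below; the quality model, step sizes, limits and targets are illustrative assumptions rather than the algorithms assessed in Section 8.6.

```python
# Sketch of a cell outage compensation tuning loop for one compensating cell.
# Quality feedback, step sizes, limits and targets are illustrative assumptions only.

def tune(dl_quality, ul_quality, tilt_deg, p0_dbm,
         dl_target=0.8, ul_target=0.8,
         tilt_step=0.5, p0_step=1.0, tilt_min=0.0, p0_max=-90.0):
    """Reduce downtilt to extend DL coverage, raise P0 to improve UL quality, within limits."""
    if dl_quality < dl_target and tilt_deg > tilt_min:
        tilt_deg -= tilt_step            # less downtilt -> larger coverage footprint
    if ul_quality < ul_target and p0_dbm < p0_max:
        p0_dbm += p0_step                # higher uplink target received power
    return tilt_deg, p0_dbm

tilt, p0 = 6.0, -100.0
for dl_q, ul_q in [(0.55, 0.60), (0.65, 0.70), (0.78, 0.82)]:   # monitored quality
    tilt, p0 = tune(dl_q, ul_q, tilt, p0)
    print(f"tilt = {tilt:.1f} deg, P0 = {p0:.0f} dBm")
```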

Performance assessment of the developed algorithms is undertaken by evaluating the algorithms under different conditions, namely (i) varying load and inter-site distance, (ii) compensation groups, and (iii) varying outage quality targets. Simulations show that, depending on the considered scenario in terms of, e.g., traffic load, up to 85% of the users can be recovered at the cost of an acceptably reduced quality in the compensating cells.

Details and exemplary results for the COM use case can be found in Section 8.6.

3.2.8 X-Map Estimation

In X-map estimation, the user equipment (UE) in the network is used to report the observed service quality along with the locations where the measurements are taken. These UE reports can be used by the so-called X-map estimation function to estimate the spatial network performance. The information embedded in an X-map may be used by, e.g., SON functionalities that address the optimisation of coverage, capacity and quality. In the X-map estimation use case, the concept proposal for X-map estimation has been introduced and the influence of the positioning accuracy, of the UE measurement accuracy as well as of the number of measurements taken on the accuracy of the X-map has been analysed.

Two different X-map estimation methods have been considered as shown in Figure 13 and characterised in terms of their accuracy.

Figure 13: Overview of the X-map estimation approach

The UE and RAN measurement data together with the position of the UE are delivered to the X-map estimation function which uses this data directly for updating the corresponding bins in the X-map (approach 1), or for calibrating a propagation model (approach 2). It has been shown that approach 1 is very accurate since the measurement data are used directly for deriving an X-map. However, information about the network performance can only be provided for those parts where measurement data are available, which is an important issue if the number of available measurements is decreased. Furthermore, this approach strongly depends on the positioning accuracy. In contrast, approach 2 provides a complete picture of the network performance for the whole considered area and is relatively insensitive to the applied positioning method, the possible UE inaccuracies and the number of measurements taken. However, approach 2 is less accurate than approach 1 although the accuracy is improved by the calibration compared to the uncalibrated prediction.
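A toy sketch of the two approaches is given below: approach 1 averages UE reports per geographic bin, while approach 2 calibrates a simple log-distance path-loss model from the same reports and can then predict any bin. The model, the bin size and the report data are illustrative assumptions.

```python
# Sketch of the two X-map estimation approaches: (1) averaging UE measurements per
# geographic bin, (2) calibrating a simple log-distance path-loss model from the same
# reports and predicting all bins. Model, bin size and data are illustrative assumptions.
import math
import numpy as np

# UE reports: (x [m], y [m], received power [dBm]); positions and values are made up.
reports = [(50, 40, -70.0), (120, 80, -78.0), (300, 60, -88.0), (500, 200, -95.0)]
site = (0.0, 0.0)

# Approach 1: update only the bins where measurements are available.
bin_size = 100.0
bins = {}
for x, y, p in reports:
    key = (int(x // bin_size), int(y // bin_size))
    bins.setdefault(key, []).append(p)
xmap_bins = {k: sum(v) / len(v) for k, v in bins.items()}

# Approach 2: fit P(d) = A - 10 n log10(d) by least squares, then predict any bin.
dists = np.array([math.hypot(x - site[0], y - site[1]) for x, y, _ in reports])
powers = np.array([p for _, _, p in reports])
X = np.column_stack([np.ones_like(dists), -10.0 * np.log10(dists)])
(A, n), *_ = np.linalg.lstsq(X, powers, rcond=None)

print("approach 1 (bins with data):", xmap_bins)
print(f"approach 2: calibrated model P(d) = {A:.1f} - 10*{n:.2f}*log10(d) dBm")
print(f"approach 2 prediction at 400 m: {A - 10 * n * math.log10(400):.1f} dBm")
```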



Details and results for the X-map estimation use case can be found in Section 8.7.

3.2.9 Automatic Generation of Initial Parameters for eNodeB Insertion

The automatic generation of initial parameters for eNodeB insertion (AGP) use case deals with the process of integrating new network elements (NEs) into an operational network. The novel concept of soft integration of network elements has been proposed, and applied for a smooth integration of an eNodeB into a live network.

The idea behind soft integration is to make intentional changes in the network configuration as little disruptive as possible. A soft integration approach comprising a sequence of steps has been proposed for the self-configuration of a new eNodeB with respect to the radio parameters maximum transmit power and (electrical) antenna tilt. In the course of these steps, the eNodeB is playing an increasingly noticeable role within its vicinity in the network. This should enable surrounding eNodeBs to adapt by means of self-organisation.
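The soft-integration idea can be sketched as a stepwise ramp of the new eNodeB's radio footprint, with the surrounding network given the opportunity to adapt between steps; the power schedule and the callback structure below are illustrative assumptions, not the AGP procedure of Section 8.8.

```python
# Sketch of 'soft integration' of a new eNodeB: its maximum transmit power is ramped
# up over a sequence of steps, giving surrounding cells time to adapt via their own
# SON functions between steps. The schedule and callbacks are illustrative assumptions.
import time

def soft_integrate(apply_power_dbm, let_neighbours_adapt,
                   power_steps_dbm=(0.0, 10.0, 20.0, 30.0, 40.0, 43.0),
                   settle_time_s=0.0):
    for p in power_steps_dbm:
        apply_power_dbm(p)               # the new eNodeB becomes gradually more visible
        let_neighbours_adapt()           # e.g. neighbour handover/tilt self-optimisation runs
        time.sleep(settle_time_s)        # wait until the network has settled (0 s in this demo)

soft_integrate(
    apply_power_dbm=lambda p: print(f"new eNodeB transmit power set to {p:.0f} dBm"),
    let_neighbours_adapt=lambda: print("  neighbouring cells re-optimise"),
)
```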

Based on the SOCRATES scenario for a European city, a demonstrator for the AGP functionality has been devised. It demonstrates the impact of the proposed AGP functionality over the course of three days during which a typical fluctuation of the traffic intensity is simulated.

The proposed approach for AGP noticeably reduces the effort that an operator will need to invest into planning and configuring parameters for new equipment. Moreover, the parameter values at the end of the integration are already optimised to the specific local environment and the immediate neighbours are, in fact, adapted as well. This will result in much better post-deployment performance than known from today’s network operation and contribute to an overall improved quality level.

Details for the AGP use case and a sequence of screenshots from the developed simulator can be found in Section 8.8.

3.3 Conclusions on Individual SON Functions

In this chapter, the main results obtained in the SOCRATES project for nine stand-alone SON use cases have been given. For these use cases, also the situation in today's networks has briefly been summarised.

For the use cases admission control parameter optimisation, handover parameter optimisation, load balancing, interference and coverage optimisation of home eNodeBs, cell outage management and automatic generation of initial parameters for eNodeB insertion, new self-organisation algorithms have been developed. No detailed algorithm was developed for the use case handover to and from home eNodeBs, but concepts for a SON algorithm have been provided. For the use cases packet scheduling parameter optimisation and interference coordination, it has been concluded from our investigations that developing self-organising algorithms is not justified.

For all use cases, simulators have been designed and implemented, which were very valuable for performing observability and controllability studies and for assessing the performance of the developed self-organisation solutions under different conditions and in different scenarios with various user, traffic and environment characteristics.

In all use cases, trade-offs between KPIs relevant for the use case exist. To give just a few examples: the trade-off between the call drop ratio and the ping-pong handover ratio in the handover use case, between the cell edge throughput and the total cell throughput in the interference coordination use case, between the home eNodeB coverage and the size of the created dead-zones in the interference and coverage optimisation of home eNodeBs use case. In all use cases, suitable parameters for controlling these trade-offs have been identified, operator policies that define how these trade-offs could be handled have been considered and observability and controllability studies have been performed. Although also in today's (non-SON) networks the aim is to handle these trade-offs through periodically evaluating and changing parameter settings, these manual changes will typically be made on a long-term basis (weeks, months), making it difficult to account for changes in for example the traffic or mobility conditions, which happen on a much shorter time-scale. The basic idea behind the developed SON algorithms is that one or more control parameters are auto-tuned based on feedback on the system performance obtained via measurements. Because the automatic tuning of the parameters can be done on a much shorter time scale, the system will react to changes sooner, such that the time-varying optimisation potential can be exploited.
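The closed-loop principle common to these algorithms can be sketched generically as follows (hypothetical cost weights and step size; each project use case of course uses its own, more elaborate logic):

def auto_tune(param, measure_kpis, weights, step=0.5, lo=0.0, hi=10.0):
    # Generic SON control loop: nudge one control parameter in the direction
    # that reduces a weighted cost built from measured KPIs, and repeat
    # periodically on a much shorter time scale than manual re-planning.
    def cost(p):
        return sum(w * k for w, k in zip(weights, measure_kpis(p)))
    best = cost(param)
    for candidate in (max(param - step, lo), min(param + step, hi)):
        c = cost(candidate)
        if c < best:
            best, param = c, candidate
    return param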

Assessment of the developed algorithms, considering the relevant KPIs for each use case, shows in most cases a considerable improvement of the KPI(s) that the operator policy considered most important to optimise, at the cost of an acceptable performance decrease of the other KPI(s) involved in the trade-off. Note that due to the trade-offs in the gains, it is not always clear what 'good' performance is, and this may also differ from operator to operator (e.g., a coverage-oriented operator versus a quality-oriented operator, see also Section 2.3). Defining a clear operator policy which is taken into account by the developed SON algorithms is therefore important.

Typically, two important benefits of the introduction of SON are mentioned: performance improvements and OPEX savings. Note that in all studies reported on in this chapter, the focus has mainly been on the performance aspect. The reason for this is that it is easier to compare the performance differences between the cases where a SON solution is enabled or disabled than the OPEX differences, because it is hard to quantify the cost of manual tasks. The impact of the introduction of SON on OPEX is considered in Chapter 5.

4 Simultaneous Operation of SON Functions

So far, the work on SON has focused mostly on the development of stand-alone functionalities. It may be the case that, with the increasing deployment of SON functions, the number of conflicts and dependencies between these independent solutions increases. Such conflicts can occur if, for example, two different individual SON functions (e.g., Cell Outage Management and Interference Coordination) aim at different goals when optimising the same parameter (e.g., antenna tilt) at an NE, or if the modification of a control parameter by one SON function influences the operation of other SON functions. Such conflicts may cause the whole system to operate far from optimal behaviour, with a negative impact on the operator's overall objectives on performance and user satisfaction. Thus, the avoidance and/or resolution of conflicts and dependencies between the functions is beneficial. An example giving an overview of potentially conflicting parameter modifications through SON functions is depicted in Figure 14.

Figure 14: Potential of control parameter conflicts due to modification through SON functions

In this chapter, the SOCRATES activities on simultaneous operation of SON functions are described. In Section 4.1 three exemplary integrated use cases, i.e., SON use cases that have a strong interrelation regarding the involved control parameters, are introduced and summarised. Section 4.2 presents a more general framework approach for SON coordination that addresses the avoidance, detection and resolution of conflicts that can occur between different individual SON functions. This framework is intended to be more general and can also be used for integration cases other than those described in Section 4.1. This section also includes the results of an exemplary case study on the coordination between two individual SON functions. Section 4.3 discusses the approach and impact of the framework as well as the results of the case study, and Section 4.4 provides some concluding remarks on the coordination of SON functions.

4.1 Integrated Use Cases

In this section the integration use cases, i.e., the pairs of SON use cases that have a strong interrelation regarding the adjusted control parameters, are briefly described. A detailed description of these use cases can be found in Chapter 9.

4.1.1 Admission Control and Handover Parameter Optimisation

In the admission control and handover parameter optimisation integration use case, the interaction and the need for integration of the admission control and handover parameter optimisation use cases that were developed in the SOCRATES project have been investigated. As the optimisation algorithms do not control common parameters, interaction is expected when actions taken by one of the algorithms affect one or more of the input measurements of the other algorithm and cause undesired effects such as oscillations of parameters.

In order to investigate the possible interaction between the two optimisation algorithms, scenarios were defined in which interaction is expected to occur. Interaction is expected when calls rejected by the admission control algorithm of a target cell have to find another target cell (see Figure 15), which will most likely occur in case of an overload in a handover target cell. Therefore, the scenarios that were chosen all contain changes in user velocity, load or both that can lead to overload.

Figure 15: The implementation specific part of the handover algorithm showing the relation with admission control: each handover call is subjected to admission control in the target cell

Simulation results show that there is only little negative interaction between the admission control and handover parameter optimisation algorithms. Applying admission control parameter optimisation, however, improves the handover performance considerably. By ensuring that there is sufficient capacity for handover calls, the call drop ratio is reduced substantially because handover calls can more easily find a handover target cell and, thus, handovers can be performed in a more timely manner.

A detailed description of the admission control and handover parameter optimisation integration use case can be found in Section 9.3.

4.1.2 Macro and Home eNodeB Handover Parameter Optimisation

The initial studies of handover parameter optimisation performed in SOCRATES, described in Section 3.2, focused on handover in a macro eNodeB network (handover parameter optimisation use case) and on handover to and from HeNBs (self-optimisation of HeNBs use case) separately. In a network with both macro eNBs and HeNBs, however, deploying isolated and independent handover self-optimisation for macro eNBs and HeNBs separately may lead to conflicts or contradictions in the optimisation actions, or sub-optimal performance due to misalignment of the self-optimisation algorithms. To investigate the benefits of coordinated handover self-optimisation between the macro and HeNB layers, an integrated macro and home eNodeB handover parameter optimisation algorithm was developed and evaluated using simulations. The integrated algorithm is based on a simplified, trend-based macro handover optimisation algorithm that changes the hysteresis based on the HPIs call drop ratio (CDR) and ping-pong handover ratio (PPHR), but the integrated algorithm also takes the target cell type into account, i.e., whether the handover target cell is a HeNB or a macro eNB. Further, in addition to changing the hysteresis, the integrated algorithm changes the cell individual offset (CIO) in order to differentiate the handover settings for HeNB target cells and macro eNB target cells. Simulations show that the integrated algorithm can decrease the CDR for macro-macro handovers at the cost of an increased macro-macro PPHR. The HeNB-HeNB CDR can also be decreased to some extent, but it still remains rather high, which is probably due to the scenario with a high HeNB density. For the given scenario, the results do not show any performance improvement of the integrated handover optimisation algorithm compared to running the simplified, trend-based macro handover optimisation algorithm in all eNodeBs.

Although no gains with coordination are seen in this case, there could still be benefits from coordinating the macro and home eNodeB handover parameter optimisation in other cases. In a possible further study, additional scenarios should be considered, e.g., different macro cell sizes, different HeNB distributions, lower UE speeds, etc., in order to see whether there are conditions under which the integrated self-optimisation algorithm for the handover parameters is beneficial. Adaptation of TTT values, in addition to the tuning of hysteresis and CIO, should also be considered. Particularly for HeNB-HeNB handover, low TTT values may be beneficial. For more information on the macro and home eNodeB handover parameter optimisation, see Section 9.2.

Figure 16: Integrated handover parameter optimisation algorithm

4.1.3 Handover Parameter Optimisation and Load Balancing

In handover parameter optimisation (HPO, see Section 3.2.3), the number of radio link failures, handover failures and ping-pong handovers is reduced by adjusting the individual handover parameters for every cell in the network. The handover optimisation algorithm is executed locally at cell level and only uses local handover performance indicators for the optimisation. The load balancing (LB) algorithm (see Section 3.2.4) is activated if a certain cell is overloaded, i.e., a certain load level is reached in a cell. In order to avoid link level failures and unnecessary blocking, some of the cell edge users are handed over to neighbouring cells before the load level reaches the maximum. Since the load of the neighbouring cells is considered in the optimisation, the algorithm ensures that the best-suited neighbouring cells are selected for load balancing.

The HPO and LB algorithms influence different control parameters in the optimisation. The hysteresis (Hys) and time-to-trigger (TTT) are changed by the HPO algorithm, whereas the LB algorithm adapts the HO offset (HOoff) only. However, the effect of Hys and HOoff changes on the network performance is similar for these two optimisation algorithms, since both influence the HO process and neighbour cell penetration.

The aim of this integration use case is to analyse the interaction of the two optimisation algorithms when running them simultaneously. The influence of the control parameters that have similar effects on the system performance and the possible counteraction of the two optimisation algorithms are the key challenges for a successful implementation in a real network. It must be avoided, for example, that the HPO algorithm decreases the Hys value of a load balancing target cell in order to counteract the increasing radio link failure ratio; in that case the users would be handed back to the overloaded cell after some time and the overall performance of the system would decrease again. Based on a theoretical examination of possible interactions between the self-optimisation algorithms, the performance of a network is examined in system level simulations. Subsequently, different approaches to coordinate the HPO and LB SON algorithms are described and the influence on the overall system performance is investigated.

The detailed description of the HPO and LB integration use case can be found in Section 9.1. The HPO and LB integrated use case has furthermore been used for the case study that illustrates the advantage of implementing a SON Coordinator; see Section 4.2.3 for further details.

Hys

HeNB CIO

Macro CIO

CDR, HeNB target cells

Integrated handover parameter optimisation

CDR, macro target cells

PPHR, HeNB target cells

PPHR, macro target cells

Algorithm Control Parameters KPIs


4.2 SON Coordination

The challenge for a SON Coordinator framework is to ensure that the individual SON functions jointly work towards the same goal, formulated by the operator’s high-level objectives. This can be achieved by effectively and appropriately harmonising policies and control actions of the SON functions. A SON Coordinator should furthermore ensure that the individual SON functions work with a common set of performance targets and measurements, and enable the operator to control the overall SON system through a single interface and thereby minimise the operational effort (see Figure 17). This also includes the detection and correction of unexpected and undesirable network behaviour caused by SON functions.

Figure 17: A SON system, comprising a number of SON functions interfacing with the operator and network subsystem via the SON Coordinator

4.2.1 Basic Definitions

Conflict Types

A distinction is made between two types of conflicts due to the integration of multiple SON functions: control parameter conflicts, and conflicts due to observability dependencies.

A control parameter conflict occurs if at least two SON functions require a change with different values on the same instance of a control parameter (for example, an attribute value of a managed object). A control parameter conflict can be subdivided into directionality conflict and magnitude conflict. A directionality conflict occurs if SON function A wants a certain parameter to increase while SON function B wants a decrease. A magnitude conflict occurs when SON function A wants a large increase for a certain parameter, while SON function B wants to cautiously allow only gradual increments while monitoring the effects.
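For illustration, the two sub-types could be distinguished as follows (the request structure and the magnitude threshold are invented for this sketch):

from dataclasses import dataclass

@dataclass
class ChangeRequest:
    son_function: str   # requesting SON function
    parameter: str      # e.g. antenna tilt of one particular cell
    current: float
    requested: float

def classify_conflict(a, b, magnitude_ratio=3.0):
    # Classify a control parameter conflict between two requests on the same
    # parameter instance: opposite directions vs. very different step sizes.
    if a.parameter != b.parameter:
        return None
    da, db = a.requested - a.current, b.requested - b.current
    if da * db < 0:
        return "directionality conflict"
    if max(abs(da), abs(db)) > magnitude_ratio * max(min(abs(da), abs(db)), 1e-9):
        return "magnitude conflict"
    return "no conflict"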

The second type of interaction is a conflict due to dependencies in the observation of metrics. Such an observability dependency occurs if a metric used as input by SON function A is influenced by one or several other SON functions that aim at optimising other metrics. This influence may cause SON function A to react in a misguided way. A specific subclass of this type of SON function interaction occurs when different SON functions are tightly related in their performance objective and therefore need to be jointly optimised.

It is worth noting that the coupling and the conflicts between SON functions depend on various factors, for example, the choice of control parameters, the time-scale at which individual SON functions operate, the target and objective of the SON functions, input measurements, etc. As such these factors will have an impact on the coupling and dependency between the SON functions. The SON functions can either be independently designed and subsequently coordinated (if there is a need to do so), or they can be designed from the beginning such that the coupling between the SON functions is minimised (e.g., by considering the functionalities that aim at similar targets in the same SON function). As such, a careful design of the SON functions may render no coupling or dependencies at all.

Policy Levels

Network operators will have targets regarding the type of performance to be offered by their network. This can be described through high-level performance objectives. The operator may want to configure each individual SON function to achieve the desired network performance, and indeed, this is a viable option when SON functions are simple and few in number. But it is also possible, and perhaps essential as SON grows, to automate the mapping and distribution of high-level performance objectives to low-level policies specific to each SON function. This reduces the operator's configuration effort, as the input is defined only once instead of for each SON function.

Furthermore, this simplifies the description of the required performance, as a technical low-level policy description is not needed, and the goals of the SON functions can be aligned, reducing the risk of conflicts. Therefore, a distinction is made between policies3 at the operator level, the SON function level, and the SON Coordinator level.

At the operator level, the Operator Policy formulates the high-level performance objectives such as targets for cell edge performance, cell average performance, fairness between users, and minimum user performance. A simplified example could be to maximise capacity while satisfying some minimum constraints regarding coverage and service quality, expressed in terms of, e.g., user throughput.

The Operator Policies are translated into SON Function-specific Policies. These policies govern the decision logic within the SON functions themselves, in the way performance and other observations are processed into control actions, typically in the form of a requested control parameter change. One example is a handover optimisation SON function trying to minimise a weighted average of the radio link failure probability and the ping-pong handover ratio. SON Function-specific Policies may also include instructions on how the SON functions report their functioning towards the operator.

Finally, SON Coordinator-specific Policies affect the behaviour of the SON Coordinator functions, for instance, operator-specified limitations on allowed control parameter ranges, maximum step sizes at which control parameter changes are allowed, or adjustment frequencies, i.e., the periodicity at which control parameter changes are allowed to be performed, or the SON functions are allowed to request changes. Further, there are definitions of what types of undesired behaviour should be detected and reported. The SON Coordinator-specific Policies thereby define how the coordination of SON functions, particularly the conflict resolution, is performed. SON Coordinator-specific Policies also include instructions on how the SON Coordinator functions report their functioning, and the overall performance of the SON system according to the high-level performance objectives, towards the operator.
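Purely as an illustration, the three policy levels could be represented as nested configuration data (all names and values below are invented):

operator_policy = {  # high-level performance objectives, defined once
    "maximise": "capacity",
    "constraints": {"coverage_probability": 0.95, "min_user_throughput_mbps": 1.0},
}

son_function_policies = {  # derived, SON function-specific decision logic
    "handover_optimisation": {
        "cost_weights": {"call_drop_ratio": 2.0, "ho_failure_ratio": 1.0, "pingpong_ratio": 0.5},
    },
}

son_coordinator_policies = {  # constraints on how and when changes may be applied
    "antenna_tilt": {"allowed_range_deg": (0, 12), "max_step_deg": 1.0, "min_interval_s": 900},
    "reporting": {"undesired_behaviour": ["kpi_oscillation", "extreme_kpi_value"]},
}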

Harmonisation Approaches

A key objective of the SON Coordinator is to ensure that the control parameter changes of the SON system are harmonised towards the Operator Policy, which expresses the applicable performance objectives. In achieving this harmonisation goal, a distinction is made between two types of harmonisation: heading harmonisation, which aims at conflict avoidance, and tailing harmonisation, which aims at conflict resolution.

Heading harmonisation can be achieved by appropriate alignment of the SON function-specific policies, i.e., ensuring that the policies assigned to the different SON functions in a cell are not in conflict, such that the SON functions ideally do not produce conflicting parameter changes. This may be achieved by working towards non-conflicting targets and sharing applied performance metrics. The degree of heading harmonisation that can be achieved, i.e., the extent to which conflicts can be foreseen or prevented, depends on the number of implemented SON functions, the risk of conflicts between the SON functions (e.g., whether different SON functions operate on an orthogonal or entwined set of control parameters), and the dynamics of the network system and the SON functions.

Tailing harmonisation aims at resolving conflicts that may occur in case heading harmonisation, and thus conflict avoidance, could not prevent all interdependencies between SON functions. The output parameter change requests of each SON function are checked for conflicts with the objectives and control parameter requests of other SON functions. If required, one or several requests are changed or combined.

Heading and tailing harmonisation complement each other: the more harmonisation is performed through the policies, the less harmonisation is required through conflict resolution.

4.2.2 SON Coordinator Functions

In the following a SON Coordinator (see Figure 18) with different functional roles is introduced, to achieve the harmonisation of SON functions as described above:

the Policy function, which converts the operator’s high-level performance objectives into SON function specific policies and provides the interface between operator and the SON system

the Alignment function, which is responsible for conflict detection and resolution

the Guard function, which detects extreme or undesirable network behaviour and triggers countermeasures

the Autognostics function, which collects and processes performance, fault and configuration data as input to the SON system

3 The term 'policy' is used differently in this document than in other publications in the area of information technology, for example [16].

Figure 18: SON Coordinator functional roles and their interaction with operator, network, and SON functions

Policy Function

Each SON function has a number of policy input requirements, so the first domain of the Policy function is to convert the high-level objectives into the SON function policies. It is necessary for the policy inputs to the SON functions to be harmonised, such that SON functions can contribute in a controlled manner to achieving the operator’s high-level objectives.

The second domain of the Policy Function includes the setting of constraints regarding the operation of the SON system, for example, in terms of allowed parameter ranges and maximum configuration step sizes in case of conflicts to allow the operator to limit the SON system’s freedom. Furthermore, these settings may include instructions for the observation of the SON system to detect undesirable behaviour. Therefore, the Policy function transforms the high-level performance objectives into detailed SON Coordinator function specific policies, e.g., as input to the Alignment function or the Guard function (see below). This domain of the Policy Function also includes the definition and initiation of feedback regarding network and SON system performance towards the operator, by transforming the high-level performance objectives to a set of performance indicators that it requests the different parts of the SON system to deliver periodically and/or event based to the operator through performance reports.

It is obvious that the transformation of high-level performance requirements into SON (Coordinator) function specific policies is not straightforward but requires a detailed understanding of the network and operational experience. It is likely that an operator introduces SON functions and the SON coordinator stepwise, with a decreasing degree of manual approval of SON actions and increasing degree of detail of the policies.

Alignment Function

The Alignment function, as a central part of SON coordination, is responsible for resolving conflicts among SON functions as a complement to the Policy function, for enforcing countermeasures in case of unwanted or extreme network behaviour that has been detected by the Guard function (see below), and for the management of SON functions. These tasks are handled by the Arbitration and Activation sub-functions.

Arbitration is responsible for the detection and resolution of conflicting requests (see Section 4.2.1). Control parameter change requests may be evaluated based on the expected effect. This can be achieved either by information from the SON functions, which give a fuzzy prediction of the expected performance effects along with the change request, or by a corresponding Arbitration internal mechanism that predicts the effects of the requested change on the system performance, by matching the affected control parameters with related metrics. Arbitration holds this prediction against the corresponding policies and takes a decision. Decisions can be, e.g., to prefer one request over the other, unify both requests into one, or to reject both requests. Both the fact that there has been a conflict and the result of Arbitration are communicated to the requesting SON functions through feedback. Optionally, the conflict itself may also be communicated.
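A minimal sketch of such an arbitration step is shown below (the scoring by a gain prediction and a per-function priority is an assumption of this sketch, not a prescribed mechanism):

def arbitrate(requests, predict_gain, policy_priority):
    # requests: conflicting change requests (as in the earlier ChangeRequest sketch)
    # targeting the same control parameter instance.
    # predict_gain: callable estimating the expected performance benefit of a request.
    # policy_priority: per-SON-function weights derived from the coordinator policies.
    scored = sorted(
        requests,
        key=lambda r: policy_priority.get(r.son_function, 1.0) * predict_gain(r),
        reverse=True,
    )
    winner = scored[0]
    feedback = {r.son_function: "accepted" if r is winner else "rejected: conflict"
                for r in requests}
    return winner, feedback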


Activation is responsible for the analysis and execution of (de-) activation requests. For example, in case a self-healing or self-configuration function shall be started (e.g., a cell outage has been detected, or a new NE is to be installed), a request is sent to Activation, which subsequently triggers the corresponding self-healing or self-configuration function to start, and simultaneously stops all SON functions that are known to cause conflicts with this self-healing or self-configuration function. In case the self-healing or self-configuration function to be started has only a local impact, i.e., only a known set of cells or network elements will be affected, Activation may only deactivate the conflicting SON functions within the affected area. As soon as the self-healing or self-configuration has finished (which is indicated through the SON function), Activation restarts the previously stopped SON functions.

Activation is furthermore responsible for the cause analysis of undesirable or extreme behaviour that has been detected by the Guard function. Activation analyses context information provided with the trigger coming from the Guard function, for example, the type of undesirable behaviour, which metrics are affected, and the time when the problem has occurred. Activation matches the affected metrics to related control parameters (a mechanism similar to the one used by Arbitration) and checks whether these parameters have recently been influenced by SON functions. If this is the case, Activation may either trigger Arbitration to modify (e.g., cancel, or set to a pre-defined 'safe' value) any control parameter changes, completely stop (a set of) SON functions, or change the settings of the SON functions that are likely to have caused the undesired behaviour. For example, Activation may reduce the allowed step size of control parameter changes, the frequency of control parameter changes, or the range within which the SON function is allowed to modify control parameters. These countermeasures remain active until the Guard function signals that the undesired behaviour has disappeared or improved. Activation may reverse the countermeasures stepwise to prevent the undesired behaviour from reappearing.

Guard Function

Complex SON functionality may lead to unexpected and undesirable network performance and behaviour. The module that detects this and initiates countermeasures is the Guard function. It is important to understand that the role of the Guard function is to detect extreme, or at least far from optimal, behaviour; Guard function actions shall not be triggered by fluctuations in subscriber behaviour.

The Guard function is largely independent of other SON activities. Its operation is only steered through the policies of the Policy function, and thereby only follows the directives of the operator. The Guard function does not need an understanding of the SON functions, and it does not receive commands from the Alignment function. It has no involvement in assisting SON functions in achieving their objectives.

The Guard function requests the Autognostics function (see below) to deliver the data that it requires to detect undesirable behaviour. Types of undesirable behaviour are oscillations, for example, network KPIs oscillating by more than a certain amplitude over a defined period, or a parameter value oscillating by more than a threshold over a defined period, and behaviour related to absolute performance, for example, unexpected KPI combinations (e.g., high RACH rate and low carried traffic), or network KPIs below or above an extreme threshold (beyond the normally experienced range). In case such behaviour has been detected, the Guard function triggers the Alignment function (Activation) to take countermeasures. This trigger includes context information about the affected metrics (e.g., the type of undesirable behaviour, which metrics are affected) and the time when the problem has occurred.
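A sketch of such a detection step might look as follows (window length, thresholds and trigger format are illustrative only):

def detect_undesired_behaviour(kpi_series, amplitude_threshold, hard_limits, window=24):
    # Returns a list of (type, context) triggers for the Alignment function.
    triggers = []
    recent = kpi_series[-window:]            # e.g. the last 24 reporting periods
    if recent:
        peak_to_peak = max(recent) - min(recent)
        if peak_to_peak > amplitude_threshold:
            triggers.append(("oscillation", {"amplitude": peak_to_peak}))
        lo, hi = hard_limits
        if recent[-1] < lo or recent[-1] > hi:
            triggers.append(("extreme_value", {"value": recent[-1]}))
    return triggers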

The Guard function may also request additional data from Autognostics to verify and observe the status of the undesirable behaviour. Note that the Guard function may not be able to distinguish between SON system-caused and configuration management (manually)-caused undesirable behaviour. It is the task of the Alignment function to conduct the cause analysis.

Autognostics Function

The Autognostics function is the source of configuration, performance and fault data to SON functions and SON coordinator functions (e.g., the Guard function or Autognostics functions on other network layers). It receives data requests from these functions, and then obtains the data from necessary sources, filters, analyses, correlates and processes the data according to the requests, and forwards the results. The types of data to be obtained can be performance, fault and configuration data, sourcing from NEs, UEs, and the OSS. Depending on the source, the data may include raw measurements or counters, KPIs, historic performance data, or configuration change notifications.

The role of Autognostics is thereby to provide information, not to take action. Having a common independent unit for collecting and processing the measurement data at cell level rather than at SON function level has three advantages. First, it is more efficient to perform the calculation only once in the Autognostics function rather than having each SON function repeat such calculations.

Second, this ensures that each SON function bases its decisions on a common data set. And third, it is possible to detect and filter out unreliable data and to determine the statistical accuracy of the estimates.
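A minimal sketch of such a common data unit, caching each computed KPI so that it is calculated only once and shared by all SON functions (interface names are invented):

class Autognostics:
    # Common source of performance, fault and configuration data for all SON
    # (Coordinator) functions; raw_counter_source stands for NE/UE/OSS counters.
    def __init__(self, raw_counter_source):
        self._source = raw_counter_source
        self._cache = {}

    def kpi(self, cell_id, name, period):
        # Each KPI is filtered and computed only once per (cell, KPI, period);
        # every requesting SON function then works on the same value.
        key = (cell_id, name, period)
        if key not in self._cache:
            samples = [s for s in self._source(cell_id, name, period) if s is not None]
            self._cache[key] = sum(samples) / len(samples) if samples else None
        return self._cache[key]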

4.2.3 Case study: Integration of Handover Optimisation and Load Balancing

In this case study the SON Coordinator presented in Section 4.2 is applied to two SON functions, namely HO optimisation and LB. The two algorithms adjust HO-related parameters to maximise the HO performance and to equalise the load distribution among the cells, respectively. The detailed description of the integration use case on HO optimisation and LB can be found in Section 9.1.

Definitions

Let us recall the event initiating a HO and how the two SON functions interact with each other. In LTE the initiation of the HO is based on the fulfilment of event A3 [15] (which is for the sake of presentation simplified as):

Mn + Ocn - Hys > Ms + Ocs    {1}

where Mn and Ms are the measurement results (RSRP or RSRQ) of the neighbouring and serving cells, respectively, Ocn and Ocs are the cell specific offsets of the neighbouring and serving cells, and Hys is the hysteresis parameter. Hys is adjusted by HO optimisation, whereas Ocn and Ocs are adjusted by LB. When {1} is true, event A3 is triggered.
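In code form, the simplified entry condition {1} reads as follows (sketch only; in a real system the offsets are defined per neighbour relation):

def a3_event_triggered(m_n, m_s, o_cn, o_cs, hys):
    # Simplified A3 entry condition: the neighbour measurement plus its cell
    # specific offset must exceed the serving measurement plus its offset by
    # the hysteresis. HO optimisation tunes hys (and TTT), LB tunes o_cn/o_cs,
    # so both SON functions act on the same triggering condition.
    return m_n + o_cn - hys > m_s + o_cs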

The HO optimisation function as described in Section 3.2.3 adjusts time-to-trigger (TTT) and Hys to minimise the weighted sum of call drop ratio (at handover), handover failure ratio, and ping-pong handover ratio. In the following we assume that the weight of call drop ratio is 2, HO failure ratio is 1, and HO ping-pong ratio is 0.5. As such we consider reducing call drops more important than HO failure ratio and ping-pong ratio. The algorithm reacts by continually adjusting HO parameters Hys and TTT as a pair. This is done by searching around the initial values until optimum values are found. The HO optimisation algorithm tunes Hys and TTT in a time span of tens of minutes. For more details see [18]. Note that the TTT is not considered in the following.

The LB function adjusts Ocn and/or Ocs to vary the cell borders and thus the load carried in each cell. This action is triggered by the load difference between neighbouring cells. If a cell is overloaded then the cell specific offset, e.g., Ocn, is increased and Ocs is decreased, thereby moving some users from the serving cell to the neighbouring cell. The values of Ocs and Ocn depend, among other factors, on the load difference between the cells. The LB algorithm tunes the offset in a time span of seconds. For more details see [19].

Assessment of Interaction between LB and HO Optimisation

Considering the above, the HO optimisation function is clearly expected to interact with LB, because their parameters Hys and Ocn both affect the A3 event condition {1}. As will be shown, the two SON functions can operate in parallel; however, the use of LB or HO optimisation will reduce the desired effects of the other function. Using the SON Coordinator increases the overall performance. In the following, four different cases are illustrated: (1) a scenario without SON solutions, called the reference scenario below, (2) a scenario with separately operated LB and HO optimisation functions, (3) a scenario with LB and HO optimisation functions running simultaneously, but without coordination, and (4) a scenario with LB and HO optimisation functions running in parallel under a coordination strategy. The rationale behind the proposed assessment is that comparing the outcome of these different cases enables the quantitative assessment of the gains from self-optimisation in a pure sense, as well as the gains from adequate coordination.

Figure 19 shows the different assessment scenarios described above. Note that the functional layout shown in the figure does not imply the actual location where the HO optimisation and LB algorithms and the SON Coordinator functions are implemented; this can be at the eNodeB itself, but also at the domain management or network management layer. Furthermore, note that for scenario (4) with SON coordination, a simplified version of the SON Coordinator has been used, concentrating on the functionality of the Alignment function. The Guard function has not been implemented at all, as its functionality was not relevant for the assessment. The functionalities of the Autognostics function have been implemented directly in the HO optimisation and LB algorithms to reduce the required implementation effort. The policies required for the functionality of Alignment have been directly implemented into the function.

Figure 19: Scenarios used for comparison of coordination approaches

The algorithms have been studied in a dynamic system-level simulator modelling an LTE downlink system with 10 MHz bandwidth, following the simulation assumptions in 3GPP [17]. A simulation setup with 19 sites in a regular hexagonal grid, 3 sectors per site and 57 cells has been employed. Users are dropped uniformly over the network area to create a background load (less than 100% on average), and additional users are dropped in a concentration area that is moved through the network to create a local overload. All users follow a constant velocity and random waypoint model.

Results of Case Study

The outcome of the above mentioned scenarios is given in Figure 20, where a summary of the different high-level performance indicators (HPIs) is shown. Note that the reference case represents a non-optimised network where default parameters are used throughout all cells. As can be seen, the HO optimisation algorithm improves the HPIs compared to the reference scenario, since the call drop ratio (which we prioritise according to the weights above) is significantly reduced. The LB algorithm reduces the number of unsatisfied users due to load imbalance between neighbouring cells; however, the HO performance is degraded, as the call drop ratio, HO ping-pong ratio and HO failure ratio are increased compared to the reference scenario. Executing the two SON functions in parallel (and with no coordination) results in LB cancelling the positive effects of HO optimisation, as the HO performance reaches a similar level for call drop ratio and HO failure ratio as the case without HO optimisation. LB is able to change HO offsets a few times between consecutive actions of the HO optimisation. In some cases, due to a particular load imbalance, the LB algorithm adjusts the offset, and therefore the HO event (A3), in the opposite direction to what would favour the HO performance, which results in a poor HO performance.

Figure 20: Overall performance summary


The Alignment function has been introduced in scenario (4) to enhance the joint performance of the two SON functions. The Alignment function in this case prevents HO optimisation from adjusting Hys in case a cell has frequently been overloaded in the recent past. By using the SON Coordinator functionality it is possible to combine the benefits of both algorithms, as shown in Figure 20. The number of unsatisfied users and the HO ping-pong ratio are slightly increased; however, the call drop ratio is significantly reduced compared to the uncoordinated case.
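The coordination rule applied in this case study can be sketched as a simple guard around the Hys change requests of the HO optimisation function (the overload-history representation and threshold are assumptions of this sketch):

def allow_hys_change(cell_overload_history, overload_ratio_threshold=0.3):
    # cell_overload_history: recent per-period overload flags (1 = overloaded) of the cell.
    # Block Hys adjustments by HO optimisation for cells that have recently been
    # overloaded often, so that LB retains control of the cell border in these cells.
    if not cell_overload_history:
        return True
    overload_ratio = sum(cell_overload_history) / len(cell_overload_history)
    return overload_ratio < overload_ratio_threshold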

In conclusion, it can be seen that there is a coupling between the LB and HO optimisation functions, which results in poor performance when executing the two SON functions without any coordination. A simple Alignment function, which suspends the actions of the HO optimisation function, can increase the overall performance of the two SON functions. In particular, the negative effects of LB on the HO performance can be reduced. It is worth noting that the coupling between LB and HO optimisation exists in this case due to the choice of the algorithms and control parameters in the respective SON functions. If load is balanced using a different approach, e.g., a different control parameter, then the coupling may not exist at all and there may not be a need for a coordinator in this case.

4.3 Discussion and Open Issues

The need for or benefit of SON coordination depends on various properties of the SON functions in operation; a careful design of the SON functions may result in no coupling or dependencies at all, in which case coordination may not be needed. The degree of coupling observed between SON functions may also depend on the vendor implementation and on the layer at which the corresponding SON functions are implemented. For example, if two SON functions that have interdependencies are implemented at NE level, the resolution of this coupling will be vendor dependent. In case two SON functions that have interdependencies have an impact across vendor domains, coordination between them will be necessary.

For example, a case study carried out within the SOCRATES project with the goal to understand the dependency between HO optimisation and admission control optimisation has found no or negligible coupling between the two SON functions. On the other hand, the results of the case study on LB and HO optimisation presented in Section 4.2.3 show that there exists a coupling and that using a coordinator can considerably enhance the performance of the two integrated SON functions. This, of course, depends on the actual choice of targets and control parameters of the SON functions and the degree of coupling may differ in a constellation where other mechanisms for achieving the LB and HO optimisation targets are used. Considering the sheer number of options in designing SON functions it is difficult to generally foresee the need for or gain of coordination.

An important aspect regarding the introduction of a SON Coordinator is its required implementation effort and complexity to get the system running, compared with the reduced effort and complexity for maintaining and aligning individual SON functions. As noted above and in Section 4.2.1, there are several factors that influence this ratio, and hence the business case for a SON Coordinator. Finding the point from where the implementation of a SON Coordinator becomes more valuable than a “manual” coordination of the individual SON functions requires a good understanding of the respective operator’s processes, objectives, and business challenges.

There are some open issues to be solved regarding the introduction of a SON Coordinator. Primarily, the conversion of the operator's high-level performance objectives into SON (Coordinator) function specific policies still requires conceptual work. Operators have a clear opinion on what their high-level or business-level performance objectives are, and 3GPP standardisation has already started the definition of policies for individual SON functions, but there is a gap between these two parts of the "policy continuum" (see [16]) which needs to be filled. Potential solutions could be to establish knowledge management or self-learning systems, to enable an automated conversion from the high-level to the detailed technical policies.

Furthermore, while the implementation of SON functions into LTE networks evolves, it becomes clear that it will not be possible to simply convert the current manual operation processes into SON operation processes. While in manual operation, performance and fault analysis is carried out by the human operator based on individual experience and knowledge regarding the configuration changes to be made, and their impact on system performance, such an approach will not work with automation. Advanced studies and simulations of the network behaviour with running SON functions are required, and this also applies in case SON coordination is implemented.

In case a SON system is implemented and a selection of possible adjustments, i.e., the high-level policies, is presented to the operator, the operator will be tempted to set them all to unrealistically high levels, for example, 99.9% coverage, 0.01% call drop rate, or a cell edge throughput of 15 Mbps.

Some of these targets may be achievable individually, but normally not simultaneously. While the SON Coordinator can do its best to achieve these targets, it will have to identify where these targets cannot be achieved and inform the operator. This could trigger a SON function to recommend where and what actions are to be taken to satisfy the high-level performance objectives. In general, consideration is required as to how the operator is provided with feedback on the tension between objectives, and how to set a proper combination of objectives in the operator policy.

4.4 Conclusions on Integrated SON Functions

As already outlined in the discussion on SON coordination in Section 4.2.3, it is quite difficult at the current stage to determine to what degree a SON Coordinator is helpful and necessary. This clearly depends on the number and type of different SON functions implemented, on the interrelations between these SON functions, furthermore on the dynamics of the network and its layout, and finally on the SON preferences and setup of the individual operator. As long as very few SON functions are implemented, the effort required to introduce and set up a SON Coordinator may be disproportionate to the actual gains achieved, at least compared with a manual coordination between the individual SON functions by setting their thresholds, allowed configuration change step sizes, etc.

The determination of this break-even point is definitely an issue for further research. A first step has been made within the SOCRATES project, apart from the SON Coordinator activity, with the integration use cases (see Chapter 9). As part of SOCRATES Activity 3.2, several pairs of stand-alone SON use cases have been analysed, with the result that for some of them little or no interaction regarding control parameters can be observed (namely Admission Control and Handover Parameter Optimisation, see Section 9.3), while Load Balancing and Handover Parameter Optimisation show a considerable interaction, as can also be seen from the results of the case study in Section 4.2.3. For the remaining integration use case considered within SOCRATES, macro and home eNodeB handover parameter optimisation (see Section 9.2), no numerical analysis has been made regarding the potential gain of using a SON Coordinator.

Taking also into account that the interrelation between SON functions of the different SON areas self-optimisation, self-configuration and self-healing still needs analysis work (e.g., regarding the prioritisation of self-configuration or self-healing tasks versus self-optimisation tasks), and that a much higher number of SON functions may considerably increase the complexity of the interdependencies between single SON functions, it becomes clear that the whole area of SON coordination and SON management still requires research work.

5 Implications of the Introduction of SON

Changes triggered by SON are manifold. Most obvious is the influence on the radio network planning, optimisation and operations process, parts of which are replaced by SON functionalities. SON functionalities may also require new interfaces, new network elements or new functionalities of existing network elements, which has an impact on the network architecture.

In the following sections, the impact of the introduction of SON is considered for various aspects. First, in Section 5.1, the impact on radio network planning and operations is considered. Subsequent sections consider the impact on OPEX and CAPEX (Section 5.2), network architecture (Section 5.3), standardisation (Section 5.4) and regulation (Section 5.5). The chapter ends with some concluding remarks (Section 5.6).

The focus is on the implications of SON functionality as envisioned by the SOCRATES project. Note that the impact of SON on network performance is also very important, but as this topic is extensively considered in Chapters 3 and 4, it is not considered in this chapter.

5.1 Impact on Radio Network Planning and Operations

In Section 1.2, the various stages of mobile network operations were considered. SON is expected to have a significant impact on the radio network planning and operations processes, as many previously manual processes will become automated. Traditionally, various tools are used for network planning and operations. Introducing SON is likely to diminish the importance of these tools, although in some cases it may be beneficial to continue using them. In the following sections, the impact of SON on the tools and processes relating to radio network planning and operations will be considered, in particular the impact that the SOCRATES results are likely to have on these processes. Four specific aspects are considered in this section:

Radio network planning (Section 5.1.1)

Radio network optimisation (Section 5.1.2)

Cell outage management (Section 5.1.3)

Operator policies and control (Section 5.1.4)

5.1.1 Radio Network Planning

Network planning traditionally includes tasks such as selecting base station locations, setting tilt and azimuth values, determining the number of sectors and maximum transmit power, and applying default parameter values. It is obvious that certain aspects of network planning such as selecting base station locations for initial roll-out will still be required even when SON functionality is applied.

However, some tasks, such as configuration of tilt values, can be automated, as illustrated by the ‘Automatic Generation of Initial Parameters for eNodeB Insertion’ (AGP) use case, see Section 8.8. This removes the need to manually plan tilt values. However, this functionality does require Remote Electrical Tilt (RET) antennas to be available at the base station.

The parameter values will still require a start value. For most parameters, default values will be available, and these can be used as start values. Due to the potential of SON to quickly adapt parameters, it may be possible to use the same start values for all base station types.

Another impact of SON is that it may reduce the need to find the best possible locations for base stations. In general, networks are planned in such a way that minimum optimisation will be needed to obtain good network performance. With SON, this is not so important, so even a base station in a non-ideal location can achieve good performance.

There will also be an indirect use of the SON functionality in the network. The intelligence associated with SON can also be used to come up with suggestions for locations for new base stations, and identify the need for new base stations. Obviously, there will be a difference between initial network roll-out and network expansion. For initial network roll-out, there will be no SON data to use. However, for network expansion, SON data from the already deployed base stations can be used to identify the need for new base stations. However, it will still be necessary for a human operator to make the final decision on the base station location, as there are many non-technical criteria involved in decisions relating to new base stations.

In addition, data collected from existing base stations could potentially also be used for planning the parameters of new base stations.

Overall, it is concluded that SON has the potential to remove the need for all planning, except for the planning of base station locations. However, SON data can in some cases help with the planning of new base station locations.

5.1.2 Radio Network Optimisation

The impact that SON will have on an operator’s radio network optimisation processes will depend on what the current processes are. Some operators do more optimisation than others, see Section 2.3. For operators that do a lot of optimisation, SON will more likely replace manual processes. For operators that do not do a lot of optimisation, SON will improve network performance compared to the current situation. In general terms, the impact of SON on radio network optimisation will be considered in this section.

Without SON, optimising parameters for each cell individually would be highly time-consuming, but with SON it becomes feasible in practice. The handover optimisation work (see Section 8.2) has illustrated the potential of optimising the parameters of individual cells, and has shown that this results in improved network performance. Before the SON algorithm is activated, all cells have the same default handover parameters. The SON algorithm then adapts the handover parameters within each cell individually, resulting in improved handover performance.

Another example of optimising parameters in individual cells is the admission control parameter optimisation use case, see Section 8.1. The best settings for each cell are dependent on the properties of the traffic within that cell, and gains can be achieved by optimising individual cells.

However, optimising parameters for each cell will not always result in gains. An example of this is the packet scheduler parameter optimisation use case (Section 8.2). Optimising scheduler parameters for each cell using SON was found to have minimal gains. A good scheduler will already optimally schedule various types of traffic using a default configuration that can be used for all cells.

Therefore, when considering SON for radio network optimisation, the benefit of cell individual optimisation needs to be analysed before implementing such SON functions. In some cases there will be significant benefits, but not always.

Another aspect of SON is that it becomes possible to optimise parameters on a shorter timescale. This enables the network to respond to conditions that vary over shorter periods. The load balancing work (see Section 8.4) has illustrated the potential of adapting parameters on a short timescale, and has shown that performance gains can be achieved.
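To illustrate the kind of short-timescale adaptation a load balancing function performs, the sketch below periodically shifts a cell-individual handover offset towards less loaded neighbours. It is only a minimal illustration under assumed names and thresholds (cell loads in [0, 1], offsets in dB); it is not the SOCRATES load balancing algorithm of Section 8.4.

    # Minimal sketch (illustrative, not the SOCRATES algorithm): periodic load
    # balancing that shifts the cell-individual handover offset towards a less
    # loaded neighbour. Names and thresholds are assumptions.
    def balance_load(cell_load, neighbour_loads, offsets, step=0.5,
                     min_offset=-6.0, max_offset=6.0, load_margin=0.2):
        """Return updated handover offsets (dB) per neighbour relation.

        cell_load, neighbour_loads: utilisation in [0, 1].
        A positive offset makes handover to that neighbour easier.
        """
        new_offsets = {}
        for nb, nb_load in neighbour_loads.items():
            cur = offsets.get(nb, 0.0)
            if cell_load - nb_load > load_margin:
                cur += step   # push traffic towards the less loaded neighbour
            elif nb_load - cell_load > load_margin:
                cur -= step   # stop offloading / pull traffic back
            new_offsets[nb] = max(min_offset, min(max_offset, cur))
        return new_offsets

    # Example: an overloaded cell (90% load) with neighbours at 40% and 85% load.
    print(balance_load(0.9, {"cellB": 0.4, "cellC": 0.85},
                       {"cellB": 0.0, "cellC": 0.0}))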

An important part of radio network optimisation is using drive tests to obtain measurements that are used as input to the optimisation. For SON purposes, it should be possible to obtain UE measurements automatically, so that the optimisation can be performed without the need for drive tests. Replacing drive tests with UE measurements requires that the UE measurements can be localised. The X-map estimation activity (see Section 8.7) has shown that this is possible, and that coverage maps can be generated from UE measurements.
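As a minimal illustration of how localised UE measurements can replace drive-test data, the sketch below aggregates geo-referenced RSRP reports into a coarse coverage grid. The data model (report tuples, 50 m pixels) is an assumption for illustration purposes and not the X-map estimation method of Section 8.7.

    # Minimal sketch (assumed data model): average localised UE RSRP reports
    # per grid pixel to obtain a coarse coverage map.
    from collections import defaultdict

    def build_coverage_map(reports, pixel_size=50.0):
        """reports: iterable of (x_metres, y_metres, rsrp_dbm) tuples.

        Returns {(ix, iy): mean RSRP} where (ix, iy) indexes pixels of
        pixel_size x pixel_size metres.
        """
        acc = defaultdict(lambda: [0.0, 0])
        for x, y, rsrp in reports:
            key = (int(x // pixel_size), int(y // pixel_size))
            acc[key][0] += rsrp
            acc[key][1] += 1
        return {key: total / count for key, (total, count) in acc.items()}

    # Example: three reports, two of which fall into the same 50 m pixel.
    print(build_coverage_map([(10, 20, -95.0), (30, 40, -101.0), (120, 30, -88.0)]))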

In addition to optimisation of macro cell based networks, SON is also required for Home eNodeBs (HeNBs). As home base stations in general are a fairly new concept, and 3G home base station deployments have only recently started, it is difficult to assess the impact of SON on the radio network optimisation of HeNBs. However, SON functionality is already included in 3G home base stations, and it will also be applied for Home eNodeBs. Work in SOCRATES has considered SON functionality relating to interference reduction and handover optimisation for HeNBs (see Section 8.5). This has shown the potential of fully autonomous optimisation of HeNBs, but has also highlighted some challenges.

As highlighted at the start of this section, the impact of SON on radio network optimisation will depend on the current operator approach. Often, operators will focus on addressing cells with particularly bad performance. If the appropriate SON functionality is available, the issues that are causing this bad performance will be resolved autonomously. Overall, it is concluded that the SOCRATES work has shown that SON functionality can significantly contribute to radio network optimisation.

5.1.3 Cell Outage Management

When there is a cell outage, this will have an impact on the UEs within the coverage of that cell. As it may take some time to resolve the cell outage, automatic compensation for the cell outage is an appealing concept. Work on cell outage compensation in SOCRATES has shown that it is possible to apply automatic compensation (see Section 8.6). This results in an increase in the number of served users, but also a reduction in the quality that the users are experiencing. There is obviously a trade-off, and an operator policy is required to decide what is most important.
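The coverage/quality trade-off can, for instance, be expressed as a policy-weighted score that an operator uses to judge whether a candidate compensation setting is acceptable. The weights and values below are purely illustrative assumptions, not results from the SOCRATES study.

    # Minimal sketch (illustrative weights): an operator policy expressed as a
    # weighted trade-off between coverage (served users) and quality.
    def compensation_score(served_ratio, mean_quality, w_coverage=0.7, w_quality=0.3):
        """Both inputs normalised to [0, 1]; larger is better."""
        return w_coverage * served_ratio + w_quality * mean_quality

    baseline = compensation_score(0.80, 0.90)      # outage, no compensation
    compensated = compensation_score(0.95, 0.75)   # more users served, lower quality
    print(compensated > baseline)  # True with this coverage-weighted policy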


To be able to apply automatic cell outage compensation based on tilt, Remote Electrical Tilt (RET) is required.

The SOCRATES work has shown that cell outage compensation does work, but that it does not fully compensate for the cell in outage, and that it will still be necessary for an operator to resolve a cell outage as quickly as possible.

5.1.4 Operator Policies and Control

SON functionality will result in automation in the network. However, that does not mean that there will be no interaction with the operator. There are two types of interaction that will be required. Firstly, the operator will have to define what they want from the SON functionality, i.e., what the SON functionality should do to improve network performance. This is referred to as operator policy. Secondly, the operator will want to monitor the SON functionality and be able to exert some control over what it is doing. This is referred to as operator control. These two aspects will be addressed in this section.

Operator Policies

Operator policy is important as there are many trade-offs in network optimisation. For example, for handover optimisation, there may be a trade-off between radio link failure ratio and ping-pong handover ratio. The operator will need to decide which metrics are the most important to optimise, and which are less important. An operator interface will be required to specify the operator policy. Work in SOCRATES has considered two approaches to deal with trade-offs between metrics (see Section 2.3 for information on metrics).

The first approach is to define an overall metric that is a weighted sum of individual metrics. If smaller values equate to better performance for the individual metrics, then the aim is to minimise the weighted sum. An example of this is the handover optimisation use case (see Section 8.2). For that use case, the overall metric was the weighted sum of the radio link failure ratio, handover failure ratio and ping-pong handover ratio. It will be important for the operator to determine the appropriate weighting factors to suit the operator requirements. This in itself is not a straightforward task, as putting a number on the relative importance of different metrics is not easy. However, given specific weighting factors, the results from the handover optimisation use case showed that using a weighted sum is an effective optimisation approach, which allows the operator to specify the relative importance of different metrics.
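As a minimal illustration of the weighted-sum approach, the sketch below combines the three handover metrics into a single cost value to be minimised; the weights are illustrative assumptions, not the values used in the SOCRATES study.

    # Minimal sketch: composite handover metric; smaller is better, so the SON
    # function searches for the parameter setting that minimises this value.
    def handover_cost(rlf_ratio, hof_ratio, pingpong_ratio,
                      w_rlf=0.5, w_hof=0.3, w_pp=0.2):
        return w_rlf * rlf_ratio + w_hof * hof_ratio + w_pp * pingpong_ratio

    # Example: compare two candidate hysteresis settings for one cell.
    print(handover_cost(0.02, 0.01, 0.05))    # setting A
    print(handover_cost(0.015, 0.012, 0.08))  # setting B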

The second approach to handling multiple metrics is to determine a priority order for the different metrics, and to ensure that the targets for the higher priority metrics are met first, before aiming to meet the targets for the lower priority metrics. An example of this is the admission control parameter optimisation use case (see Section 8.1). For example, accepting handover requests has higher priority than accepting new calls. Therefore, keeping the handover failure ratio metric below the target value is considered first. If that target value is met, then reducing the call blocking ratio is also considered.
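A minimal sketch of this priority-ordered approach is given below: candidate settings are first filtered on the higher-priority target (handover failure ratio) and only then compared on the lower-priority metric (call blocking ratio). The metric names, targets and candidate values are illustrative assumptions.

    # Minimal sketch (illustrative targets): higher-priority metrics must meet
    # their targets before lower-priority metrics are optimised.
    def select_setting(candidates, targets):
        """candidates: list of (name, {"ho_failure": x, "call_blocking": y}).
        targets: list of (metric, target) from highest to lowest priority."""
        feasible = list(candidates)
        for metric, target in targets:
            meeting = [c for c in feasible if c[1][metric] <= target]
            if meeting:               # keep only candidates meeting the target
                feasible = meeting
            else:                     # no candidate meets it: pick the best one
                feasible = [min(feasible, key=lambda c: c[1][metric])]
                break
        # among the remaining candidates, minimise the lowest-priority metric
        last_metric = targets[-1][0]
        return min(feasible, key=lambda c: c[1][last_metric])[0]

    candidates = [("A", {"ho_failure": 0.004, "call_blocking": 0.03}),
                  ("B", {"ho_failure": 0.012, "call_blocking": 0.01})]
    print(select_setting(candidates, [("ho_failure", 0.005), ("call_blocking", 0.02)]))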

The approaches described in the above paragraphs are examples of SON function policies. This means that they are policies that are applied to a specific SON function. To avoid the need for the operator to specify individual policies for each SON function, ideally it should be possible to define high-level policies. High-level policies specify overall network performance, while the SON function policies specify the requirements for the individual SON functions. Ideally, it should be possible for the operator to define a high-level policy, which would automatically be translated into detailed policies for individual use cases, as described for the Policy Function of the SON Coordinator (see Section 4.2). Developing such an automatic translation from high-level policies to SON function policies would be highly challenging. However, it would make the task of specifying policies significantly easier for the operator. In addition, it would ensure that individual SON functions avoid policies that conflict with each other.

In situations where there are multiple SON functions, policies may also be required to manage conflicts between use cases (as opposed to conflicts between policies). For example, there is a control parameter conflict between load balancing and handover optimisation (see Sections 9.1 and 4.2.3). To enable the SON Coordinator to decide how to manage that, a policy is required. For example, the operator may want to specify which of the two SON functions should be given priority. Such a policy would be used as an input to the Alignment Function of the SON Coordinator. The approach described in Section 4.2.3 prevents the handover optimisation function from adapting the hysteresis when a cell is often overloaded. The algorithm is therefore implicitly giving a higher priority to the load balancing function.
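The following sketch illustrates such a priority policy as it could be applied by an Alignment Function: a hysteresis change requested by handover optimisation is rejected while the cell is frequently overloaded, so that load balancing keeps control of the shared parameter. The function names, the overload measure and the threshold are illustrative assumptions, not the SOCRATES SON Coordinator implementation.

    # Minimal sketch of an alignment policy giving load balancing priority over
    # handover optimisation in overloaded cells (illustrative names/threshold).
    def align_request(request, cell_state, overload_ratio_threshold=0.3):
        """request: {"function": ..., "parameter": ..., "new_value": ...}
        cell_state: {"overload_ratio": fraction of recent intervals with overload}.
        Returns True if the change may be applied."""
        if (request["function"] == "handover_optimisation"
                and request["parameter"] == "hysteresis"
                and cell_state["overload_ratio"] > overload_ratio_threshold):
            return False  # load balancing keeps control of the shared parameter
        return True

    print(align_request({"function": "handover_optimisation",
                         "parameter": "hysteresis", "new_value": 3.0},
                        {"overload_ratio": 0.45}))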

Operator Control

Operator control is important as an operator may not trust the SON functionality to perform as expected. That results in a monitoring requirement, which means the operator should be able to see how well the SON functionality is performing. In addition, the operator must also be able to take action if there is an issue with the SON functionality, for example by switching off the SON functionality or by changing the SON configuration. Within the SON Coordinator in SOCRATES, there is a Guard Function (see Section 4.2). The purpose of this Guard Function is to detect unexpected behaviour, and therefore reduce the need for operator control.
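As a minimal illustration of what such a Guard Function could check, the sketch below flags KPIs that have degraded beyond an operator-defined tolerance after recent SON actions; the KPI names and the tolerance are illustrative assumptions.

    # Minimal sketch (illustrative thresholds) of a guard check: report KPIs
    # that degraded by more than a relative tolerance, so that the operator can
    # be alerted or the SON function suspended.
    def guard_check(kpi_before, kpi_after, tolerance=0.1):
        """kpi_*: dicts of KPI name -> value, where larger means worse
        (e.g. drop ratio). Returns the KPIs that degraded beyond tolerance."""
        degraded = []
        for name, before in kpi_before.items():
            after = kpi_after.get(name, before)
            if before > 0 and (after - before) / before > tolerance:
                degraded.append(name)
        return degraded

    print(guard_check({"drop_ratio": 0.010, "blocking": 0.020},
                      {"drop_ratio": 0.014, "blocking": 0.019}))
    # prints ['drop_ratio']: only this KPI degraded beyond the tolerance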

5.2 Impact on OPEX and CAPEX

Putting a quantitative value on the benefits of SON is a difficult task. From a qualitative point of view, there are two important types of gain from SON:

OPEX savings

CAPEX reduction

5.2.1 OPEX Savings – Rough Estimation

First, consider a mobile operator: what are the total expenditures required to operate the network? In several public presentations, consultancy firms have presented their findings:

Yankee Group Research [55] looks at the network operations cost (next to general & admin, customer care, marketing & sales, interconnection & roaming, and terminals & material) in the OPEX breakdown. This varies between 30% (China) and 20% (Latin America). Of these network operations costs, 52% is spent on site backhaul and leases, 29% on technical personnel, 13% on field maintenance and product support, and 6% on other costs.

Analysys Mason [56] estimates the network costs at 35% of the total OPEX. The other categories are marketing, interconnection and other.

Pyramid Research [57] even presents the percentage of the network OPEX that is allocated to network optimisation: 35% of the operators spend less than 5%, around 50% spend 5-10%, 5% spend 10-20% and 15% of the operators spend more than 20% of their network OPEX on network optimisation.

Combining these figures we conclude that roughly 30% of the OPEX is spent on the network. The part of these costs that can be saved by using SON depends on the type of operator, but it is not limited to the pure network optimisation costs. A rough estimate is that the network optimisation and operations OPEX is around 20% of the network OPEX. This means that 6% (20% of 30%) of the total OPEX is spent on network optimisation and operations. In the case of Vodafone, this amounts to 6% of 6,683 million pounds (Vodafone Group OPEX from 2007), i.e., roughly 400 million pounds, which is equivalent to about 465 million Euro.

The next step is to consider OPEX reductions for LTE as a result of SON. The above figure of 465 million Euro applies to 2G and 3G. If it is assumed that the LTE OPEX will be 50% of that for 2G and 3G, the LTE OPEX for network optimisation and operations will be 232.5 million Euro. Ideally, SON would reduce this value to zero. However, in practice that is unlikely to be possible. If a conservative estimate of a 20% reduction in OPEX as a result of SON is applied, then the OPEX saving is 20% × 232.5 million Euro = 46.5 million Euro.
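The rough estimate above can be written out as a simple calculation; all figures are the illustrative assumptions from the text, not measured values.

    # The rough OPEX estimate as a calculation sketch (figures from the text).
    total_opex_gbp_m = 6683        # Vodafone Group OPEX 2007, million GBP
    network_share = 0.30           # ~30% of total OPEX spent on the network
    optim_share = 0.20             # ~20% of network OPEX for optimisation & operations

    optim_opex_gbp_m = total_opex_gbp_m * network_share * optim_share  # ~401, rounded to ~400 million GBP
    optim_opex_eur_m = 465         # the same amount expressed in million EUR (text's conversion)

    lte_share = 0.50               # assumed LTE fraction of the 2G/3G figure
    son_reduction = 0.20           # conservative SON-induced reduction

    lte_optim_opex_eur_m = optim_opex_eur_m * lte_share        # 232.5 million EUR
    saving_eur_m = lte_optim_opex_eur_m * son_reduction        # 46.5 million EUR
    print(round(optim_opex_gbp_m), lte_optim_opex_eur_m, saving_eur_m)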

OPEX Savings – Example of a Detailed Analysis

The automation of manual tasks enabled by SON results in an OPEX saving. However, quantifying the OPEX saving due to the introduced SON functionality is not straightforward. In SOCRATES deliverable D2.3 [49], Section 2.1.6, a possible approach for quantifying the OPEX savings is presented. First, the method proposes to quantify the manual effort for the case that SON is not used, consisting of three steps:

Step A: Obtaining input data for problem solving

Step B: Determine new parameter settings

Step C: Apply the new parameter settings

Second, the method proposes to quantify the manual effort reduction in Step A to Step C when SON functions are used. This then defines the overall OPEX reduction.

In the following we present an example of how to calculate the effort for Step A to Step C for a typical handover optimisation use case. Note that the following example is purely for illustration purposes, i.e., to show how the OPEX calculation can be done. The reader should not interpret the presented effort values as ‘representative’ or as ‘an industry norm’ for this kind of manual optimisation. The manual effort invested in, e.g., handover optimisation depends on, among other things, the amount and skills of the personnel, the availability of troubleshooting and analysis tools/aids, the complexity of the problem at hand, the operator’s quality strategy, etc.

Step A – Problem Identification (Estimated Effort: 5 work days, 1 work day = 8 hours)

In this phase the operator identifies the problem, e.g., an unacceptable number of call drops in handover regions, handover ping-pong, etc. The analysis can be based on performance counters collected from network nodes, special UE traces and, if applicable, drive tests. At the end of this stage the operator knows how severe the problem is and where it is located in the network.

Step B (Part I) – Parameter selection and optimisation (Estimated Effort: 15 work days)

In this phase the operator performs expert analysis to identify possible solutions, in terms of parameter adjustments, to solve the handover problem. As a result, the operator knows which parameters have to be changed, how these parameters should be changed (e.g., increase or decrease, how aggressively, at which granularity, etc.), which network nodes should receive these parameter changes, and how the trial period will be carried out (i.e., the trial plan). Note that the expert analysis could be aided by simulation/analysis software (if present/applicable) or a trial network.

Step B (Part II) – Impact Analysis (Estimated Effort: 1.5 work days per trial week)

In this phase the operator applies ‘delta-changes’ to the parameters identified in Step B (Part I) (half a working day) and analyses the impact of the parameter change on network performance. Again, the operator can collect input data (network counters, specialised UE traces, drive tests, etc.) during a sufficiently long time interval, e.g., a working week, in order to filter out random incidents and focus only on structural improvements. Each week, one day of effort is needed to propose the follow-up ‘delta-change’. For the handover optimisation use case it is assumed that 5 iterations will be needed, resulting in a time span of 5 weeks.

Step C – Optimal setting selection & Network wide deployment (Estimated Effort: 2.5 work days)

In this phase the operator selects the optimal settings (effort: one day) based on the data from the trial weeks in Step B (Part II), see above. Then, the optimal setting is introduced network wide (effort: half a day) and relevant input data is collected for 1-2 weeks. The operator checks (effort: one day) whether this network-wide deployment gives structurally better handover performance compared with the situation before the optimisation. At the end of this phase the optimisation can be finalised and, if structural improvements are measured, also justified/approved.

The total effort for Step A to Step C is then equal to 5 + 15 + 1.5×5 + 2.5 = 30 working days. Assuming labour costs of 800 Euro per day, this amounts to 24,000 Euro per optimisation cycle. Additionally, if an operator performs the optimisation cycle at the end of each quarter, i.e., four times per year, the annual cost for handover optimisation is 96,000 Euro.
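The same calculation, written out with the illustrative effort figures and the assumed daily rate and quarterly cycle from the text:

    # Illustrative effort figures from Steps A-C, expressed as a calculation.
    step_a_days = 5                 # problem identification
    step_b1_days = 15               # parameter selection and optimisation
    step_b2_days = 1.5 * 5          # impact analysis: 1.5 days per trial week, 5 weeks
    step_c_days = 2.5               # optimal setting selection and network-wide deployment

    total_days = step_a_days + step_b1_days + step_b2_days + step_c_days  # 30 working days
    cost_per_day_eur = 800
    cycles_per_year = 4

    cost_per_cycle = total_days * cost_per_day_eur   # 24 000 EUR
    annual_cost = cost_per_cycle * cycles_per_year   # 96 000 EUR
    print(total_days, cost_per_cycle, annual_cost)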

Ideally, by full automation of the manual effort in Steps A to C above, the total OPEX cost of 96,000 Euro can be completely replaced by self-optimising handover functionality. In practice, however, there might still be some manual effort needed for fine-tuning and monitoring of the self-optimising handover functionality.

Additionally, it is important to emphasise the following issues:

1) Although the self-organised handover optimisation (SO-HO) function might take as long as manual optimisation and eventually reach the same optimised parameter settings, it should be noted that SO-HO can be constantly active, while manual optimisation has to be restarted each time. An operator may decide to perform handover optimisation only once or twice per year, but then, for a fair comparison, the OPEX should be estimated on the basis of the same amount of parameter adjustments (i.e., assuming both optimisations have the same convergence time and reach the same optimised solution).

2) The SO-HO function enables the operator to perform additional optimisation cycles without (or with insignificant) OPEX effort. This is not possible with manual optimisation. Further, it enables the operator to scale down the optimisation to a very small network area, ideally down to cell level, which is prohibitive from an OPEX point of view with manual optimisation.

3) Next to handover optimisation there are many other optimisations that can be done in parallel. With manual optimisation this might not be possible; the optimisations would instead be done in sequence, possibly converging towards a sub-optimal global network setting. With parallel self-organising functionalities that are coordinated via the SON coordination function (guided by the operator’s high-level policy), this kind of sub-optimal solution might be avoided.

5.2.2 CAPEX Reduction

The performance improvements from SON introduction can also result in CAPEX reduction. Three possibilities are envisaged that can be adopted by the network operator:

o The operator accepts the performance improvement with SON, i.e., the network will perform better while the operator still continues with his original CAPEX plans (i.e., no CAPEX reduction).

o The improvement gives the operator the possibility to postpone an upcoming CAPEX investment, i.e., the SON improvement results in a higher capacity. Postponing the investment is a CAPEX reduction.

o The improvement gives the operator the possibility to build the network with fewer sites. Here a direct CAPEX reduction is realized.

5.3 Impact on Network Architecture

SON will not change the fundamental network architecture of LTE. The key change is that additional functionality will be added to various network entities, and some signalling will be required between these entities. This functionality can potentially be implemented in the eNodeB, the Element Manager (EM) - also called Domain Manager (DM) - and the Network Manager (NM).

One of the key aspects of the architecture for SON is whether the algorithms are implemented in a centralised manner (at NM level), in a distributed manner (at eNodeB level), or as a combination of the two (hybrid).

A key factor is that the architecture of the SON solution should enable it to work in a multi-vendor environment.

The architecture will also be determined by the supplier of the SON functionality. Network vendors can implement SON functionality at eNodeB, DM, and NM level. SON functionality from third party companies will generally be implemented at NM level.

This section considers the impact on network architecture in three sections:

Individual SON functions (Section 5.3.1)

Integrated SON functions (Section 5.3.2)

SON Coordination (Section 5.3.3)

5.3.1 Individual SON Functions

In this section, the required architecture for each of the individual SON functions studied in SOCRATES will be considered.

Admission control and packet scheduling algorithms in LTE (non-SON) will be implemented at the eNodeB. As they work on a per-cell basis and take their decisions based on local measurements from the cell, a distributed implementation is usually applied. No large changes in this network architecture are expected due to the introduction of self-optimisation for admission control and packet scheduling algorithms. The self-optimised algorithm will work on a per-cell basis. It will again base its decisions mainly on local measurements from the cell.

The SON functionalities for handover optimisation, load balancing and interference coordination are all based on the existing architecture. Communication between eNodeBs will increase with the SON functionalities, but the existing X2 interface has been assumed.

The SON functionality for home eNodeBs is most likely to be distributed in the home eNodeBs themselves. Not all performance measurements from the home eNodeBs will be reported to the central O&M system, as this would overload the system. The home eNodeB SON functionality will not affect the existing network architecture.

Most likely, cell outage detection and compensation, as well as x-map estimation, will be executed in each base station, and a distributed approach is foreseen. Each cell will as such take appropriate actions based on observations made in its own cell, possibly coordinated with the neighbouring cells. It is most likely that the trigger for cell outage compensation will come from O&M, based on identified measurements or alarms. The cell outage management functions will not impact the network architecture.


A key feature of self-configuration and self-optimisation is that the network configuration can be changed automatically within and by the network. As a result of this, new and elaborate concepts for feeding back the network configuration from the network into the network management and the network planning tools will become important. The extent to which this will require changes in the envisioned network will depend on the specific solutions to the feedback problem.

For the interfaces that are required (X2 and Itf-N), it is important that these interfaces are available and that the delay in transmitting signalling over these interfaces is small. It is not expected that this signalling will result in a significant overhead.

5.3.2 Integrated SON Functions

In general, similar considerations apply for integrated SON functions as do for the individual SON functions that are being integrated. The impact on the network architecture of integrated SON functions will be the combined impact of the SON functions that are being integrated.

5.3.3 SON Coordination

Giving general statements and recommendations on the architectural impact of the different SON Coordinator roles would require at least an experimental implementation of the functional approach, together with a set of SON use cases. Therefore, Table 3 describes the minimum impact of the functional roles on the architecture. As SON coordination in general is still a rather new topic in standardisation, and it is currently unclear to what extent it will be required at all, since the number of SON use cases to be implemented in current releases is rather small, the impact described in Table 3 only reflects the current status of research. Future consolidated findings and technical expertise from first SON deployments may considerably change the requirements.

Table 3: SOCRATES SON Coordinator: minimum architectural impact by functional roles

Functional Role: Policy Function
Preferred Architecture: Implemented at different layers (NM, DM, eNB)
Reason: The definition of high-level operator policies is required at network management level, while the enforcement of these policies is required at domain management or network element level.

Functional Role: Autognostics Function
Preferred Architecture: Implemented at different layers (NM, DM, eNB)
Reason: The Autognostics Function collects and processes performance, fault and configuration data required by SON use cases, but also by SON Coordinator functions. Depending on the level where SON use cases are implemented, it is likely that an Autognostics Function is necessary at the same level.

Functional Role: Guard Function
Preferred Architecture: Implemented at different layers (NM, DM, eNB)
Reason: Undesirable performance caused by SON functions may occur, and therefore be detected, at every level where SON functions are implemented. Undesirable performance may even be detectable only at the level where it occurs, since the required performance information is only available there.

Functional Role: Alignment Function
Preferred Architecture: Implemented at different layers (NM, DM, eNB)
Reason: A centralised approach (at network management level only) is reasonable if only a small number of SON functions is implemented and the number of configuration change requests that have to be processed is small. If many SON functions are implemented at several levels (network and domain management level, and network element level), a distributed implementation at least at network and domain management level is more appropriate to reduce the load caused by configuration change requests on the interfaces, but it may cause considerable complexity in the coordination between several Alignment Functions.


5.4 Impact on Standardisation

The impact on standardisation in the area of SON strongly depends on the SON functions implemented in the network. This impact can thereby consist of requirements on the measurements to be defined, requirements on the existing interfaces (e.g., Itf-N, X2, S1) regarding signalling, messages and message format, and the architecture of the network and the OAM system (e.g., regarding additional interfaces or elements to be added).

For SON functions that are implemented at the NE or EM level and that do not require information exchange (e.g., measurements, signalling) across vendor domains, no standardisation is required, at least for this information. However, an operator may request the configuration settings for these SON functions to be similar across vendor domains, i.e., the high-level policies for these SON functions to be similar. In this case standardisation may be required, and corresponding activities in 3GPP have already started for some SON functions, for example Mobility Load Balancing.

For SON functions that are implemented at the NE or EM level and that require information exchange across vendor domains, or for SON functions that are implemented at NM level with a network-wide impact, standardisation of the information to be exchanged across the vendor domains will be necessary to ensure multi-vendor capabilities. Also for these SON functions the operators are likely to request the configuration settings (policies) to be similar across vendor domains.

The potential standardisation impact of the stand-alone and integrated SON functions that have been handled within the SOCRATES project has been analysed in Deliverable D5.10 [58]. The outcome of this analysis shows that, at least for the SOCRATES SON functions, the standardised network and OAM architecture and the standardised interfaces already fulfill most of the requirements. Only for some SON functions an extension of the signalling over X2 and Itf-N interface will be required. The same applies for the measurements and KPIs as most of the handled SON functions assume a distributed implementation at NE level, and existing radio signalling and measurements fulfill the requirements.

However, considering that the SON functions implemented in a real network may differ from the SOCRATES solutions regarding the type of implementation (centralised vs. distributed), and that the control parameters and measurements may also be different, the SOCRATES project recommends closely monitoring standardisation and SON function development progress to ensure the multi-vendor capability of SON functions where required. Also aspects that have not been considered within the SOCRATES project, like the feedback of SON function operational success towards the operator, or SON function operation across different radio network types (3G and LTE networks, for example), may play an important role regarding standardisation impact.

5.5 Impact on Regulation

Regulatory matters deal mainly with aspects related to the allocation of frequencies. Hence the critical point where SON solutions may touch regulatory issues is frequency allocation. Frequencies are allocated to mobile operators on a national basis; however, close to border areas between different countries, coordination is required. In order to ensure interference-free operation close to the borders between different countries, so-called cross-border coordination procedures have been established. Part of this cross-border coordination is the obligation on network operators to announce all relevant data of a cell site to the regulation authorities well in advance of switching on the site. This data includes all antenna configuration data (coordinates, antenna height, antenna type, direction of transmission in azimuth and elevation, transmission power, frequency). As at least part of this data is subject to change by SON, this effect has to be taken into account in the cross-border coordination procedure. Therefore the maximum transmitted power in each direction has to be coordinated. Due to potential interference from a sub-space of the whole parameter configuration space, the range of parameter changes allowed by SON may be restricted.

5.6 Conclusions on Implications

One of the key implications of SON is the impact on radio network planning and operations. The key findings from SOCRATES on what the implications will be are:

Radio network planning: SON has the potential to remove the need for all planning, except for the planning of base station locations. However, SON data can in some cases help with the planning of new base station locations (e.g., suggestions for best locations when extension of operational network is needed).

Radio network optimisation: The SOCRATES results have shown that SON functionality can significantly contribute to radio network optimisation. It enables optimisations that would be impractical to do manually. However, the gains from SON functions should be carefully considered before they are implemented, as in some cases the gains may only be marginal.

Cell outage management: The SOCRATES work has shown that cell outage compensation does work well, but that it does not fully compensate for the cell in outage. Therefore, it will still be necessary for an operator to resolve a cell outage (‘repair’ base station) as quickly as possible.

Operator policies: It has been found that there are often trade-offs in the gains. Often, one metric is improved, while another metric is degraded. As a result of this, defining a clear operator policy becomes important. Operator policy should be defined at a high-level, and this high-level policy should be automatically mapped to SON function policies.

Operator control: It is important that the SON functionality also has a guard function, to enable detection of unexpected behaviour, and reduce the need for operator control.

SOCRATES has defined a possible approach for quantifying the OPEX reduction when SON is introduced in the network. This approach is exemplified via the handover self-optimisation use case.

The introduction of SON results in improved network performance, which in turn can lead to CAPEX reduction, either by postponing a network investment or by deploying fewer sites in the network roll-out phase.

Relating to network architecture, the most relevant aspects are where the SON functionality is located and what signalling is required between the different entities in the network.

SON functionality may also have an impact on standardisation and regulation. However, in terms of standardisation, at least for the SOCRATES SON functions we found that the standardised network and OAM architecture and the standardised interfaces already fulfil most of the requirements, so that the impact of SOCRATES on standardisation was somewhat limited. In border areas between different countries, the fact that SON changes the network configuration needs to be taken into account. This may have an impact on the regulatory issue of cross-border coordination.


6 Conclusions

Background and Motivation

Future wireless access networks will exhibit a significant degree of self-organisation, as also recognised by the standardisation body 3rd Generation Partnership Project (3GPP) [36] and the operator alliance Next Generation Mobile Networks (NGMN) [13]. The principal objectives of introducing self-organisation into wireless access networks are to reduce operational and capital expenditure (OPEX/CAPEX) by diminishing human involvement in network operational tasks, and to optimise network performance. The general idea is to integrate network (re-)configuration and optimisation into a single, mostly automated process requiring minimal manual intervention. The primary achievements of SOCRATES include the development of methods and algorithms for self-organisation in LTE networks, comprising self-optimisation, self-configuration and self-healing, as well as the demonstration of these solutions and the benefits they bring. The work performed has spanned a wide scope, including the definition of use cases and requirements for self-organising networks, assessment methods and criteria for evaluating the developed algorithms and solutions, the derivation of stand-alone SON functions, and finally the development of an integration framework for managing multiple self-organising functions.

Achievements

As an initial starting point, a total of twenty-four use cases [11] were identified, covering important aspects of LTE radio network operations such as pre-operational parameter planning, radio parameter optimisation, and cell outage management. Some of the identified use cases were based on standardisation activities in 3GPP [36], while others were formulated using NGMN recommendations [13]. However, for most use cases significantly more detail than was available from these sources was provided, and in addition several use cases which had not been considered by 3GPP and NGMN were introduced. Significant advantages of employing SON functionality over current practice have clearly been pointed out, revealing the potential for self-organisation in LTE networks. Overall, this identification of use cases gives a solid basis for identifying the most relevant and promising candidates for automation. Further, establishing technical requirements is key in the development of mobile network functions. For this reason, SOCRATES has developed a set of requirements [20] specifically tailored towards self-organising networks, developed to capture the specifics of SON functions and to accommodate a wide range of aspects, e.g., performance and complexity, stability, robustness, and scalability. These requirements have been taken into account when developing the SON functions.

Building on the initial studies mentioned above, an approach has been formulated and used for the development of SON functions. Following this approach, the radio parameters and their dependencies, as well as the contribution from optimisation, have been evaluated for each use case. Based on these evaluations, one or more algorithms for the considered SON functionality have then been developed and evaluated, where performance, robustness and implementation complexity have been investigated. An important contribution of SOCRATES lies in the development of assessment criteria and reference scenarios [49], which have also been used in the evaluation of the algorithms. The assessment approach proposed and used throughout the SON function development includes criteria and methodologies to assess the gains that can be achieved using self-organising networks, and to evaluate and compare the performance of self-organisation algorithms. To this end, a set of criteria covering grade of service, quality of service, etc. has been derived. Further, a significant effort has been invested in developing reference scenarios based on real-world network topology, including models for mobility, traffic, propagation, etc. This has resulted in scenarios based on deployments in a European city, and using these scenarios has enabled the development of SON algorithms that take into account the intricacies and complexity of real-world deployment.

In total, nine out of the twenty-four use cases (including the X-Map use case, which was defined and introduced by SOCRATES) have been elaborated during the course of the project. Controllability and observability studies were performed, and for most use cases new self-organisation algorithms were developed. For a few of the use cases the results showed that developing self-organising algorithms was not needed. In the use cases where self-organising algorithms were developed, it was seen that most of the time the algorithms could improve the KPI(s) considered most important to optimise according to the operator policy, at the cost of an acceptable decrease in the performance of other KPI(s) considered less important.

It is realised that in a future network the involved SON functions will operate in parallel with potentially conflicting parameter settings, which may result in poor and unstable performance. In order to address these issues, significant effort has been invested in analysing and simulating the inter-operation between the developed SON functions. Several integrated SON functions have been simulated in order to evaluate their joint performance, and the results show that in some cases the SON functions can operate in parallel without any need for coordination, whereas in other cases, in particular between handover parameter optimisation and load balancing, coordination improves the overall performance. A framework for handling multiple SON functions, referred to as the SON Coordination framework, has been developed, enabling the operator to specify high-level performance targets, to set constraints on SON operation, to track and monitor SON functions, and to facilitate coordination among multiple SON functions when needed. It is worth noting that conflicts between SON functions depend on various factors, for example the choice of control parameters, the time scale, the targets and objectives of the SON functions, the input measurements, etc. The SON functions can either be independently designed and subsequently coordinated (if there is a need to do so), or the coupling between the SON functions can be minimised already in the initial design. As such, a careful design of the SON functions may result in no coupling or dependencies at all.

The work within SOCRATES has been published extensively during the last couple of years. A large number of papers have been published at important conferences and workshops. In addition, a number of articles have been published in journals, both national and international. SOCRATES has arranged two workshops to disseminate project results, the first one together with two other 7th Framework projects, E3 and EFIPSANS. The SOCRATES project has further interacted with NGMN by receiving input for use case selection from NGMN and by disseminating results and providing recommendations for SON to NGMN. For more detailed information see deliverable D5.8 [52].

For each SON function developed within SOCRATES, the influence on the radio network has been considered, and the need for standardisation has been evaluated. In some cases this evaluation has resulted in standards contributions to 3GPP SA5 [52].

Self-Organising Networks of Today

When concluding the SOCRATES project it is important to mention where we stand today in the SON area compared to when we started the project in 2008. One of the cornerstones for SON was the strong drive for OPEX/CAPEX savings. Operators recognised the increase in complexity of future wireless systems and the possibility to save costs by introducing new SON functions in the network. It was mentioned by some operators that deploying a new network, like LTE, must be done without extra personnel beyond what they already have, which in this case is a clear OPEX saving. Another cornerstone was performance gains: by introducing automatic radio network functions it was expected to enhance the service quality and availability (robustness, resilience) in the network. These drivers, OPEX/CAPEX savings and performance enhancements, are still very valid. In our work with all the use cases for self-optimisation, self-configuration and self-healing these drivers have been addressed.

For OPEX/CAPEX savings it has been difficult to quantify the savings, simply because it is hard to get information on the actual costs of typically manual work. But, for example, in the case of X-map estimation (Section 8.7), we see a clear benefit in minimising the need for drive tests, which clearly saves costs on site visits and man hours for the operator. For performance gains, the results obtained in the project have shown that considerable gains can be achieved through SON functions like handover optimisation, load balancing and cell outage compensation. But still, for both OPEX/CAPEX savings and performance gains, there is a need to develop better models/methods/tools to be able to accurately measure the gains and benefits of SON functions.

Another observation is related to the interaction between SON functions and the necessity to coordinate their operation if there is conflicting behaviour, as mentioned earlier. Even though we have been able to produce interesting results in the coordination cases, we believe more has to be done. One of the challenges is the desire to avoid explicit coordination/synchronisation among the activities, while unattended, distributed operation is still required. In consequence, the SON system shall exhibit a stable and dependable behaviour without this being explicitly imposed. Extensive and detailed simulations will therefore be indispensable to better understand and ascertain the conditions under which a stable and autonomous operation of a SON system is possible, and to better define the role that SON coordination shall play.

Impact on Network Management

A valid question is how big the impact is and how far SON can go. This question is especially interesting for operators, since they are constantly looking for ways to be more effective and cost efficient in all their working processes. Some conclusions can be drawn from the SOCRATES results.

It is evident that when introducing more and more SON functions, the control and observation of the network performance will be done at a higher abstraction level, meaning that the operator will steer the network behaviour with other parameters than today. These could be policies and high-level targets that express target levels for the type of performance to be offered in the network, e.g., coverage, capacity and quality. Some examples are mentioned in the future work in Chapter 7.

In connection to this, another observation is that, since the network will be controlled by new and today unknown parameters, the competence of the operator personnel needs to be high. The introduction of SON will probably lead to fewer operator personnel being required for network management, while the remaining personnel need to have a good knowledge of network management and of the purpose and consequences of the different SON policy settings given as input to the SON functionality. We believe that it will be a challenge to find the correct setting of high-level targets/policies to steer the network in the wanted direction.

Together with this, it is of high importance to have excellent tools for network performance monitoring, since one of the cornerstones for the success of future SON systems is to create trust in the automatic functionality (for example, by providing continuous insight into the actual network performance). Even though SON functionality results in a more automated network, it is expected that the operator will request monitoring of the SON actions and control of the SON functions, for example the possibility to switch off SON functionality or to change the SON configuration. However, limitations on what the SON functionality is allowed to do, or which values it may set for a certain parameter, could also limit the capability of the SON functionality. It should be remembered that, even though operators may find it reassuring, technically it should not be necessary to have the possibility to manually intervene with the SON functionality.

SON is expected to have a significant impact on the radio network planning and optimisation processes, see Chapter 5. Cell-individual and responsive parameter optimisation on a short timescale, introduced with SON, is expected to decrease or even eliminate the need for optimisation tools as used today. Parameter planning that has previously been performed using planning tools will instead be part of the SON domain. It is, however, expected that some tasks not addressed by SON, for example site location selection and RF design, will still need to be performed using planning tools. Feedback from SON functionality could be used to improve the planning process, for example by using UE measurements to create coverage maps and find areas with bad coverage.

As the implementation of SON functions in LTE networks evolves, it becomes clear that it is not possible to simply convert the current manual operation processes into SON operation processes. In manual operation, performance and fault analysis is carried out by the human operator based on individual experience and knowledge regarding the configuration changes to be made and their impact on system performance; the causes and actions can hence not easily be translated into automated functionality. Instead, advanced studies and simulations of the network behaviour are required for the design of SON functionality.

Final Remarks

The work within the SOCRATES project has focused on self-organisation in LTE networks. To some extent, the results from the studies are expected to be applicable also to other radio access technologies; e.g., the handover parameter optimisation findings could be used for 3G systems. Other examples are the X-map estimation results for positioning of mobiles and the cell outage compensation functionality for handling cell outages.

In addition, it should be noted that, although energy efficiency as such is considered a research area separate from SON, SON is well suited to enabling energy efficiency functionality. For example, turning radio units or even complete base stations on and off is something that is easily handled by appropriate SON functions. In fact, we believe that SON paves the way for energy efficiency, since both exhibit the same characteristics: they are automatic by nature and need to react to dynamic traffic fluctuations.

In addition to rendering important methods and solutions within self-organising networks, the work within the SOCRATES project has, as mentioned earlier, also revealed a number of challenges where further studies are needed. An overview of such challenges and areas of interest for future work is given in Chapter 7.


7 Future Work

The results within SOCRATES show not only the benefits of introducing SON functionality in radio networks, but also reveal areas that still need to be explored and a number of challenges for which solutions need to be developed in order to fully exploit the possibilities of self-organisation. Examples of such challenges are the quantification of OPEX/CAPEX savings, and the expression and conversion of operator high-level performance objectives into SON function specific policies. In the following subsections, these challenges, as well as further aspects and opportunities for SON in additional areas that were not explored within the project, are discussed.

How to measure gains and benefits

As shown in the results from SOCRATES, the possible gains when it comes to performance are quite straightforward to demonstrate. It is easy to compare the performance of a radio feature when a SON function is turned off with the result when it is turned on. For example, in the case of load balancing it is shown that with the SON function enabled the number of unsatisfied users is clearly reduced.

But, as mentioned in the conclusions section, it is not so evident when it comes to OPEX/CAPEX savings. As of today it has been hard to measure the benefit in terms of concrete OPEX/CAPEX numbers. Although we have elaborated some possible ways of calculating OPEX savings, this has not been done in a practical example. We believe that this has to be addressed in future research, focusing on methods and algorithms for calculating OPEX/CAPEX savings.

Policies

The results from SOCRATES show that a careful design of the operator policy is key when developing SON functions. The policy should be expressive enough to capture a wide spectrum of possible operator targets, be easily configurable in order to meet the specific targets of operators, and be verifiable using existing measurements. To this end, there is a trade-off between SON function complexity and the expressiveness of the policy. Network operators will have targets regarding the type of performance to be offered by their network. This may be described by high-level performance objectives. The high-level performance objectives generally cover the full scope of performance aspects and, hence, typically consider metrics related to user satisfaction expressed in terms of, e.g., coverage, capacity and service quality. A simplified example could be to maximise capacity while satisfying some minimum constraints regarding coverage and service quality, expressed in terms of, e.g., user throughput. The expression and conversion of the operator’s high-level performance objectives into SON function specific policies has not been fully addressed in SOCRATES and still requires conceptual work. Operators have a clear opinion on what their high-level or business-level performance objectives are, and 3GPP standardisation has already started the definition of policies for individual SON functions, but there is a gap between these two parts which needs to be filled.

Heterogeneous Networks

Future traffic demands call for a densification of networks, where the usage of micro, femto and pico cells will increase. This scenario may require optimisation and coordination between the different layers in order to enhance the network performance. The differences in cell size between low-power nodes and high-power nodes lead to different prerequisites for optimisation. Mobility parameter optimisation and aspects depending on cell size have to some extent already been considered in the self-optimisation of home eNodeB use case in SOCRATES, but additional work is needed. The time a UE stays in a cell should be considered in the handover decision, and hence also in the optimisation; in addition, preferences regarding the serving cell based on the properties of the cell need to be taken into account. Large differences in the transmit power of different cells may also cause an imbalance between uplink and downlink coverage. Further, heterogeneous deployment will lead to an increased number of cell-edge users close to interfering neighbour cells. These challenges need to be considered in the development of SON functionality in order to prepare for, and to some extent also enable, denser network deployment.

IRAT

The SOCRATES project has focused on LTE as the underlying radio access technology on which self-organisation is developed. Extending the scope to inter-RAT (IRAT) brings additional complexity and opportunity for self-organisation. IRAT self-organisation can not only exploit the differences in the characteristics of the RATs but also utilise the resources among the RATs more efficiently. Two RATs may have different characteristics in terms of coverage and capacity. For example, a GSM network deployed at 900 MHz may have higher speech coverage compared to an LTE network deployed at 2.6 GHz, whereas the LTE network can provide a much higher throughput.


This is rather pronounced when considering for example IRAT load balancing, where the load can be balanced between different RATs, based on load and RAT capability. Further, coverage and capacity optimisation can also take the IRAT view where instead of having overlapping coverage areas, the GSM network can serve the cell edge users whereas the LTE cell can serve users closer to the site and thus provide a higher capacity (since spectral efficiency is improved).

Another example is handover parameter optimisation, a function that so far has been studied within the LTE context in SOCRATES. It is, however, clear that LTE will be rolled out in locations where a higher capacity is needed and, therefore, islands of LTE networks with overlaying GSM and/or WCDMA umbrella networks are a likely scenario. IRAT handover thus becomes an essential functionality when users move in and out of the LTE coverage area, and this calls for functions that automatically tune IRAT handover parameters in order to optimise handover performance according to the particular characteristics of IRAT handovers.

The outcome of the SOCRATES project clearly shows the benefit of self-organisation for LTE networks. One of the next natural steps is to broaden the scope of self-organisation to cover other RATs in the mobile network ecosystem and to address IRAT functionalities that are essential to provide a seamless mobile broadband experience.

Intelligent and Pro-Active Fault Management

With the increasing complexity of mobile networks, a need arises for automated mechanisms for reliable root cause analysis of failures and performance degradations caused by hardware and software errors. Today, troubleshooting is carried out by highly qualified experts relying on alarms and performance indicators to find the causes of performance degradations. This can be a rather time-consuming process, which involves detailed performance analysis, correlation of data, and possibly drive tests. There are tools available for detecting performance degradations and finding the causes of problems. These tools are, however, rather rudimentary and require significant configuration, e.g., to set up thresholds for performance indicators. As such, the knowledge of experts needs to be encoded in such tools and, further, as the network evolves, these configurations need to be updated accordingly. This calls for a new paradigm where root cause analysis tools are highly automated and self-learning, and continuously adjust to prevailing conditions, such as topology, load, user behaviour, etc.

Service-Centric Management

The self-organisation paradigm has up to now focused mainly on aspects related to radio network performance and quality. Therefore, the impact of network optimisation on the end user has to a large degree been rather implicit and not explicitly managed and optimised. For example, network performance has been largely defined in terms of cell-edge throughput, coverage, delay, etc., which, although they at some level relate to the end-user experience, fail to explicitly capture the observed service performance. This calls for a new paradigm which puts the end-user experience in focus, thereby shifting the goal from network self-organisation to end-user performance self-organisation. The SON functions developed in SOCRATES, and in the community as a whole, have so far been network-oriented with the goal to optimise traditional objectives as discussed above.

Taking the service view brings new challenges into the picture, which are best illustrated through the following examples. The SOCRATES project has developed a method for optimising handover performance in terms of a weighted sum of the radio link failure ratio, the handover failure ratio, and the ping-pong ratio. These are typical and traditional network-oriented metrics, which relate to the network performance but fail to explicitly capture the impact on the end user, e.g., the impact of a radio link failure on a speech user or the impact of the ping-pong ratio on an FTP user. Further, power and tilt optimisation has previously sought to optimise network metrics related to coverage, capacity, etc., and does not reflect the end-user perspective, which is an intricate function of whether the user is covered and provided with a service (e.g., video call, file download) and of the quality of the provided service (e.g., call quality and throughput). A key question in this regard is whether it is more important to provide higher quality to a smaller number of users or lower quality to a larger number of users. This trade-off can be managed through a set of different facilitators that directly impact QoS, e.g., the scheduler and admission control, but potentially also other parameters, e.g., antenna tilt. Determining optimal settings for these functions depends on how the end user perceives the quality of services, which is a function of a set of key factors, e.g., bit rate, delay, and whether the service is provided at all.

Service centric SON brings new challenges to the arena with respect to understanding the end-user perception of service quality, but also from an architectural point of view. Traditionally, the ambition has been to decouple the radio network and services by providing an interface between the radio and core networks. Services are mapped to QCI classes, each having distinct requirements in terms of delay, etc., whereby the radio network assigns resources with the goal to fulfil these requirements. The same ambition should be pursued also for service-centric management, but the interface between the core and radio network should be enriched with additional factors that will to a larger degree capture the end-user perception, e.g., the end-user perception when being denied a particular service.

Business aspects

Another aspect that can be considered in connection with SON is the business aspect, here called revenue-aware networks. We see an exponential increase in data traffic with the emergence of data-intensive applications, such as P2P or on-demand video streaming. This increasing demand has been met by introducing mobile broadband and the development of LTE and LTE-Advanced. The introduction of heterogeneous networks involving massive deployment of micro and pico cells is yet another way to handle the data explosion. Although the introduction of Heterogeneous Networks (HetNets) is one feasible future scenario, the financial incentives of HetNets are not fully understood, as the deployment of micro and pico cells brings increasing costs related to hardware, site leases, energy consumption, etc.

There are several ways for operators to maintain their competitive edge, for example, by providing new services, reducing OPEX through process improvements and SON, but also through CAPEX reductions. Several studies show that CAPEX represents a prominent part of an operator's expenditures. A viable future scenario is to make greater use of the existing infrastructure through intelligent radio resource management that takes the user's view on service quality into account (see previous section) and distributes resources with the aim of improving the perceived quality for the worst-case users at the cost of a small quality reduction for the best users. This may postpone investments in new base stations, thus reducing CAPEX.

Including revenue income in radio resource management takes this approach to handling the data explosion one step further. It involves understanding the operator tariff and estimating service resource demands, and prioritising and adjusting QoS not only according to perceived user quality, but also according to service income, with the aim of maximising profits.

To what degree the network needs to be revenue-aware depends on various factors. One key factor is the willingness of the operators to continue evolving and expanding their networks. Clearly, if network expansion is not a major issue as a result of, e.g., relatively low CAPEX and OPEX related to network evolution, then the motivation for a revenue-aware network becomes less prominent. A second key aspect is whether the majority of the services will be over-the-top (OTT) and outside the reach of the operator, or whether operators will manage them. If the majority of the services are OTT, then there are fewer incentives for the operator to embrace a revenue-aware approach.

 

 

 


8 Detailed Description of Individual SON Functions

A key contribution of the SOCRATES project has been the development of algorithms and methods for self-organisation. In this chapter, the focus is on the stand-alone SON functionalities for a selection of use cases:

• Self-optimisation: admission control parameter optimisation, packet scheduling parameter optimisation, handover parameter optimisation, load balancing, interference coordination and home eNodeB

• Self-healing: cell outage management and X-map estimation

• Self-configuration: automatic generation of initial parameters for eNodeB insertion

Note that within the SOCRATES project, the use case X-map estimation has been classified under self-healing. Strictly speaking, however, X-map estimation is not a typical SON use case, but rather a support function for different SON use cases.

Short high-level descriptions and conclusions per use case were already given in Chapter 3. In this chapter, extended abstracts of the work performed and the results obtained for each use case are given. Each use case presentation in this chapter follows a similar structure. First a description of the principal objective of the use case is given. Then the relevant KPIs and control parameters are described, after which the main results of the performed observability and controllability study are presented. Next the basic idea behind the developed algorithm(s) is explained, the most important results of the use case are presented and conclusions are drawn.

8.1 Admission Control Parameter Optimisation

8.1.1 Description of the Use Case

The admission control algorithm is the algorithm that decides whether a call request will be admitted to a cell or not. It bases its decisions on the availability of the resources needed to guarantee the required QoS of the new call, while maintaining the QoS of the already accepted calls. The admission control algorithm provides users with access to the services offered by the network, it proactively attempts to prevent QoS degradation of the accepted calls, and it aims to prevent unnecessary rejection of handover requests and fresh calls, so as not to limit the revenue of the network provider. Because the rejection of an ongoing call that is handed over is considered more annoying to the user than the rejection of a fresh call, admission control algorithms will typically prioritise handover (HO) calls over fresh calls.

The admission control parameter optimisation function has the objective to auto-tune the parameters of the admission control algorithm in response to the many uncontrollable and inevitable uncertainties (changes in cell capacity, in radio conditions, in offered traffic load, in offered traffic mix, in user mobility, etc.), which will influence the efficiency of the admission control algorithm. By this auto-tuning of the parameters, the number of calls admitted to the network should be maximised, while the QoS of the admitted calls should still be met with significant likelihood.

The principal objective for this use case is the development and evaluation of an optimisation algorithm (SON algorithm) for admission control. Results for this use case are obtained using simulations, in which only the downlink direction is considered.

8.1.2 KPIs and Control Parameters

In order to measure the admission control performance, four metrics (KPIs) are considered. These metrics are divided into two types: grade-of-service (GoS) metrics and quality-of-service (QoS) metrics.

Grade-of-service metrics give information on the performance associated with call accessibility. They are metrics on the rejection or acceptance of calls by the admission control algorithm. The considered GoS metrics are:

• The rejection ratio of the fresh calls: the fraction of fresh calls that is rejected by the admission control algorithm.

• The rejection ratio of the handover calls: the fraction of handover calls that is rejected by the admission control algorithm.

Quality-of-service metrics give information on the performance associated with the traffic that is handled by the network, i.e., on the traffic that is generated by the calls that are accepted into the network by the admission control algorithm. Because real-time and non-real-time calls have different performance requirements, a different QoS metric is considered for real-time and for non-real-time calls:

• The traffic loss ratio: the fraction of traffic of real-time calls that is dropped. When the time the traffic has spent in its eNodeB buffer exceeds the maximum allowed delay for that traffic type, it will be dropped from that buffer. Only this cause of traffic loss is considered.

• The low call throughput ratio: the fraction of non-real-time calls with a call throughput smaller than the minimum call throughput requested to the packet scheduler.
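For illustration, the following Python sketch shows how these four metrics could be computed from simple per-interval counters; the counter names are hypothetical and not part of the SOCRATES simulator.

```python
def admission_control_kpis(counters):
    """Compute the four GoS/QoS metrics from simple counters (sketch).

    counters is a dict with (hypothetical) keys:
      'fresh_rejected', 'fresh_offered', 'ho_rejected', 'ho_offered',
      'rt_bits_dropped', 'rt_bits_offered',
      'nrt_calls_below_target', 'nrt_calls_total'
    """
    c = counters
    return {
        # GoS: rejection ratios of fresh and handover calls
        'gos_fresh': c['fresh_rejected'] / max(c['fresh_offered'], 1),
        'gos_ho': c['ho_rejected'] / max(c['ho_offered'], 1),
        # QoS: fraction of real-time traffic dropped because its delay budget expired
        'qos_rt': c['rt_bits_dropped'] / max(c['rt_bits_offered'], 1),
        # QoS: fraction of non-real-time calls below the minimum requested throughput
        'qos_nrt': c['nrt_calls_below_target'] / max(c['nrt_calls_total'], 1),
    }
```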

As reference admission control algorithm, an algorithm which prioritises handover calls over fresh calls using a parameter ThHO is considered. This parameter (whose value always lies between 0 and 1) determines the fraction of the cell capacity that is reserved for handover calls. When the fraction of the cell capacity that is in use rises above this threshold, only handover calls will be admitted to the cell and the algorithm will reject fresh calls for as long as the condition holds. A flowchart of the algorithm is shown in Figure 21. To estimate the cell capacity C(k), the method described in [34] is used.

Figure 21: Flowchart of the reference admission control algorithm. Notations: t: time of call arrival; C(k): most recent estimate of the cell capacity; creq: required capacity of the arriving call; c*(t): required capacity of the already accepted calls

The key control parameter of the reference admission control algorithm described above is the parameter ThHO.
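As an illustration of the decision logic sketched in Figure 21, the following Python fragment gives one plausible reading of the reference admission control algorithm; the function name and the way C(k), creq and c*(t) are passed in are assumptions made for this example only.

```python
def admit_call(is_handover, c_req, c_used, cell_capacity, th_ho):
    """Reference admission control decision (illustrative sketch).

    is_handover   -- True for a handover call, False for a fresh call
    c_req         -- required capacity of the arriving call (creq)
    c_used        -- required capacity of the already accepted calls (c*(t))
    cell_capacity -- most recent estimate of the cell capacity (C(k))
    th_ho         -- ThHO, utilisation level above which only HO calls are admitted
    """
    if is_handover:
        # Handover calls may use the full estimated cell capacity.
        return c_used + c_req <= cell_capacity
    # Fresh calls are only admitted while utilisation stays below ThHO * C(k).
    return c_used + c_req <= th_ho * cell_capacity
```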

8.1.3 Observability and Controllability

In order to determine how an admission control optimisation algorithm should auto-tune the ThHO parameter of the reference admission control algorithm, a sensitivity analysis was performed on the ThHO parameter. The sensitivity analysis was performed using a dynamic system-level simulator featuring a single cell. Calls arrive at a certain rate, according to a Poisson process. A certain fraction of these calls are handover calls; all other calls are fresh calls. As mobility model the random waypoint model is used. UEs generating a handover call start at a random location on the border of the cell, while UEs generating a fresh call start at a random location in the cell. Three types of calls are considered: voice and video calls (real-time calls) and web calls (non-real-time calls).

In the sensitivity analysis, the performance of the reference admission control algorithm for various settings of the ThHO parameter was examined for a variety of call arrival rates and fractions of handover calls. Figure 22 through Figure 25 show the results for a scenario with 30% handover calls and a call arrival rate which increases from 0.6 to 1.3 calls/s, for various values of ThHO. From Figure 22 it is observed that if the call arrival rate is large, ThHO has quite some impact on the rejection ratio of the handover calls; the more ThHO is lowered, the fewer handover calls are rejected. For a small call arrival rate, the influence of ThHO on the rejection ratio of the handover calls is minor. However, as Figure 23 shows, ThHO has a significant impact on the rejection ratio of the fresh calls; the lower ThHO, the higher the rejection ratio of the fresh calls. The impact of ThHO on the rejection ratio of the fresh calls grows if the call arrival rate grows, but is already fairly large for the smallest call arrival rates considered. The QoS results show the same trends as the results on the rejection ratio of the handover calls. As Figure 24 shows, for small call arrival rates only very few web calls get a call throughput below the minimum target of 250 kbit/s, and the exact setting of ThHO does not matter much. The larger the call arrival rate, the more important ThHO becomes for the QoS of the non-real-time calls; a lower ThHO results in a better QoS. The same holds for the QoS of the real-time calls (see Figure 25). However, if the real-time calls experience QoS deterioration, it is much smaller than for the non-real-time calls, because the packet scheduler considered in the simulations takes the packet due dates of the real-time packets into account, so it is the non-real-time traffic that suffers first during a temporary overload situation.


Figure 22: The rejection ratio of the handover calls, for various call arrival rates and various values of ThHO

Figure 23: The rejection ratio of the fresh calls, for various call arrival rates and various values of ThHO

Figure 24 through Figure 27 show results for various values of ThHO and for an increasing fraction of handover calls, in a scenario with a call arrival rate of 0.8 calls/s. Figure 24 illustrates that for small fractions of handover calls, the value of ThHO does not have much influence on the rejection ratio of the handover calls, and Figure 26 shows that the same holds for the fraction of web calls that do not get their requested call throughput. With an increasing fraction of handover calls, ThHO soon becomes important for these performance metrics. From 70%–80% handover calls onwards, the value chosen for ThHO turns out to be very important. Again, ThHO has quite some impact on the rejection ratio of the fresh calls, as is illustrated in Figure 25: a low ThHO results in a high rejection ratio of the fresh calls. Figure 27 shows that the influence of ThHO on the traffic loss ratio of the real-time traffic is rather minor.

Figure 24: The rejection ratio of the handover calls, for various fractions of handover calls and various values of ThHO

Figure 25: The rejection ratio of the fresh calls, for various fractions of handover calls and various values of ThHO


Figure 26: Fraction of non-real-time calls with a call throughput smaller than 250 kbit/s, for various fractions of handover calls and various values of ThHO

Figure 27: Traffic loss ratio of the real-time traffic, for various fractions of handover calls and various values of ThHO

8.1.4 Algorithmic Approach

The results presented in the previous section illustrate that changes in the environment, and consequently in the measured performance, might require opposite adaptations of ThHO, depending on which performance measure is considered. A compromise on this trade-off should be made by the network operator, and the optimisation algorithm should take the chosen policy into account. In this work, the following operator policy is considered, where the items are ranked in order of priority: (i) the first aim should be to guarantee the QoS of the already accepted calls; (ii) the second aim should be to have a low rejection ratio of the handover calls; (iii) the third aim should be to have a low rejection ratio of the fresh calls.

Figure 28 shows the self-optimising algorithm for ThHO that has been defined. At regular time instants t = kΔ, with k an integer and Δ a time interval (typically Δ = 1 minute will be considered), the following measurements are collected: M_QoS_RT(k), M_QoS_NRT(k), M_GoS_HO(k), and M_GoS_fresh(k), defined respectively as the traffic loss ratio of the real-time traffic, the fraction of non-real-time calls for which the call throughput is smaller than the minimum call throughput requested to the packet scheduler, the rejection ratio of the handover calls, and the rejection ratio of the fresh calls, all measured in the time interval [kΔ, (k+1)Δ). Note that all these measurements result in a value between 0 and 1. As there will be fluctuations in the measurements, they are smoothed using exponential smoothing, controlled by a smoothing parameter αSON. The smoothed measurements are denoted by QoS_RT(k), QoS_NRT(k), GoS_HO(k), and GoS_fresh(k), respectively.

θ_QoS_RT, θ_QoS_NRT, θ_GoS_HO and θ_GoS_fresh are threshold values to which the smoothed measurements are compared; when they are exceeded, a change of ThHO might happen. The threshold values should be chosen between 0 and 1. Note from Figure 28, lines 1-3, that ThHO will be lowered if the smoothed measurement of the QoS of the real-time calls, of the QoS of the non-real-time calls, or of the rejection ratio of the handover calls exceeds its corresponding threshold, i.e., if a bad QoS or a bad rejection ratio of the handover calls is experienced. If all these values are lower than 90% of their corresponding thresholds, and the rejection ratio of the fresh calls is larger than θ_GoS_fresh, then ThHO will be increased (lines 5-7). The motivation for increasing ThHO comes from a too high rejection ratio of the fresh calls. However, by allowing such an increase only if the QoS of the ongoing calls and the rejection ratio of the handover calls are below 90% of their thresholds, and by taking smaller steps for increasing ThHO than for decreasing it, the policy to first aim at a good QoS and a low rejection ratio of the handover calls before targeting a low rejection ratio of the fresh calls is pursued.


1  if ( QoS_RT(k) > θ_QoS_RT OR QoS_NRT(k) > θ_QoS_NRT
2       OR GoS_HO(k) > θ_GoS_HO )
3      ThHO = max(ThHO - 0.1, 0);
4  else
5      if ( QoS_RT(k) ≤ 0.9·θ_QoS_RT AND QoS_NRT(k) ≤ 0.9·θ_QoS_NRT
6           AND GoS_HO(k) ≤ 0.9·θ_GoS_HO AND GoS_fresh(k) > θ_GoS_fresh )
7          ThHO = min(ThHO + 0.05, 1);
8      end
9  end

Figure 28: Pseudo-code for the self-optimising algorithm for ThHO
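For illustration, the pseudo-code of Figure 28 can be turned into the following Python sketch, which also includes the exponential smoothing of the raw measurements; the function and variable names, as well as the exact smoothing convention for αSON, are assumptions made for this example.

```python
def update_th_ho(th_ho, meas, smoothed, thresholds, alpha_son=0.9):
    """One optimisation step for ThHO (illustrative sketch of Figure 28).

    meas       -- dict with raw measurements for interval k:
                  'qos_rt', 'qos_nrt', 'gos_ho', 'gos_fresh' (all in [0, 1])
    smoothed   -- dict with the previously smoothed values (same keys)
    thresholds -- dict with the thresholds theta for the same keys
    Returns the new ThHO and the updated smoothed measurements.
    """
    # Exponential smoothing of the raw measurements (convention assumed here:
    # a higher alpha_son gives more weight to the history).
    for key in ('qos_rt', 'qos_nrt', 'gos_ho', 'gos_fresh'):
        smoothed[key] = alpha_son * smoothed[key] + (1 - alpha_son) * meas[key]

    if (smoothed['qos_rt'] > thresholds['qos_rt']
            or smoothed['qos_nrt'] > thresholds['qos_nrt']
            or smoothed['gos_ho'] > thresholds['gos_ho']):
        # Bad QoS or too many rejected handover calls: lower ThHO (large step).
        th_ho = max(th_ho - 0.1, 0.0)
    elif (smoothed['qos_rt'] <= 0.9 * thresholds['qos_rt']
            and smoothed['qos_nrt'] <= 0.9 * thresholds['qos_nrt']
            and smoothed['gos_ho'] <= 0.9 * thresholds['gos_ho']
            and smoothed['gos_fresh'] > thresholds['gos_fresh']):
        # Higher-priority metrics comfortably below target, but too many
        # fresh calls rejected: cautiously raise ThHO (small step).
        th_ho = min(th_ho + 0.05, 1.0)
    return th_ho, smoothed
```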

8.1.5 Results

In this section, results obtained with the self-optimising algorithm for ThHO presented in the previous section are shown. Figure 29 through Figure 32 show results obtained for a scenario where, during the simulation, a change of the call arrival rate and the fraction of handover calls occurs. Before the change, calls are generated at a rate of 0.6 calls/s, and 30% of the calls are considered to be handover calls. After the change, the call arrival rate is increased to 1 call/s and 60% of the calls are considered to be handover calls. The figures compare results obtained with the self-optimising algorithm for ThHO (SON), and results obtained with the static algorithm (no-SON). From these figures it is seen that:

• Before the change (dark blue bars), with the no-SON algorithm (eight right-most groups of bars), a good QoS for the ongoing calls (Figure 31 and Figure 32) and a low rejection ratio of the handover calls (Figure 29) are obtained for all fixed settings of ThHO. The SON cases (two left-most groups of bars) perform equally well when considering these performance measures before the change. Also with respect to the rejection ratio of the fresh calls (Figure 30), the SON cases result in good performance before the change, while for the no-SON cases this is only the case for high settings of the ThHO value (ThHO ≥ 0.8). Putting these results together, it is concluded that with the static algorithm, before the change the best performance is obtained with a high ThHO value. The SON algorithm performs equally well or slightly worse (rejection ratio of the fresh calls) than these best no-SON cases. The reason that SON sometimes performs slightly worse is that, before the SON algorithm changes ThHO, some bad performance first needs to be measured, and this bad performance is of course also included in the results.

• After the change (light green bars), with the no-SON algorithm, the best QoS and rejection ratio of the handover calls are obtained with ThHO ≤ 0.6. Again, the SON algorithm succeeds in obtaining equally good or slightly worse results (for the same reason as explained before) for these performance measures than these best no-SON cases. Compared to the no-SON algorithm with ThHO ≥ 0.7, the SON algorithm gives better results after the change for the QoS and the rejection ratio of the handover calls. When looking at the results for the rejection ratio of the fresh calls after the change, it is seen that the no-SON cases with a high setting of ThHO give the best results. However, it was exactly these no-SON cases that gave the worst results for the QoS and the rejection ratio of the handover calls after the change. The no-SON cases with smaller ThHO settings, which resulted in the best QoS and rejection ratio of the handover calls, give the worst results for the rejection ratio of the fresh calls. The SON algorithm gives better results for the rejection ratio of the fresh calls than the no-SON cases with ThHO ≤ 0.6, and worse results than the no-SON cases with higher ThHO values. The results illustrate that the SON algorithm pursues a good QoS of the ongoing calls and a low rejection ratio of the handover calls, which is impossible to achieve with the static algorithm with a high ThHO value. The no-SON algorithm with a lower ThHO value also pursues the operator policy, but these no-SON cases perform worse than the SON algorithm, both before and after the change, with respect to the rejection ratio of the fresh calls.

SON results have been collected with different smoothing values (αSON = 0.75 and 0.90). The results obtained with both values of αSON turn out to be rather similar; the main difference is noticed in the results for the rejection ratio of the fresh calls after the change. The higher αSON value of 0.90 seems better in this case.


Figure 29: The rejection ratio of the handover calls, with and without self-optimisation of ThHO

Figure 30: The rejection ratio of the fresh calls, with and without self-optimisation of ThHO

Figure 31: The fraction of the non-real-time calls with a call throughput smaller than 250 kbit/s, with and without self-optimisation of ThHO

Figure 32: The traffic loss ratio of the real-time traffic, with and without self-optimisation of ThHO


8.1.6 Conclusions

The presented results illustrate that, overall, the defined admission control optimisation algorithm complies better with the defined operator policy, both before and after the change, than the static algorithm with a fixed ThHO value. The reason for this is that a fixed setting of ThHO which results in good compliance of the measured performance with the operator policy when the system is in a specific state (i.e., a certain mix of handover/fresh calls and a certain call arrival rate) is likely to lead to poor results when the system is in another state. With the optimisation algorithm, ThHO can evolve to a new, good value after changes in the incoming traffic, because this algorithm auto-tunes ThHO based on performance measurements.

8.2 Packet Scheduling Parameter Optimisation

8.2.1 Description of the Use Case

One of the key radio resource management mechanisms in 3G+ mobile networks is the packet scheduler, which coordinates the access to shared channel resources. In OFDMA-based LTE systems this coordination considers two distinct dimensions, viz. the time dimension (allocation of time frames) and the frequency dimension (allocation of subcarriers). The main challenge in designing packet schedulers is to optimise resource efficiency (e.g., by exploiting multi-user and frequency diversity), while satisfying the users' QoS requirements and achieving some degree of fairness. Many packet scheduling schemes for mobile access networks have been proposed and implemented, of which the so-called Proportional Fair (PF) scheduler is probably the best known, see, e.g., [29][30]. It explicitly addresses the trade-off between efficiency, QoS and fairness, which can be tuned by a single parameter α, 0 ≤ α ≤ 1, where α is the exponential smoothing parameter used in the user-specific averaging of the experienced data rates.

An important issue is how packet scheduling performance depends on actual system and traffic characteristics regarding various user, traffic and environment aspects, e.g., shadowing and fast fading, mobility, file size distribution, traffic mix, etc. A particularly relevant question in this light is how the 'optimal' setting of the scheduling parameters has to be adapted when one or more of these system or traffic conditions change over time.

The first aim of studying the use case is to get more insight into the sensitivity of the optimal parameter setting for packet scheduling in LTE. For that purpose we analyse the optimal parameter settings of a particular ‘reference’ (downlink) packet scheduling algorithm under different traffic and environment conditions. The reference packet scheduler, containing elements of proportional fairness and packet urgency, supports mixes of real-time and non real-time traffic. Besides the parameter α of the ‘PF-part’ of the scheduler it contains a second parameter (ζ) that can be used to tune the relative importance of the ‘proportional fairness’ and ‘packet urgency’ elements in the scheduling decision. As will be indicated, the observed sensitivity is not overly large, which renders the (otherwise) second aim of this study, viz. development of self-optimisation algorithms, unnecessary.

8.2.2 KPIs and Control Parameters

We first describe the considered reference packet scheduler, which was inspired by algorithms encountered in the literature, in particular those in [31][32].

The scheduling principles proposed in these papers have been extended from a pure time-domain focus to cover both time- and frequency-domain scheduling, as is relevant when applied in the LTE context. The scheduler supports both real-time and non real-time services, and in that light contains elements of proportional fairness (aiming at resource efficiency and fairness) and packet urgency (for adequate support of delay-sensitive services). The reference packet scheduler assigns in each TTI (time domain: 1 ms granularity) a priority level for every subchannel (frequency domain: 180 kHz granularity) to the head of line (HoL) packet of every non-empty user buffer, taking into account the potential bit rate at which the user can be served on the different subchannels (based on channel quality feedback from the user), as well as the experienced and maximum tolerable delay of the packet.

The scheduler comprises two steps. In the first step, priority levels are calculated for each combination of user and subchannel. In the second step these priority levels are applied in the actual assignment of subchannels to users. At time t, the priority level Pi,c(t) assigned to user i’s HoL packet associated with subchannel c is calculated according to the formula

Pi,c(t) = (channel adaptivity factor) · (packet urgency factor)        (1)


The notation is explained in Table 4. In the above formula, the first component is the so-called 'channel adaptivity' factor and reflects a proportional fairness scheduling principle. Regarding this component, at TTI t, the smoothed average rate R̄i(t) is updated for each user i according to the formula

R̄i(t) = (1 − α) · R̄i(t − 1) + α · Ri(t),

where R̄i(t) is initialised at the aggregate bit rate at which user i can potentially be served at the time of call creation, assuming all subchannels are available. The second component of Pi,c(t) is the packet 'urgency factor'. The parameter ζ ≥ 0 allows setting the relative importance of the channel adaptivity (i.e., efficiency) and the packet urgency components.

Table 4: Notation

Ri,c,potential(t) – The potential bit rate at which user i can be served on subchannel c at TTI t. These rates are based on a discretisation of the SINR-to-rate mapping presented in [33].

R̄i(t) – The exponentially smoothed average bit rate at which user i has been served, aggregated over the subchannels that have been assigned to user i, at TTI t.

Ri(t) – The bit rate at which user i is served, aggregated over all subchannels assigned to user i, at TTI t.

α – Exponential smoothing parameter, used for the smoothing of R̄i(t).

Ti – Maximum allowed delay for a packet associated with user i (Ti = ∞ if user i is a non-real-time user).

Wi(t) – Delay experienced by the HoL packet of user i at TTI t, i.e., the present time minus the packet's arrival time in the buffer. If Wi(t) > Ti the packet is dropped (real-time sessions only).

ζ – Scheduling parameter that affects the relative importance of the 'channel adaptivity' and 'packet urgency' components of the priority level function.

Rservice – Service-specific requested bit rate, i.e., the minimum throughput target for non-real-time sessions or the fixed bit rate for real-time sessions. The purpose of this 'correction factor' is to prevent the scheduler from giving undesirably high preference to real-time sessions, which may typically experience lower throughputs due to their limited source rate, rather than due to any unfairness in the scheduler.

We have developed a heuristic procedure for the assignment of subchannels, based on these priority levels. Due to a lack of space we reproduce here only the main principles (details can be found in [54]), i.e., (i) the assignment of a given subchannel to the user with the highest priority on that subchannel; and (ii) in order to comply with the uniformity restriction that multiple subchannels assigned to a given user must use the same MCS (modulation and coding scheme), all subchannels assigned to a given user are jointly considered and a common MCS is selected which maximises the aggregate bit rate for that user; subchannels that are potentially released in this step can be reassigned to other users.
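The following Python sketch illustrates the two-step structure described above in a strongly simplified form: it uses a plain proportional-fair priority (potential rate divided by the smoothed served rate) as a stand-in for the full priority function of Equation (1), and it omits the packet urgency term and the common-MCS restriction; all names are illustrative.

```python
import collections

def schedule_tti(users, subchannels, alpha=0.01):
    """One TTI of a simplified PF-style scheduler (illustrative sketch).

    users is a dict: user_id -> {'avg_rate': float,               # smoothed rate R̄_i(t-1)
                                 'potential': {ch: rate, ...}}    # R_{i,c,potential}(t)
    Returns the assignment {channel: user_id} and updates the smoothed rates.
    """
    assignment = {}
    for ch in subchannels:
        # Step 1: per-channel priority = potential rate / smoothed served rate.
        best = max(users, key=lambda u: users[u]['potential'].get(ch, 0.0)
                                         / max(users[u]['avg_rate'], 1e-9))
        assignment[ch] = best

    # Step 2: update the exponentially smoothed rate R̄_i(t) for every user.
    served = collections.defaultdict(float)
    for ch, u in assignment.items():
        served[u] += users[u]['potential'].get(ch, 0.0)
    for u in users:
        users[u]['avg_rate'] = (1 - alpha) * users[u]['avg_rate'] + alpha * served[u]
    return assignment
```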

The key control parameters of the described packet scheduler are α and ζ, and it is these scheduling parameters that will be addressed in the sensitivity study.

The Key Performance Indicators (KPIs) of interest are the 10th percentile of the call throughput at the cell edge for non-real-time calls (file download), and the 90th percentile of the packet loss at the cell edge for real-time calls (video telephony; a packet is considered lost if it is not delivered before a deadline, which is directly based on the maximum allowable packet delay). Besides these service quality metrics, we also consider cell capacity, defined as the maximum supportable cell load for which service-specific performance targets can still be guaranteed. For the non-real-time (data) service we require that the 10th cell edge throughput percentile is higher than 500 kbit/s, while for the real-time (video) service we require that the 90th percentile of the cell edge packet loss probability is lower than 5%.

8.2.3 Observability and Controllability

In this section we confine ourselves to the presentation of a sensitivity study, in order to assess to what extent the optimal packet scheduling parameters depend on one or more of the considered traffic or environment characteristics.


A dynamic system-level simulator has been developed for studying the packet scheduler in the LTE OFDMA downlink. These are the main characteristics of the simulator:

• We consider a hexagonal layout of twelve sectorised sites, each comprising three sectors, with an inter-site distance of 2.23 km. A bandwidth of 5 MHz is assumed.

• The applied propagation model comprises three parts: distance-dependent path loss (COST 231-Hata), shadowing and multipath fading. Shadowing is modelled with a standard deviation σ (default setting σ = 9.4 dB), an intra-site correlation of 1 and an inter-site correlation of 0.5. The default multipath environment is a PedestrianA model with a fading velocity of 3 km/h.

• Non-persistent (finite) sessions of two distinct services (file transfer and video telephony) arrive according to Poisson processes. Data traffic is mainly characterised by the arrival rate, the file size (lognormally distributed) and its elastic nature. The default average file size is 500 kbit and the coefficient of variation is 1. The reference scenario is a data-only scenario. Video telephony sessions are characterised by the session arrival rate, a fixed bit rate of 110 kbit/s, an exponentially distributed duration with a mean of 10 s and a packet delay budget of 150 ms. Packets that cannot be delivered within this delay budget are dropped by the base station and hence contribute to the experienced packet loss. Upon generation of a new session, the location of the corresponding user is sampled either from a uniform distribution (default setting) or from hot spots situated half-way between the sites and the cell edges (a sampling sketch is given below).
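As an illustration of the traffic model described above, the following Python sketch samples a single session; the parameterisation of the lognormal file size distribution from its mean and coefficient of variation is an assumption of the example, not taken from the simulator code.

```python
import math
import random

def sample_session(load_share_video=0.0):
    """Draw one session according to the traffic model described above (sketch).

    load_share_video -- probability that the session is a video-telephony session
                        (0.0 reproduces the data-only reference scenario).
    """
    if random.random() < load_share_video:
        # Video telephony: fixed 110 kbit/s, exponential duration (mean 10 s),
        # packet delay budget of 150 ms.
        return {'service': 'video', 'bit_rate_kbps': 110,
                'duration_s': random.expovariate(1 / 10.0),
                'delay_budget_ms': 150}
    # File transfer: lognormal file size with mean 500 kbit and coefficient of
    # variation 1 (mu and sigma of the underlying normal derived from these targets).
    mean, cov = 500.0, 1.0
    sigma2 = math.log(1 + cov ** 2)
    mu = math.log(mean) - sigma2 / 2
    return {'service': 'data',
            'file_size_kbit': random.lognormvariate(mu, math.sqrt(sigma2))}
```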

We have carried out a thorough sensitivity analysis to assess to what extent the optimal settings of the scheduling parameters, α and ζ, depend on the following aspects:

• Data traffic characteristics – The average file size and the coefficient of variation of the file size distribution are varied;

• Multipath fading environment – We compare scenarios without multipath fading, a PedestrianA channel model with a fading velocity of 3 km/h and a VehicularA model at 30 km/h;

• Variability of the average signal strengths among calls – This variability depends on the considered spatial user distribution (see above) and the assumed shadowing parameter, for which values of σ equal to 0, 9.4 and 14 dB will be considered;

• Service mix – We consider two types of traffic: file transfer and video telephony. The relative fraction of the offered traffic load (in kbit/s) for the video and data services is varied.

The reference scenario is a data-only scenario. Hence the packet urgency component of the scheduler equals 1 because the packet delay budget Ti is infinite for packets of non real-time services. Therefore we will only study the sensitivity of the parameter α.

In Figure 33a the 10th percentile of the call throughput versus cell load is shown for the reference scheduler and three α values. As reference, the results for a maximum SINR and a round robin scheduler are also plotted. For high loads, the reference scheduler performs better than the maximum SINR scheduler. This scheduler is the most efficient in terms of spectrum use, but it is not fair. The round robin scheduler performs worst. The reason is that the scheduler is not channel-aware and hence not very efficient.

In the remainder of this section we consider an operator policy which tries to guarantee a minimum performance for the worst calls, measured in terms of the 10th percentile of the call throughput at the cell edge, equal to 500 kbit/s. We will concentrate on the maximum supportable cell load for which this performance target can still be guaranteed. Figure 33b illustrates the maximum supportable load for the reference scenario for three different α values as well as for the maximum SINR and round robin schedulers. The setting α = 0.01 gives the best results.

Consider now the impact of data traffic characteristics. Figure 33c shows the maximum supportable cell load for different α values, for three average file sizes (500 kbit corresponds to the reference scenario, 50 and 5000 kbit). For small file sizes (50 kbit), α = 0.1 is the optimal setting. However the results are not shown because the performance target of 500 kbit/s is not reached, even for very low loads. For large file sizes, α = 0.001 is the optimal setting. The larger the file size, the smaller the optimal α value. The explanation is that a small α means a larger window size in the average exponential smoothing (more weight is given to the history). Hence, for large file sizes, where more history can be taken into account to achieve fairness, a lower α value than in the reference scenario is optimal.


Figure 33: Analysis of packet scheduling parameters

In Figure 33d the impact of variations in the coefficient of variation of the file size is shown. When the coefficient of variation equals 0, 1 (reference scenario) or 2, the optimal value of α is 0.01. For a coefficient of variation of 4, a larger α value is optimal. This can be explained as follows. When the coefficient of variation equals 4, most files will be small and there will be few large files. For the exponentially smoothed average, a large window size (α close to zero) tries to achieve fairness over a period of time which is much larger than the file download time. Therefore a smaller window size (α value closer to 1 instead of close to 0) is needed to achieve the required fairness in the case with many small files (i.e., a large coefficient of variation). Simulation results show that for a coefficient of variation of 4 the optimal α value is 0.2.

Consider now the impact of the multipath fading environment (see Figure 33e). For both the reference scenario (PedestrianA, 3 km/h) and the scenario without multipath fading, α = 0.01 is the optimal setting. A remarkable result in the scenario with a VehicularA channel model and a fading velocity of 30 km/h is that the maximum supportable load is much higher than in the reference scenario. This can be explained as follows. A higher velocity may affect the throughput results in two ways. The positive effect is that there is more variability of the channel conditions per time unit. This allows calls to experience good channel conditions more often. For small to medium-sized flows this may result in lower download times. The negative effect is due to estimation errors on the SINR; the higher the velocity, the more significant the error. However, as the simulation model does not take this estimation error into account, only the positive effect remains. As a consequence, the supportable cell load is higher than in the reference scenario.

Consider the impact of the differences in the average signal strength among calls. Figure 33f shows the sensitivity w.r.t. differences in the average signal strength. 'Few' differences corresponds to the case with users situated around a hot spot centred half-way between the site and the cell edge and no shadowing. 'Medium' differences refers to the reference scenario. The label 'Many' differences is used for the case with uniformly distributed users and shadowing with σ = 14 dB. When there are few differences in average signal strength among calls, those differences are purely due to multipath fading. Then there are no calls which structurally experience worse channel conditions than the others. Therefore, in that case fairness is not really an issue and the maximum SINR scheduler gives the best results. When considering the reference scheduler, α = 0.001 is optimal. When there is much difference in average signal strength between users, α = 0.1 performs slightly better than other α values. High variability of the signal strength results in high variability in download times; in this case the optimal α is a value closer to one (i.e., fairness is achieved on a small time scale).

So far we have considered unilateral variations with respect to the reference scenario. Simulation results show that by combining several variations with respect to the reference scenario, the performance in terms of maximum supported load becomes a little more sensitive to the choice of α. For example, in the scenario with large file sizes (5000 kbit), a coefficient of variation of the file size of 0, VehicularA with a fading velocity of 30 km/h, and users placed around hot spots half-way between the sites and the cell edges, the optimal α value is 0.001. But even in this extreme scenario, the maximum supportable load with the optimal α value is just 2% higher than with α = 0.01. We can conclude that in data-only scenarios α = 0.01 is overall a good choice and that this setting is fairly robust to the studied variations.

Consider the impact of the service mix. In these simulations we consider both non-real-time and real-time services: file download and video telephony, respectively. In these scenarios both scheduler parameters α and ζ are relevant. In order to limit the number of scenarios, we consider a fixed α equal to 0.01, based on the findings for the data-only scenario, and concentrate on the sensitivity of the parameter ζ. The performance of video telephony calls is measured in terms of the 90th percentile of the packet loss for cell edge calls. The performance target for these calls is 5%. For data calls the same performance target is used as before. We vary the percentage of video telephony load: 25%, 50%, 75% and 100%. Figure 33g and Figure 33h illustrate the maximum supportable load for data downloads and video telephony for service mixes of 25% and 75% video telephony, respectively. The maximum supported cell load shown on the vertical axis is the maximum aggregate (video plus data) load that can be supported, from the perspective of satisfying either the video or the data service's quality target (two distinct sets of bars). For relatively low video telephony loads (25 and 50%), the video performance improves as ζ increases, while there is little performance degradation for data calls. For relatively high video loads (75 and 100%), the optimal ζ is a value between 0.25 and 1. This can be explained as follows. Higher ζ values increase the relative importance of the packet urgency component of the scheduler. If this component becomes too dominant, fewer scheduling decisions are actually channel-adaptive, which in turn makes the system less efficient. This explains why, for relatively high video loads, the maximum supportable load for video decreases as ζ increases.

Considering the two services, the maximum supportable load for the service mix is determined by the most restrictive service, which in our case is mostly the file download. The optimal ζ is a value between 0.25 and 1. In the studied case, it can be said that the optimal ζ setting is not very sensitive to the service mix. However, it should be remarked that other simulation results show that the sensitivity of ζ depends on the considered performance targets for both data and video. For example, if the performance target for video telephony is stricter (for instance 1% instead of 5%), then the video telephony performance may become the limiting factor. In that case, the optimal value of ζ is more sensitive to the traffic mix: 3, 3, 2 and 1.5 for 25, 50, 75 and 100% video telephony load, respectively. We do not present these results due to a lack of space.

Concluding the sensitivity study, we consider the potential gain of self-optimisation. From Figure 33a-f it can be concluded that α = 0.01 is the best setting if the parameter settings of the scheduler were fixed. We want to compare this case with the case in which the scheduler is self-optimised. Our assumption is that the self-optimised algorithm is able to use the optimal α setting in every situation. The potential gain of self-optimisation in data-only scenarios is limited: on average 3.3%, with a maximum value of 16.6%, which corresponds to the scenario with few differences in average signal strength among users. Based on the simulation results, we have obtained the optimal value of ζ for different video telephony load percentages and for different performance targets for video and data. If the parameter settings of the scheduler were fixed, ζ = 0.75 (and α = 0.01) would be the best choice. This choice is fairly robust to the studied variations. As in the data-only scenarios, we compare this case with the case in which the scheduler is self-optimised. The gain in terms of maximum supportable cell load of self-optimisation with respect to the fixed setting was quantified for six combinations of performance targets, considering targets that apply either for calls in the whole cell or at the cell edge, and four video telephony load percentages. Due to space limitations we do not show the results here, but it can be said that the potential gain of self-optimisation in mixed traffic scenarios is limited: on average it is 4.6% and the maximum gain is 20%. This maximum value corresponds to the case with 25% video and performance targets of 1% packet loss and 250 kbit/s throughput for video and data respectively, for calls in the whole cell. If we consider that a practical implementation of the self-optimised scheduling algorithm would not be able to apply the optimal parameter settings in every situation, the gain would be even lower.

8.2.4 Algorithmic Approach

Considering the conclusion that the estimated potential for self-optimisation of packet scheduling parameters is not very large, no algorithmic development has been pursued within SOCRATES.

8.2.5 Results

Since the estimated potential appeared to be low and, therefore, no self-optimisation algorithms were developed for packet scheduling parameter optimisation, no further results are reported beyond those associated with the sensitivity analysis presented above.

8.2.6 Conclusion

Besides the key conclusion that we have observed no significant potential for the development of self-optimisation solutions, we need to add some notes to put this conclusion in proper perspective.

We first note that the observations and conclusions are based on a given reference packet scheduler, which we believe to be a typical and feasible implementation. As also noted before, this packet scheduler is characterised by an inherent adaptivity with respect to some channel and traffic characteristics. It is not unthinkable, however, that network vendors implement a packet scheduler with less (or different) inherent adaptivity, e.g. for reasons of implementation complexity. For such a packet scheduler a self-optimisation layer may then of course be more beneficial.

Furthermore, the potential of self-optimisation has effectively been investigated at a specific timescale, viz. the timescale at which the average scenario characteristics vary (e.g. average flow size, average service mix). At that timescale, the observed potential was very limited. What is still open for further research is to what extent self-optimisation at a finer timescale could be beneficial, where scheduling decisions respond to instantaneous scenario characteristics, e.g. the current number of real-time and non-real-time calls rather than the medium-term average service mix. With such swift adaptivity, however, it can be debated whether we are still dealing with self-optimisation or just a very advanced packet scheduler.

8.3 Handover Parameter Optimisation

8.3.1 Description of the Use Case

Handover is one of the key procedures for ensuring that users can move freely through the network while staying connected and being offered appropriate service quality. Since the handover success rate is a key indicator of user satisfaction, it is vital that this procedure happens in as timely and seamless a manner as possible (so that users remain connected and packet loss is minimal). In order for this to happen, it will sometimes be necessary to alter the handover parameters on a per-cell basis to account for regional differences between the cells, which are caused by the cell environment, i.e., the building density, regional user behaviour, antenna location, etc.

In currently deployed mobile networks, handover (HO) optimisation is done manually over a long timeframe, e.g., days or weeks, and on a need basis only. This approach is time consuming and may not be carried out as often as needed. By introducing an online self-optimising algorithm that tunes the control parameters of the HO process (handover parameters), we aim at improving overall network performance and user QoS. The handover parameter optimisation algorithms that have been developed in this use case group adapt the handover parameters in fixed optimisation intervals based on the observed handover performance. The handover performance can be assessed by using various measurements and system metrics, which are given in Section 8.3.2. We followed different approaches of identifying the current handover performance for the suggested handover parameter optimisation algorithms. The pros and cons of the individual solutions will be discussed in the conclusions section. The main goal of all handover optimisation algorithms is to reduce handover failures (HOF), i.e., HOs that are initiated but not carried out to completion, ping-pong handovers (HPP), i.e., repeated back-and-forth HOs between two eNodeBs, and radio link failures (RLF), i.e., loss of a user's connection to the network.

The first goal of the investigations in handover optimisation was to analyse the sensitivity of the network to handover parameter changes. This analysis is given in Section 8.3.3. Based on this analysis we developed concepts for handover parameter optimisation and developed three optimisation algorithms, i.e., the trend-based handover optimisation algorithm (TBHOA), the simplified trend-based handover optimisation algorithm (sTBHOA) and the HPI-sum based handover optimisation algorithm (HSBHOA). The individual algorithms will be described in detail in Section 8.3.4 and the simulation results of the optimisation algorithms will be given in Section 8.3.5.

8.3.2 KPIs and Control Parameters

There are three KPIs used for the assessment and evaluation of the handover optimisation algorithms, as presented below. The KPIs for this use case are called handover performance indicators.

Handover Failure Ratio

The handover failure ratio (HPIHOF) is the ratio of the number of failed handovers (NHOfail) to the number of handover attempts. The number of handover attempts is the sum of the number of successful (NHOsucc) and the number of failed (NHOfail) handovers:

HPIHOF = NHOfail / (NHOsucc + NHOfail)

A handover fails when the user tries to connect to the target eNodeB (TeNB) but the SINR is not good enough to sustain a connection. The UE will then try to hand back to its source eNodeB (SeNB).

Ping-Pong Handover Ratio

If a call is handed over to a new eNodeB and it then returns to the source eNodeB in less than the critical time (Tcrit = 5 s), this handover is considered to be a ping-pong handover.

The ping-pong handover ratio (HPIHPP) represents the number of ping-pong handovers (NHPP) divided by the total number of handovers, i.e., the sum of the number of ping-pong handovers (NHPP), the number of handovers where no ping-pong occurs (NHnPP) and the number of failed handovers (NHOfail):

HPIHPP = NHPP / (NHPP + NHnPP + NHOfail)

Radio Link Failure Ratio

The radio link failure ratio (HPIRLF) is the probability that a user loses the connection to an eNodeB because the user moves out of coverage (SINR < −6.5 dB). It is calculated as the ratio of the number of radio link failures (NRLF) to the number of calls that were accepted by the network (Naccepted):

HPIRLF = NRLF / Naccepted

A handover is initiated if two conditions are fulfilled: the Reference Symbol Received Power (RSRP) of a cell is greater than the RSRP of the connected cell plus the hysteresis value and this condition holds at least for the time specified in the time-to-trigger parameter. Hence, the hysteresis and time-to-trigger are the important control parameters for the handover performance and will be tuned by our optimisation algorithms. The considered parameter values are given below.

Hysteresis

In our simulations, the hysteresis values vary between 0 dB and 10 dB in steps of 0.5 dB, resulting in 21 different hysteresis values.

Time-to-Trigger

The time-to-trigger values for LTE networks are specified by 3GPP (see [5], Section 6.3.5). The values are (0, 0.04, 0.064, 0.08, 0.1, 0.128, 0.16, 0.256, 0.32, 0.48, 0.512, 0.64, 1.024, 1.280, 2.560, 5.120) s. Hence, these 16 values are the considered time-to-trigger (TTT) values in our simulations, which results in 336 different control parameter combinations of hysteresis and TTT. From now on, we will call a combination of a hysteresis and a TTT value a handover operating point (HOP).
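As an illustration, the following Python sketch enumerates the 336 handover operating points and checks the trigger condition described above (neighbour RSRP exceeding the serving-cell RSRP plus the hysteresis for at least the time-to-trigger); the function and variable names are illustrative.

```python
import itertools

# 21 hysteresis values (0..10 dB in 0.5 dB steps) and the 16 3GPP
# time-to-trigger values give 21 * 16 = 336 handover operating points (HOPs).
HYSTERESIS_DB = [0.5 * i for i in range(21)]
TTT_S = [0, 0.04, 0.064, 0.08, 0.1, 0.128, 0.16, 0.256,
         0.32, 0.48, 0.512, 0.64, 1.024, 1.28, 2.56, 5.12]
ALL_HOPS = list(itertools.product(HYSTERESIS_DB, TTT_S))


def handover_triggered(rsrp_serving_dbm, rsrp_neighbour_dbm,
                       hysteresis_db, ttt_s, time_condition_met_s):
    """Evaluate the handover trigger described above (illustrative sketch).

    time_condition_met_s -- how long the neighbour has continuously exceeded
                            the serving cell's RSRP by the hysteresis.
    """
    entering = rsrp_neighbour_dbm > rsrp_serving_dbm + hysteresis_db
    return entering and time_condition_met_s >= ttt_s
```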


8.3.3 Observability and Controllability

For the controllability and observability studies, 336 simulations for all defined HOPs (combinations of the two control parameters) have been carried out. In these simulations all cells selected the same HOP. The goal of these simulations was to analyse the handover performance in the complete optimisation space and to determine the sensitivity of the network to handover parameter changes. The simulation results showed the expected behaviour for the individual HPIs, i.e., HPIHPP was low for high handover parameter settings and HPIRLF was low for small handover parameter settings. HPIHOF showed a high peak (bad performance) for small handover parameters and an increase for higher handover parameters. This behaviour can only be explained by the detailed user mobility that has been taken into account, including the braking and acceleration of users (cars) in front of traffic lights. Users that stop at these traffic lights experience a certain eNodeB constellation for a longer time, which is affected by antenna beams along the street canyons. This eNodeB constellation changes dramatically after the cars accelerate again, which might lead to handover failures.

The following handover performance weighting function combines the three HPIs into one handover performance value and makes it possible to weight the importance of the individual HPIs:

HP = w1 · HPIHOF + w2 · HPIHPP + w3 · HPIRLF

where HP is the resulting handover performance and wx is the weight of the individual HPIs. The values for these weights are directly translated from the operator policy. The operator of a mobile network can thus influence the performance of the handover algorithm by manipulating the weighting parameters. A combination of [w1 = 1, w2 = 0.5, w3 = 2], e.g. gives priority to the reduction of RLFs, while HO failures are to be avoided but an increase in ping-pong handovers is tolerated as inevitable side effect of the RLF reduction. Figure 34 shows the handover performance HP for these weighting parameter settings in the optimisation space. To increase the readability HP is normalised on the highest HP value in the optimisation space and the TTT values are plotted in a logarithmic scale. A ditch of low HP values is noticeable lying in a circular shape around the HOP with a hysteresis of 0 dB and a TTT of 0 s. This ditch was observed in several simulation scenarios using a variety of simulation parameters. Hence, the handover optimisation algortihms have to drive the HOPs of the individual cells towards this area of good handover performance.
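As a minimal sketch of this weighting (illustrative code, not the project implementation), the HP value could be computed as follows:

def handover_performance(hpi_hof, hpi_hpp, hpi_rlf, w1=1.0, w2=0.5, w3=2.0):
    """Weighted combination of the three HPIs into a single HP value.

    w1, w2 and w3 reflect the operator policy; the defaults favour the
    reduction of radio link failures (w3 = 2) over ping-pong handovers
    (w2 = 0.5), as in the example given in the text.
    """
    return w1 * hpi_hof + w2 * hpi_hpp + w3 * hpi_rlf

# example: 5% HO failures, 10% ping-pongs and 2.5% RLFs give HP = 0.15
hp = handover_performance(0.05, 0.10, 0.025)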

Figure 34: Handover performance HP (weights: [w1 = 1, w2 = 0.5, w3 = 2])

8.3.4 Algorithmic Approach

Three HO optimisation algorithms have been developed based on the results of the observability and controllability (O&C) studies. The trend-based handover optimisation algorithm (TBHOA) uses HPI performance trends that have been derived from the O&C studies to optimise the handover performance of the individual cells. A simplified version of this algorithm, called the simplified trend-based handover optimisation algorithm (sTBHOA), was provided for integration purposes with different use cases to ease the implementation in other simulation environments. Simulation results for this optimisation algorithm can be found in Section 9.2. The HPI-sum based handover optimisation algorithm (HSBHOA) directly uses the HP for the optimisation and is thus less vulnerable to temporarily high HPI values. The optimisation algorithms are described in detail below.


8.3.4.1 Trend-based Handover Optimisation Algorithm

The flowchart of the trend-based handover optimisation algorithm is shown in Figure 35. The optimisation is based on the HPIs, which are collected and analysed by the TBHOA. The individual HPIs are compared to predefined thresholds to check the current handover performance. Initially, these thresholds are set to 5% for the handover failure ratio, 10% for the ping-pong handover ratio and 2.5% for the radio link failure ratio. These settings assume an operator policy that results in weighting parameter settings of w1 = 1, w2 = 0.5 and w3 = 2. Since the operator would give the reduction of radio link failures the highest priority, the target threshold for this HPI is the lowest at the beginning of the optimisation process. The target thresholds are decreased by 33% if all the HPIs stay below their target thresholds for a certain amount of time, called the good performance time (default value 30 s), since a good handover performance is detected in this case.

If at least one of the HPIs overshoots its performance target threshold for a certain amount of time, and hence the bad performance time (default value 10 s) is reached, it is checked whether the handover performance can be optimised. In the case that the HPIHOF and the HPIHPP exceed the target thresholds, or in the case that only the HPIRLF exceeds the target threshold, an optimisation is possible. In this case the handover operating point of the cell is changed according to the criteria given in Table 6. The optimisation criteria have been derived from the O&C studies and aim at steering the HOPs towards better performance for the individual HPIs. Note that the hysteresis value as well as the time-to-trigger value are changed by only one step per handover parameter optimisation. Hence, only neighbouring HOPs are considered for a HOP change.

If the HPIRLF and one of the other two HPIs exceed their target thresholds the optimisation actions would counteract each other since the criteria would steer the HOP in different directions. In this case the handover performance target thresholds are increased by 50% again.
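The threshold handling described above can be summarised in the following sketch; the function and parameter names are hypothetical and the logic is a simplified reading of the flowchart in Figure 35.

def adapt_thresholds(thresholds, good_performance, conflicting_criteria):
    """Adapt the HPI target thresholds of the TBHOA.

    thresholds: dict with keys 'hof', 'hpp', 'rlf' (initially 5%, 10%, 2.5%)
    good_performance: True if all HPIs stayed below their thresholds for the
                      good performance time (default 30 s)
    conflicting_criteria: True if HPI_RLF and one of the other HPIs exceed
                          their thresholds, i.e. the suggested parameter
                          changes would steer the HOP in opposite directions
    """
    if good_performance:
        return {k: v * (1 - 0.33) for k, v in thresholds.items()}  # tighten by 33%
    if conflicting_criteria:
        return {k: v * 1.5 for k, v in thresholds.items()}         # relax by 50%
    return thresholds

thresholds = {'hof': 0.05, 'hpp': 0.10, 'rlf': 0.025}
thresholds = adapt_thresholds(thresholds, good_performance=True,
                              conflicting_criteria=False)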

Figure 35: Trend-based handover optimisation algorithm


Table 6: Optimisation criteria for the HPIs

HPI                       Hysteresis        Time-to-Trigger   Optimisation
Handover failure ratio    < 5 dB            –                 ↑ TTT
                          5 dB – 7 dB       –                 ↑ TTT & ↑ HYS
                          > 7 dB            –                 ↑ HYS
Ping-pong handover ratio  < 2.5 dB          –                 ↑ TTT
                          2.5 dB – 5.5 dB   –                 ↑ TTT & ↑ HYS
                          > 5.5 dB          –                 ↑ HYS
Radio link failure ratio  > 6 dB            > 0.6 s           ↓ TTT & ↓ HYS
                          <= 6 dB           > 0.6 s           ↓ TTT
                          > 7.5 dB          <= 0.6 s          ↓ TTT & ↓ HYS
                          3.5 dB – 6.5 dB   <= 0.6 s          ↑ HYS
                          < 3.5 dB          <= 0.6 s          ↑ TTT & ↑ HYS

8.3.4.2 Simplified Trend-based Handover Optimisation Algorithm

The simplified trend-based handover optimisation algorithm (sTBHOA) is based on the handover optimisation algorithm that was described in the previous section and its flowchart is depicted in Figure 36.

Figure 36: Flowchart of the simplified trend-based handover algorithm


The handover performance indicators (HPIs) are collected for a certain time called the HPI averaging window. The optimisation interval of the sTBHOA should coincide with the HPI averaging window to ensure that the gathered statistics are not influenced by previous handover parameter settings. If the observation time for the current settings is not larger than the optimisation interval, no optimisation action is carried out. If the observation time is larger than the optimisation interval, the ping-pong handover ratio (HPI_PP) and the handover failure ratio (HPI_HOF) are compared to the predefined thresholds. If these HPIs perform better than their thresholds, the radio link failure ratio (HPI_RLF) is compared to its predefined threshold. In the case that all HPI performances lie below the thresholds, the HPI thresholds are decreased. The degradation factor for the HPI thresholds influences the adaptation speed of the handover algorithm; the default value is set to 33%, i.e., the HPI thresholds are decreased by one third. If the HPI_RLF performs worse than its threshold, the hysteresis and TTT for the corresponding cell are decreased. In line with the more detailed description of the trend-based handover algorithm in Figure 35, the hysteresis is decreased by 0.5 dB and the next lower TTT is selected from the TTT list in this case.

If the HPI_PP and the HPI_HOF perform worse than their thresholds while the HPI_RLF performs better than its threshold, the hysteresis and TTT are increased. Again, the hysteresis is increased by 0.5 dB and the next higher TTT is selected from the TTT list. In both cases, i.e., when increasing and decreasing the hysteresis and TTT, values below the minimum or above the maximum of the corresponding parameter ranges must not be selected; if a change would require going beyond these limits, no optimisation is carried out. If all HPIs perform worse than the predefined thresholds, the HPI thresholds are increased, by default again by 50%.
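A compact sketch of one sTBHOA optimisation step, under the assumption that the decision logic follows the description above (names and data structures are illustrative):

def stbhoa_step(hpi_pp, hpi_hof, hpi_rlf, thr, hys, ttt,
                hys_max=10.0,
                ttt_list=(0, 0.04, 0.064, 0.08, 0.1, 0.128, 0.16, 0.256,
                          0.32, 0.48, 0.512, 0.64, 1.024, 1.280, 2.560, 5.120)):
    """One optimisation interval of the simplified trend-based HO algorithm.

    thr is a dict of thresholds for 'pp', 'hof' and 'rlf'. Returns the updated
    (thr, hys, ttt). Step sizes follow the text: the hysteresis changes by
    0.5 dB and the TTT moves to the neighbouring entry of the TTT list.
    """
    i = ttt_list.index(ttt)
    pp_bad, hof_bad, rlf_bad = (hpi_pp >= thr['pp'], hpi_hof >= thr['hof'],
                                hpi_rlf >= thr['rlf'])
    if not (pp_bad or hof_bad or rlf_bad):
        thr = {k: v * (1 - 0.33) for k, v in thr.items()}   # tighten thresholds
    elif rlf_bad and not (pp_bad or hof_bad):
        hys = max(0.0, hys - 0.5)                           # hand over earlier
        i = max(0, i - 1)
    elif pp_bad and hof_bad and not rlf_bad:
        hys = min(hys_max, hys + 0.5)                       # hand over later
        i = min(len(ttt_list) - 1, i + 1)
    else:
        thr = {k: v * 1.5 for k, v in thr.items()}          # relax thresholds
    return thr, hys, ttt_list[i]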

8.3.4.3 HPI-Sum Based Handover Optimisation Algorithm

The HPI-sum based handover optimisation algorithm also changes the handover operating points of individual cells based on their current handover performance. The difference from the two previous approaches is that the individual HPIs are no longer compared to thresholds; instead, the weighted sum HP introduced in Section 8.3.3 is optimised by the HSBHOA over neighbouring HOPs. The weighting factors are again set to w1 = 1, w2 = 0.5 and w3 = 2, favouring the reduction of RLFs. The general idea is to evaluate the HP for a HOP, compare it to the HP experienced at the last HOP, decide on an optimisation direction based on the outcome and finally change the HOP in that direction.

In order to be able to select a new HOP by switching the optimisation direction, the range of considered HOPs has to be limited to a subset of the 336 HOPs introduced before. Based on the O&C studies and the location of the ditch in the HP shown in Figure 34, the considered HOPs are limited to a diagonal line in the handover operating space. The studies showed that a HOP with good handover performance can be found on every diagonal line in the handover operating space. The HOPs used in the individual integration use case groups are given in the corresponding sections. The requirements for choosing a diagonal line of valid handover operating points are the following:

- All time-to-trigger values have to be considered for the optimisation.
- At least 16 different hysteresis values have to be considered.
- The valid handover operating points have to lie on a diagonal line in the handover operating space, following a straight line or a step function in this space.

A flowchart of the HSBHOA is presented in Figure 37. At the beginning of the optimisation, HOPs and optimisation directions have to be set for all cells as starting condition. Optimisation actions are carried out in fixed optimisation intervals to avoid the consideration of outdated HPI data. In these optimisation intervals the HP is compared to the HP of the last HOP (HPLAST). If the HP of the current HOP is lower, a new HOP is selected in the same optimisation direction. Otherwise, the optimisation direction is changed and a new HOP is selected in that direction.
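A minimal sketch of this direction-switching search along the diagonal line of valid HOPs (illustrative only; the actual HSBHOA additionally maintains per-cell state and the fixed optimisation interval):

def hsbhoa_step(state, hp_current, hop_line):
    """One optimisation interval of the HPI-sum based algorithm.

    state: dict with 'index' (position on the diagonal line of valid HOPs),
           'direction' (+1 or -1) and 'hp_last' (HP of the previous HOP)
    hp_current: weighted HPI sum measured for the current HOP
    hop_line: list of (hysteresis, TTT) tuples forming the diagonal line
    """
    if state['hp_last'] is not None and hp_current >= state['hp_last']:
        # performance got worse (higher HP) -> reverse the search direction
        state['direction'] *= -1
    # move one step along the diagonal line, staying within its bounds
    state['index'] = min(max(state['index'] + state['direction'], 0),
                         len(hop_line) - 1)
    state['hp_last'] = hp_current
    return hop_line[state['index']]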


Figure 37: HPI-Sum based handover optimisation algorithm flowchart

8.3.5 Results

8.3.5.1 Trend-based Handover Algorithm

The results of an initial system simulation are shown in Figure 38. For this simulation a low HOP was chosen for all cells in the network as starting condition. As expected the HPIHPP shows a bad performance in the beginning of the simulation. The TBHOA continuously lowers the HPIHPP after some time and thus increases the handover performance of the network. Nevertheless, it can be observed that the HPIRLF increases as well towards the end of the simulation. However, the TBHOA improves the handover performance of the network.

Figure 38: HPIs using the TBHOA with low initial HOPs

The operating point with hysteresis of 6 dB and a time-to-trigger of 320 ms is chosen as the starting HOP for all cells in the network for the next simulation results. The handover performance of a network with this fixed operating point will be compared to the handover performance of the optimised network. The simulation results for the fixed operating point with a hysteresis of 6 dB and a time-to-trigger of 320 ms


are depicted in Figure 39. The pink line shows the radio link failure ratio whereas the blue line gives the ping-pong handover ratio. The handover failure ratio is zero for the complete simulation time. This is because the selected handover operating point gives a very good handover performance for the complete network. The ping-pong handover ratio almost reaches 16 % which leads to a high amount of signalling overhead in the network. The handover performance for the optimised network is depicted in Figure 40. The system performance is the same for the first 900 seconds simulation time. After this time the tuning of the handover parameters increases the handover performance and hence avoids the increase in ping-pong handovers. Note that the starting HOP is one of the best HOPs that has been found in the system simulations.

Figure 39: HPIs with fixed HOPs

Figure 40: HPIs using the TBHOA

The simulation results show that the TBHOA optimises the handover performance. Besides the shown simulation results the algorithm has been tested in various simulation scenarios and showed similar performance.

8.3.5.2 HPI-Sum Based Handover Optimisation Algorithm

The results of a system level simulation using the HSBHOA are shown in Figure 41 and Figure 42. For the reference simulations a fixed HOP with a hysteresis of 6 dB and a TTT of 320 ms has been selected


since this is a HOP with a very good handover performance. The HSBHOA starts with a HOP of 7.5 dB and a TTT of 1.024 s. Figure 41 shows the progress of the radio link failure ratio. Obviously the HPIRLF shows a higher value in the beginning of the simulation for the HSBHOA. The optimisation algorithm decreases the number of radio link failures and outperforms the reference case with a fixed HOP in the end.

Figure 41: Radio link failure ratio using the HSBHOA

The progress of the HPIHPP is depicted in Figure 42. The reference simulation shows a higher amount of ping-pong handovers over the complete simulation time. At the end of the simulation the ping-pong handovers increase for the HSBHOA as well. This can be explained by optimisation actions of the algorithm that reduce the hysteresis and TTT in the network.

Figure 42: Ping-Pong handover ratio using the HSBHOA

The simulation results show that the HSBHOA optimises the handover performance of the network. The HSBHOA has proven to be less vulnerable to temporary high increases in HPI statistics. More simulation results for this algorithm can be found in Section 9.1.

8.3.6 Conclusions

The results of the investigations in the handover parameter optimisation use case show that an optimisation of the handover parameters is beneficial for the overall system performance and that


optimisation algorithms can operate based on the defined measurements (HPIs). The proposed optimisation algorithms have been tested in different simulation environments and showed a significant improvement of the handover performance in all scenarios. Furthermore, the simulations showed that handover parameter optimisation should be done on a time scale of minutes rather than seconds, because a reasonable number of handover events has to be observed before the effect of handover parameter changes can be evaluated. In addition, the proposed handover parameter optimisation algorithms can be controlled by an operator (operator policy) to adjust the optimisation strategy to the operator's needs.

8.4 Load Balancing

8.4.1 Description of the Use Case

The load balancing (LB) use case group aims at developing methods and algorithms for network self-optimisation, with the goal of spreading the traffic amongst neighbouring cells. In a mobile communication network with a non-uniform user distribution, heavily loaded cells, where the utilisation of radio resources (i.e., resource blocks (RBs)) is close to or reaches 100%, may be in the neighbourhood of lightly loaded cells. For some of the users located within a highly loaded cell the required quality cannot be achieved, while at the same time resources of lightly loaded neighbouring cells are wasted. Therefore, the goal of LB is to achieve a more balanced distribution of load between cells by handing over some of the users from the overloaded cell to lightly loaded neighbour cells.

In case of traffic overload, the eNB(s) serving the overloaded area may reduce the resources allocated to some dedicated users that require them, for example, for a video conference, because these users have to share resources with a number of users with higher priority (e.g., voice calls). In this situation QoS requirements cannot be met, but using LB functionality part of the load may be shifted to neighbouring cells, and the freed resources may be reallocated to users that need them to achieve the required quality for the service they use.

To achieve the LB goal, a larger cell overlap is desired and coverage modifications may be needed. An artificial overlap may be achieved by increasing the HO offset between cells, as shown in Figure 43. An additional HO offset virtually shrinks the overloaded cell and extends the coverage area of less loaded neighbour cells. Users are forced to hand over between the Serving eNB (SeNB) and the Target eNB (TeNB) not because of optimal radio conditions but because of the overload situation.


Figure 43: Load Balancing approach

Shifting load from the overloaded parts of the network to the lightly loaded cells can improve network quality metrics that are of importance for operators, for example, Quality of Service (QoS), Grade of Service (GoS), total number of served connections, and resource utilisation.

The LB functionality requires exchange of load status information between neighbouring cells that are potential targets. These reports need to be transferred over an existing interface like the X2. The LB algorithm is also based on UE measurements, which are reported periodically or on eNB demand. According to the operators' policies and preferences, rapid changes in the network settings (like the increase or decrease of transmission power) should be avoided, and the LB algorithm should have a minimum impact on the network settings apart from redistributing the load. These constraints have also been verified based on simulations.

8.4.2 KPIs and Control Parameters

One of the key metrics for an operator is the number of satisfied users that can be accommodated within a given area. LB aims to increase this metric by shifting load from overloaded cells to other (e.g., intra-frequency) neighbour cells. The focus of the investigations that have been conducted within SOCRATES is on intra-RAT LB. As LB is achieved by means of HOs, the HO performance should also be observed in the impacted area, by means of HO-specific KPIs.

8.4.2.1 Load Balancing KPIs

Number of Unsatisfied Users due to Resources Limitation

Assuming that LB does not react to fast fading, investigations can be based on long-term average values, and the long-term average number of satisfied users can be used as assessment criterion. While throughput is a good metric for best-effort users, for other services it is better to consider the concept of user satisfaction, which can be measured based on a certain data rate requirement.

A mathematical framework to derive the average number of satisfied users using the concept of virtual load is explained in detail in [35]. Based on the long-term SINR conditions of the users before and after applying LB, and a given average data rate requirement Du per user u, the throughput mapping R(SINRu) as a data rate per physical resource block (PRB) and given SINR is calculated (e.g., based on the concept of a truncated Shannon-Gap mapping curve) and is related to the number of available PRBs MPRB. This results in the virtual cell load that can be expressed as the sum of the required resources of all users u connected to cell c by connection function X(u) which gives the serving cell c for user u.


\rho_c = \sum_{u \mid X(u)=c} \frac{D_u}{R(SINR_u) \cdot M_{PRB}}

All users in a cell are satisfied as long as ρ_c ≤ 1. In a cell with ρ_c > 1 we will have a fraction of 1/ρ_c satisfied users. For instance, a virtual load of 300% would mean that only 33% of the users in that cell are satisfied.

Finally, we define a network-wide metric, which is the total number of unsatisfied users in the whole network (the sum of unsatisfied users per cell, where the number of users in cell c is denoted by Mc). This can be written as

N_{unsatisfied} = \sum_c \max\left(0,\; M_c \cdot \left(1 - \frac{1}{\rho_c}\right)\right)

Note that the max operator is required in order to avoid a negative number of unsatisfied users in a cell in cases where the cell is not fully loaded.
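The virtual-load computation can be illustrated with the following sketch, where each user carries its data rate requirement D_u and the mapped rate per PRB R(SINR_u); names and data structures are illustrative, not taken from [35].

def unsatisfied_users(users, cells, n_prb):
    """Network-wide number of unsatisfied users based on the virtual load.

    users: list of dicts with 'cell' (serving cell id), 'rate_req' (D_u, bit/s)
           and 'rate_per_prb' (R(SINR_u), bit/s per PRB)
    cells: iterable of cell ids
    n_prb: number of available PRBs per cell (M_PRB)
    """
    total = 0.0
    for c in cells:
        in_cell = [u for u in users if u['cell'] == c]
        # virtual cell load: sum of the PRB fractions required by the users
        rho = sum(u['rate_req'] / (u['rate_per_prb'] * n_prb) for u in in_cell)
        m_c = len(in_cell)
        # only a fraction 1/rho of the users is satisfied when rho > 1
        total += max(0.0, m_c * (1 - 1 / rho)) if rho > 0 else 0.0
    return total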

Number of Unsatisfied Users due to Transmission Power Limitation

The limiting factor for UL transmission is the UE's maximum allowed transmission power (Pmax); therefore it is possible that a user is not able to meet the required QoS (e.g., throughput) although PRBs are available. Considering Pmax and the uplink target received power per PRB, the number of granted resources might be insufficient and the user will be unsatisfied (not because of limited resources in the cell but due to the UE transmission power limit). The overall number of unsatisfied users due to power limitation can be expressed as the number of users u, summed over all cells, whose number of required PRBs is higher than the maximum allowed number:

N_{unsatisfied,P} = \sum_c \left|\{\, u \mid X(u)=c,\; M_{PRB,u} > M_{PRB,max,u} \,\}\right|

Call Dropping Ratio

Call dropping may happen in case of an unsuccessful HO or a link failure. Due to simulation tool implementation limitations, only the total number of link failures is calculated. A radio link failure takes place if the radio link condition is below the minimum requirements. For the simulations we assume that a connection is dropped if the SINR level is below -6.5 dB for a certain amount of time (Tqout = 1 s).

Total Number of HOs

Not only HOs caused directly by the LB functionality but also standard HOs (the number of HOs executed due to radio conditions might increase or decrease because of the cell border changes applied by LB) require additional signalling overhead; an increasing number of HOs is therefore not a desired effect and not in line with the operator policy. A higher number of LB HOs may have a positive impact on other KPIs but may still not be acceptable to the operator. The total number of HOs includes standard HOs, ping-pong HOs and LB HOs.

Ping-Pong HO

The ping-pong handover ratio is defined as the number of ping-pong handovers (N_HOpp) divided by the total number of handovers. A ping-pong handover happens if a call that has been handed over to a new TeNB is handed back to the source SeNB in less than the critical time (T_crit = 1 s).

8.4.2.2 Control Parameters

The LB algorithm begins by issuing an alarm signal when the load within the cell exceeds the predefined acceptable load threshold and operates mainly on load measurements and predictions of the load after LB operations. To achieve optimal results, the control parameters related to load thresholds need to be properly adjusted.

Maximum Accepted Load

The LB functionality should be triggered before the load level reaches 100%, in order to keep free resources for operational reasons and to avoid overload situations in advance. This parameter indicates the maximum load a cell can handle without going into a congested state. The introduction of a lower threshold set at, e.g., 70% of the maximum load (only setting a flag indicating that the load has increased and that the target value might soon be reached) might be useful. This parameter is used as a trigger to start LB operations.


Target Load Level

This parameter defines an optimal load level at the eNB for which it has been shown that the QoS and GoS parameters are satisfactory for the majority of the users and unwanted events (e.g., RLF, ping-pong HO) are minimised. It defines the target load level at the eNB that should be achieved and not exceeded after the LB procedure. The target load level has to be adjusted for both the SeNB and the TeNB participating in the LB process: the target load level at the SeNB defines the load level that should be achieved by load reduction, while at the TeNB it defines the load level that should not be exceeded by newly accommodated users.
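A minimal sketch of how these two load-related control parameters could be used to trigger LB and to bound the load shifted towards a target cell; the function name is hypothetical and the default values are taken from the controllability study below.

def lb_decision(load_senb, load_tenb,
                load_to_trigger=1.00,    # maximum accepted load at the SeNB
                target_load_senb=0.95,   # load the SeNB should be reduced to
                target_load_tenb=0.95):  # load the TeNB must not exceed
    """Return the fraction of the SeNB load that LB should try to shift
    towards the given target cell, limited by that cell's headroom;
    0.0 means LB is not triggered.
    """
    if load_senb < load_to_trigger:
        return 0.0
    load_to_shed = max(0.0, load_senb - target_load_senb)
    tenb_headroom = max(0.0, target_load_tenb - load_tenb)
    return min(load_to_shed, tenb_headroom)

# a lower warning threshold (e.g. 70% of the maximum load) could additionally
# be used to raise a flag before the trigger level is actually reached
print(lb_decision(load_senb=1.05, load_tenb=0.60))   # roughly 0.10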

8.4.3 Observability and Controllability

The impact of LB operations on network performance can be observed by measuring the defined KPIs. Having defined the assessment criteria in the previous section, in this section we investigate which control parameter variations have an impact on which assessment criteria and to what extent. The controllability study serves as an input to the algorithm, giving a better understanding of the interactions and interdependencies between the applied changes to control parameters and the observed performance changes in the network.

If one of the following parameters is varied in a simulation, the other parameters are set to the values listed below:

- HO threshold (hysteresis): 3 dB
- Max HO offset: 10 dB
- Load to trigger: utilisation of 100% of the RBs accessible at the SeNB
- Target load at SeNB: 95% of the RBs accessible at the SeNB
- Target load at TeNB: 95% of the RBs accessible at the TeNB

Table 7: LB KPIs as a function of different control parameter settings; simulation assumptions: regular network grid, ISD 500 m, 40 users per cell (uniformly dropped), 400 users in a hotspot (a group of users concentrated in a small area), CBR traffic type 30 kbps

Control parameter setting        Unsatisfied users (avg. #)   Call drops (#)   Total number of HOs (#)   HO ping-pong ratio (%)
Reference                               40.2                      9590              29326                     0.000
Hysteresis [dB]          1               8.1                      5124              74879                     0.461
                         3               1.9                     10412              39481                     0.170
                         5               1.1                     18973              30003                     0.072
Max HO offset [dB]       3               4.0                      9989              36217                     0.123
                         5               2.4                     10308              38981                     0.166
                         7               2.0                     10396              39487                     0.171
                         10              1.9                     10412              39481                     0.170
Load to trigger [%]      70              3.0                      9577              64432                     0.435
                         80              2.2                      9793              56522                     0.369
                         90              1.9                     10185              48155                     0.281
                         100             1.9                     10412              39481                     0.170
Target load at SeNB [%]  70              1.1                     10909              38350                     0.143
                         80              1.3                     10636              38486                     0.147
                         90              1.9                     10412              39481                     0.170
                         95              2.3                     10258              39619                     0.174
Target load at TeNB [%]  70              4.9                     10714              44973                     0.261
                         80              2.9                     10518              42471                     0.220
                         90              1.9                     10412              39481                     0.170
                         95              1.5                     10374              38380                     0.147

The simulation results in Table 7 show that LB improves network performance by reducing the number of unsatisfied users in the network in most of the simulations with different control parameter settings. A lower number of unsatisfied users can be achieved in individual cases, but at the expense of an increase in other KPIs, such as the total number of HOs or the ping-pong HO ratio.

Algorithmic Approach

LB is achieved by means of handing over users from the overloaded cells to cells which can accommodate additional load. As this HO needs to be based on detailed and up-to-date knowledge of the radio environment, it is performed by the eNB. Based on the knowledge of the load situation at the TeNB, the SeNB needs to estimate the appropriate number of users that can be handed over to the TeNB. Users will generate a different load after the HO to the TeNB compared with the load before the HO at the SeNB, and this load should not exceed the load reported as available by the TeNB. The problem of limited resources at the TeNB could be solved by admission and congestion control mechanisms. This solution, however, may increase the number of rejected LB HO requests, increase the time required to achieve the best load distribution through the LB functionality, and furthermore cause unnecessary signalling overhead. For LB purposes we therefore propose a method for predicting the load required by the UE at the TeNB, using an estimation of the SINR after the LB HO. The proposed method presents the main idea of SINR estimation and only uses available UE measurements like RSRP and RSSI, or eNB measurements like IoT.

Another important issue to be solved by the LB algorithm is the decision on which neighbour cell is to be chosen as TeNB, and how far HO offsets should be adjusted to achieve the highest gain in network performance. Two approaches were investigated: the first one with a focus on transferring load to the lowest possible number of neighbour cells, and the second one on redistributing load over a higher number of neighbour cells while keeping the individual HO offset adjustments as small as possible. With the first approach, a smaller number of target cells is needed, but for heavily overloaded cells the algorithm rapidly moves the operating point close to the maximum HO offset value. High HO offset values may lead to a worse load estimation after the LB HO, which risks exceeding the allowed load level at the TeNB side, causing potential HO request rejections, and, due to the power limitation in UL, it may lead to a higher number of RLFs. With the second approach, high additional signalling overhead is expected since the SeNB needs to communicate with more target cells. However, this algorithm distributes the load much more equally over the TeNBs and operates with lower HO offsets compared to approach 1. The second approach is also better suited for users limited in UL transmission power, as they do not need to increase their transmission power level as much and stay below the maximum. Thus, the second approach has been chosen and applied in the following "LB HO offset setting" algorithm, which provides HO offset settings according to the load status in the overloaded and neighbouring cells.


require: list L, obtained from OAM, of potential Target eNBs (TeNBs) for LB HO

1  collect measurements from users; RSRP towards the potential TeNBs is reported
2  group users according to the best TeNB for LB HO (criterion is the difference between the SeNB and TeNB measured signal quality)
3  get information from the TeNBs on available resources
4  estimate the number of required PRBs after LB HO for each user within the LB HO groups
5  sort users U according to the HO offset required towards their TeNB
6  i ← 1; T ← 0
7  while ρ_SeNB > ρ_Thld,SeNB and i ≤ size(L) and T < T_max do
8      T ← T + step
9      L ← sort(TeNBs according to the number of users allowed to LB HO with given T, descending order)
       {find the target cell C with the largest number of potential LB HOs for HO offset T}
10     while ρ_SeNB > ρ_Thld,SeNB and i ≤ size(L) do
       {in this loop the ability of the cells in list L to accommodate load is verified}
11         C ← L(i)   {take the next cell from the sorted list}
12         estimate ρ̂_C and Δρ_SeNB,T after the HO for the given T   {based on the predicted SINR after the HO}
13         if ρ̂_C < ρ_Thld,C then
14             ρ_SeNB ← ρ_SeNB − Δρ_SeNB,T   {update the load in the overloaded cell by subtracting the handed-over load}
15             T_C,u ← T
16         end if
17         i ← i + 1
18     end while
19 end while
20 apply the calculated HO offsets T_C,u to the users U

Figure 44: Pseudo-code for the DL Load Balancing algorithm.

where

L             list of TeNBs
ρ_SeNB        load at the SeNB
ρ_Thld,SeNB   threshold of maximum load at the SeNB
C             TeNB from the list L
ρ̂_C           estimated load in the TeNB after LB HO
ρ_Thld,C      threshold of maximum load at the TeNB
T_C,u         HO offset for user u and TeNB C
step          HO offset step
Δρ_SeNB,T     load at the SeNB that will be reduced after LB HO
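A compact Python rendering of the pseudo-code in Figure 44 is sketched below; the load and SINR estimation after the HO is abstracted into a caller-supplied placeholder function, and the grouping and sorting of individual users (lines 1-5 of the pseudo-code) is omitted.

def lb_ho_offset_setting(load_senb, thld_senb, tenb_list, thld_tenb,
                         estimate_after_ho, step=0.5, t_max=10.0):
    """Sketch of the 'LB HO offset setting' algorithm of Figure 44.

    load_senb:  current load of the overloaded serving cell
    thld_senb:  maximum accepted load at the SeNB
    tenb_list:  candidate target cells (list L obtained from OAM)
    thld_tenb:  dict of maximum accepted load per target cell
    estimate_after_ho(cell, t): placeholder returning (estimated TeNB load,
                load removed from the SeNB) for HO offset t
    Returns a dict {target cell: HO offset} to be applied to the grouped users.
    """
    offsets = {}
    t = 0.0
    while load_senb > thld_senb and t < t_max:
        t += step
        # prefer target cells that can absorb the most load for this offset
        candidates = sorted(tenb_list,
                            key=lambda c: estimate_after_ho(c, t)[1],
                            reverse=True)
        for c in candidates:
            if load_senb <= thld_senb:
                break
            est_load_c, shed_load = estimate_after_ho(c, t)
            if est_load_c < thld_tenb[c]:
                load_senb -= shed_load    # update load of the overloaded cell
                offsets[c] = t            # remember the offset towards cell c
    return offsets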

8.4.4 Results

We have proposed two algorithmic approaches for LB which are based on HO offset modifications. The goal of the proposed algorithms is to find the best way to estimate the load that can be accommodated by neighbouring cells and to estimate the HO offset level required to transfer an adequate number of users. We have developed methods for estimating the SINR after the LB HO as the main part of the load estimation mechanism.



As we can see from the simulation results presented in Figure 45, the proposed algorithm can be very efficient. Generally, the number of unsatisfied users after LB decreases for each of the results presented below, as it was also observed during the simulations provided in Table 7.

The proposed LB algorithm is much better suited for managing overload situations in networks with a lower inter-site distance (ISD), where large overlap areas allow the LB algorithm to reallocate more users, with lower HO offsets, to the neighbouring, less loaded cells. The fluctuation of the number of unsatisfied users over time shows how the overload situation depends on the hot-spot position (close to the cell border or the cell centre) as well as on the LB efficiency. For UL transmission, besides the users that can be limited by a lack of resources, there are also users limited by the maximum transmission power. The users in power limitation are clearly influenced by the cell sizes: the lowest KPI is observed in the scenario with ISD = 500 m and a 30 kbit/s service (LB was able to reduce the overload almost to zero for UL transmission), the highest in the scenario with ISD = 1700 m (the gain from LB is still significant, but part of the users remain unsatisfied because they are power limited).

Figure 45: Number of unsatisfied users due to limited resources (DL row), and number of unsatisfied users due to limited resources and power limit (UL row); simulation assumptions: regular network grid, ISD 500/1700 m, 40 users per cell (uniformly dropped), 400 users in a hotspot, CBR traffic type 30 kbps

8.4.5 Conclusions

The main objective of this use case was to develop and validate a self-optimising algorithm for load balancing in LTE networks which is able to redistribute load over the neighbouring cells according to the current load status in the overloaded and adjacent cells. The presented LB algorithm reallocates users from the overloaded cell to less loaded adjacent cells by means of additional HO offsets between them. The proposed SINR estimation and load prediction method allows this algorithm to calculate and adjust proper HO offsets and to indicate the users to be handed over. Simulation results show a network performance gain through the reduction of the number of unsatisfied users in DL as well as in UL transmission. The reduction of unsatisfied users means an improvement of the network capacity, but it has been achieved at the expense of the degradation of other observed KPIs (e.g., ping-pong HO). This is a trade-off that needs to be considered by the network operators, but as long as those changes stay within acceptable margins, the


operators should, in case of overload situations, be able to increase network capacity simply by a proper load distribution.

8.5 Self-Optimisation of Home eNodeBs

8.5.1 Description of the Use Case

In LTE, so called home base stations, home eNodeBs (HeNBs), are expected to be used widely. HeNBs will create or extend coverage and/or capacity in the area where they are installed. The HeNBs will differ from macro eNBs in several aspects. For example, potentially there will be a large number of HeNBs. Further, a HeNB may have closed access, meaning that only User Equipments (UEs) belonging to the Closed Subscriber Group (CSG) of the HeNB can connect to it. The home eNBs will also use a lower maximum transmit power than macro eNBs, as they are meant to cover smaller areas than the macro eNBs.

Due to the small coverage areas, different handover settings compared to what is used in the macro network may be needed. This is because fast moving UEs may leave the HeNB coverage area quickly and it is therefore not always beneficial to hand over to the HeNB. Closed access will also lead to interference from the HeNB to the macro network, and vice versa, both in uplink and downlink.

As the HeNBs are installed by the customer, the deployment cannot be planned, and parameter settings cannot be optimised, by the operators. HeNB self-optimisation functionality will therefore be crucial. Such functionality would aim at maximising the gain of the HeNBs, in terms of increased capacity, coverage and quality in the area where they are deployed, while limiting the negative influence on the macro network.

For HeNBs, self-optimisation of HeNB interference and coverage, and self-optimisation of HeNB handover have been studied. The aim of the HeNB interference and coverage self-optimisation is to provide HeNB coverage while minimising the interference on the macro network. For this sub-use case, co-frequency, closed access HeNBs are considered, i.e., HeNBs operating on the same frequency as macro eNBs, but only allowing certain UEs to connect to it. This deployment set-up is selected because such HeNBs are foreseen to give the most challenging interference situation.

The aim of the HeNB handover self-optimisation is to provide seamless handover between HeNBs and from a HeNB to a macro eNB and vice versa. As HeNBs have small coverage areas, handover should preferably be performed only if the UE is expected to stay in the coverage area and the HeNB can provide better throughput than the macro eNB. The handover optimisation study performed within SOCRATES focuses on open access HeNBs. This was selected because an open access HeNB allows all UEs within the coverage area to connect to it. Hence, also UEs passing through the small coverage area of the HeNB, could connect to it, but this may not always be beneficial.

The results from the study of these two HeNB sub-use cases are presented here.

8.5.2 KPIs and Control Parameters

8.5.2.1 Interference and Coverage

Measurements

eNodeB measurements – To detect (problems related to) high interference in the network, measurements such as Call Dropping Ratio and Uplink or Downlink Received Interference Power may be performed by the HeNB and its neighbouring eNodeBs. HeNB measurements of the downlink interference require a downlink receiver function in the HeNB and aim at capturing the approximate interference UEs in the vicinity would experience from surrounding (Home) eNodeBs.

UE measurements – Measurements aiming at identifying (bad performance due to) high interference are Reference Signal Received Power (RSRP), Downlink Received Interference Power, Reference Signal Received Quality (RSRQ), Channel Quality Indicator, User Throughput, Packet Delay and Packet Loss Ratio. Further, the UE’s Power Headroom may be reported to the HeNB to assess whether a UE is power limited.

Control Parameters

The area served by a HeNB may be altered by increasing or decreasing the transmit power in uplink and downlink. This should however not be done without considering the effects on interference on surrounding eNodeBs and UEs. In order to limit the interference caused by the HeNB, shared and segregated spectrum methods can be used for example by assigning only part of the shared frequency band to HeNBs. The parameters specifying these methods may be used to control the interference.


Another option is to use beamforming, but this is considered to have limited possibilities for HeNBs and is therefore not discussed further.

Downlink power – In order to change the area in which the HeNB is considered to be the best serving cell, the reference signal power and the synchronisation signal power should be changed. However, control channels and traffic channels should also reach the cell edge for UEs to be able to set up a session and achieve acceptable throughput. Assuming that the power of the physical channels is set relative to the maximum downlink power such that they all cover approximately the same area, the total downlink power can be used as a coverage and interference control parameter.

Uplink power – The power settings on the physical uplink channels are set using power control, based on a combination of open-loop and closed-loop mechanisms [41]. The parameters specifying the control loops can be used as control parameters for coverage and interference. The maximum allowed uplink power and cell-specific nominal power components are other suitable control parameters.

Scheduling-related parameters – Parameters controlling the resources shared with macro eNodeBs may be used as control parameters for the interference, e.g., the borders of the sub band assigned to the HeNB.

8.5.2.2 Handover

Measurements

A number of measurements may be used as input to the HeNB handover optimisation. As for the interference and coverage case, the proposed measurements may be performed by UEs, HeNB and neighbouring eNodeBs. The physical layer measurements proposed for self-optimisation of handover are similar to those described above. In addition, information such as the UE history, performance counters and user mobility can be used for self-optimisation of handover:

UE history – The UE history information element, as standardised in 3GPP [42][43], is used to pass information from the source to the target handover cell regarding the (up to 16) most recently visited cells by the UE during the current connection. For each visited cell, the UE history provides the cell ID of the visited cell, the cell type (based on cell size), and UE’s dwell time in the cell.

Performance counters – The most useful counters for the handover optimisation are the number of triggered and successful handovers per source-target cell relation [44].

User mobility – Information on the user speed is a useful input to handover optimisation. Accurate speed may be difficult to obtain, but a split into a small number of different speed categories may be sufficient. Other mobility information, such as patterns of movement could also be considered. 3GPP distinguishes three different states for both handover and idle mode cell reselection [26][45]: high-, medium- and normal-mobility. The UE itself determines in what state it is by counting the number of cell reselections in idle mode, or the number of handovers in connected mode. However, this approach may not be sufficient to estimate speed in a network with varying cell sizes. Other methods to determine user mobility are based on the analysis of physical layer measurements and GPS information.

Control Parameters

The handover for the E-UTRAN is described in 3GPP standardisation by means of conditions to enter and leave different events [26]. Events A2 (serving becomes worse than threshold), A3 (neighbour becomes offset better than serving) and A5 (serving becomes worse than threshold1 and neighbour becomes better than threshold2) are of most relevance to the HeNB handover self-optimisation.

The handover process can be described as follows. The UE periodically measures the RSRP or RSRQ of the serving cell. Event A2 is entered when the RSRP or RSRQ measurement of the serving cell becomes worse than a threshold. With the start of event A2, the UE starts measuring the RSRP/RSRQ of the neighbouring cells. With events A3 or A5, the handover can be executed to the neighbouring cell which best satisfies the handover condition. The control parameters to be used are specified within the events A2, A3 and A5 [26]. Their function in the handover self-optimisation is presented in Table 8.


Table 8: Handover control parameters

- Hysteresis (Hys): avoids 'ping-pong' effects between the HeNB and eNodeB or other HeNBs and limits the number of UE reports due to insignificant measurement fluctuations.
- Cell offsets (Ocs, Ocn): favouring/discriminating a specific cell.
- Frequency offsets (Ofs, Ofn): favouring/discriminating a specific frequency.
- General offsets (Off): give an extra margin to account for effects not covered in the cell or frequency offsets.
- Event thresholds (Threshold, Threshold1, Threshold2): set the allowed RSRP or RSRQ degradation for the different events.
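To illustrate how these parameters enter the handover decision, the sketch below checks the entering condition of event A3 (following the usual 3GPP formulation Mn + Ofn + Ocn − Hys > Ms + Ofs + Ocs + Off); the time-to-trigger condition is omitted and all values are examples only.

def a3_entering_condition(mn, ms, hys=2.0, ofn=0.0, ocn=0.0,
                          ofs=0.0, ocs=0.0, off=0.0):
    """Entering condition of event A3 (neighbour becomes offset better than
    serving). mn and ms are the RSRP or RSRQ measurements of the neighbour
    and the serving cell; the remaining arguments are the offsets of Table 8
    in dB.
    """
    return mn + ofn + ocn - hys > ms + ofs + ocs + off

# example: a neighbour HeNB measured 5 dB stronger than the serving macro cell
# satisfies the condition with a 2 dB hysteresis and no additional offsets
print(a3_entering_condition(mn=-85.0, ms=-90.0))   # True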

8.5.3 Observability and Controllability

8.5.3.1 Interference and Coverage

One major issue when deploying co-channel closed access HeNBs is so called ‘dead-zones’, i.e., zones where UEs not part of the CSG of a HeNB (non-CSG UEs), cannot access the macro network due to interference caused by the HeNBs. The HeNB maximum transmit power, including the reference signal power, and the HeNB connected UE power have been identified as possible parameters to control the trade-off between HeNB coverage and the size of such dead-zones. In order to investigate how and to what extent different settings on these parameters affect the performance and the coverage - dead-zone trade-off, simulations have been run, varying these parameters.

The simulations were run in a static Monte-Carlo simulator with hexagonal scenarios. In one of the cells, a grid of 10x10 houses was added, and HeNBs were placed in 10% of the houses. Non-CSG UEs were spread uniformly in the macro cells with no HeNBs. In the HeNB area, on average one CSG UE and one non-CSG UE were placed in each HeNB house in order to capture interference between macro and HeNBs and their connected UEs. Additional non-CSG users were spread out uniformly in the cell(s) containing the HeNBs, with a density resulting in, on average, the same amount of non-CSG UEs in all the macro cells. A CSG UE accessed the HeNB only when the HeNB Reference Signal Received Power (RSRP) was higher than the macro RSRP. Three different scenarios were considered. Two scenarios had a coverage-driven deployment, both with a site-to-site distance of 1732 metres; one had the HeNBs placed close to the nearest macro eNB (A), the other had the HeNBs placed far away from the nearest macro eNB (B), see Table 9. The third scenario was capacity driven, with the HeNBs close to the nearest macro eNB, see Table 9. For the different scenarios, the power settings were varied from a minimum setting up to a maximum setting in steps of one dB, according to Table 10. When varying one of the parameters, the other was set to its maximum value as default.

Table 9: Scenarios used in the interference and coverage controllability study

                              Coverage Driven A   Coverage Driven B   Capacity Driven A
Site-to-site distance (m)           1732                1732                 500
Macro-to-HeNB distance (m)           285                 705                  64


Table 10: Control parameter settings for the interference and coverage controllability study

                                    Start value   End (default) value
HeNB maximum transmit power            3 dBm            13 dBm
Maximum HeNB connected UE power       13 dBm            25 dBm

Figure 46 shows the results for varying the HeNB power for the Capacity Driven Scenario A. The left plot shows the ratio of CSG UEs within HeNB houses that can connect, i.e., detect the reference signal and get non-zero throughput in uplink and downlink, to HeNBs. It should be noted that CSG UEs that cannot connect to the HeNB may still be able to connect to the macro network. The right plot shows the ratio of non-CSG UEs within the HeNB houses that can connect to the macro eNB. The HeNB maximum transmit power given on the x axis is expressed in dB, as the difference from the maximum power setting. It can be seen that the number of CSG UEs that can connect to the HeNB increases and the number of non-CSG UEs that can connect to the macro network decreases with an increased HeNB power. The same trend is seen for all the scenarios. For the coverage driven scenarios, however, with the HeNB further away from the macro eNodeB, the ratio of CSG users that can connect to the HeNB is close to one while the ratio of non-CSG users that can connect to the macro eNB is significantly lower, i.e., the dead-zone is larger.

For the studied scenarios, with a medium non-CSG user density and only one CSG user per HeNB house, it can be seen that the maximum uplink power for HeNB connected UEs has a small impact on the number of non-CSG UEs that can connect to the macro network, but a large impact on the number of CSG UEs that can connect to the HeNB. The results for the Capacity Driven Scenario A are shown in Figure 47, where the left plot shows the ratio of CSG UEs within HeNB houses that can connect to HeNBs and the right plot shows the ratio of non-CSG UEs within the HeNB houses that can connect to the macro eNB. The HeNB connected UE maximum power given on the x axis is expressed in dB, as the difference from the maximum power setting. The same trend is seen for all the scenarios, also here with a higher ratio of CSG users that can connect to the HeNB and a lower ratio of non-CSG users that can connect to the macro eNB in the coverage driven scenarios.

Figure 46: Ratio of UEs in the HeNB houses with RS and UL and DL throughput versus HeNB maximum power for Capacity Driven Scenario A


Figure 47: Ratio of UEs in the HeNB houses with RS and UL and DL throughput versus HeNB connected UE maximum power for Capacity Driven Scenario A

The impact on achievable throughput for HeNB connected and macro connected UEs respectively was found to be rather small for both changes in downlink and uplink power. It could also be seen that the throughput was close to, or equal to, the requested throughput for all connected UEs.

Based on the results of the controllability study, it can be concluded that the HeNB power is a suitable parameter for controlling the trade-off between HeNB coverage and the size of the created dead-zones, since it has an effect on both the ratio of HeNB connected CSG UEs and the ratio of connected non-CSG UEs. The HeNB connected UE maximum power only affects the ratio of HeNB connected UEs and is therefore not a very effective control parameter; it should rather be set to its maximum value.

8.5.3.2 Handover

To assess how handover parameters affect the HeNB handover performance, simulations were run in a dynamic simulator, varying hysteresis and time-to-trigger (TTT), according to the values given in Table 11. A hexagonal scenario, with a site-to-site distance of 500 m and a row of houses containing HeNBs in one of the cells, was used. In the cells, UEs moved with a random start location and direction. One UE moved along the row of HeNBs and results were collected for this UE.

Table 11: Control parameter settings for the handover controllability study

Hysteresis (dB):   0, 3, 6, 9, 12
TTT (ms):          0, 100, 320, 640, 1280

Figure 48 shows the SINR and the serving cell for the considered UE when moving at a speed of 30 km/h. In the plots, each blue dot corresponds to a measured SINR value, while the red line shows the serving cell. Note that the same y axis is used for two completely different quantities, i.e., the SINR and the serving cell number. Cells 3 to 12 are the ten HeNBs and cells 0, 1 and 2 are macro cell sectors, each carrying a load of 5 UEs per sector. Cell 1 is the sector in which the additional, deterministic UE is located. While the upper plot illustrates the situation when using the minimum settings on hysteresis and TTT, i.e., 0 dB and 0 ms, respectively, the lower plot illustrates the situation using a hysteresis of 12 dB and a TTT of 640 ms.

It can be seen that for low handover parameter values, the UE is connected to a HeNB most of the time and handover is performed often. For high handover settings, it can happen that UEs do not hand over to some HeNBs at all, even when moving through their coverage area.


Figure 48: SINR and serving cell for a UE moving along the row of HeNBs at a speed of 30 km/h

Changing the handover parameters also has an impact on SINR, and subsequently there will also be an impact on the throughput that UEs experience. In the following, different combinations of TTT and hysteresis settings and their impact on the UE throughput are evaluated for varying UE speed, distance from the macro eNodeB and macro network load.

Simulation results show that the impact of the handover parameters depends on the UE speed as well as on the distance from the macro eNB and the macro eNB load. Figure 49 shows the throughput for the considered UE, using different TTT and hysteresis settings for the same simulation snapshot. It can be seen that a low TTT and a small hysteresis give higher throughput. The trends are more pronounced at higher UE speed. The same holds for a larger distance from the macro eNB, up to the point where the UE is so far away from the macro eNB that it is connected to HeNBs during most of the simulated time. At a speed of 30 km/h, varying the macro load considering 0, 1 and 5 UEs per sector respectively, no major difference in the trends could be seen. At a UE speed of 100 km/h, however, the impact of the macro load is more pronounced. At a macro load of 0 UEs per sector, throughput increases when the TTT is increased from 320 to 640 ms. This is due to the UE connecting to a macro cell with no load and better SINR conditions. The same increase could not be seen at a higher macro load.


Figure 49: Achieved throughput for a UE moving at 30 km/h, 165 m from the macro eNodeB with a macro load of 5 UEs/sector

Considering the results shown for UE throughput, both the TTT and the hysteresis should be set to their minimum values. However, other aspects should also be considered. Simulations have shown that the ratio of ping-pong handovers is high for the lowest hysteresis and TTT settings. Considering both throughput and ping-pong, it is found that the parameters should be set just high enough to avoid ping-pong, but otherwise as low as possible. If further aspects are also considered, such as call drops due to handover signalling failure in bad SINR conditions, there may be scenarios where it is best to use high handover parameter values in order to completely avoid handing over to HeNBs.

8.5.4 Algorithmic Approach

8.5.4.1 Interference and Coverage

For the scenarios considered in the controllability simulations, it could be seen that a large portion of the non-CSG UEs in the HeNB houses are unable to connect to the macro eNB, but that those which do get connected can achieve a satisfactory throughput. Decreasing the HeNB downlink power increases the ratio of non-CSG UEs in the HeNB houses that can connect to the macro eNB, but also decreases the ratio of CSG UEs that can connect to the HeNB. Meanwhile, a customer installing a HeNB would probably do so in order to achieve better throughput and/or coverage, and would want the HeNB to serve all CSG users within the house where it is installed. A HeNB coverage and interference self-optimisation algorithm should therefore in the first place aim at providing HeNB coverage, and in the second place minimise the size of the dead-zone for non-CSG users.

Based on the results given in the controllability study, and discussed above, a HeNB coverage and interference self-optimisation algorithm using the HeNB maximum transmit power as control parameter has been developed. In the algorithm, the HeNB power is set according to the following:

1. Set the power so that CSG UEs have coverage within a certain assumed house size.

2. Under the condition that CSG UEs have coverage within the assumed house area, set the power so that non-CSG UEs have macro coverage in the house area to as large extent as possible.

3. For home eNodeBs placed far away from any macro base stations, non-CSG UEs will not be able to detect the macro reference signal in the house even when the HeNB transmits with the lowest possible transmit power. In those cases the home eNodeB may use the maximum allowed power in favour of the CSG users.

A UE is assumed to have coverage when the achievable Signal to Interference and Noise Ratio (SINR) is higher than -6.5 dB. The power settings according to the principles given above are calculated based on the achievable SINR, using downlink RSRP and RSSI measurements performed in the HeNB and estimates of the maximum and minimum path loss from the HeNB within the assumed house size. The proposed method is similar to one of the methods proposed in [46], but considers closed access HeNBs and uses achievable SINR values as the basis for the power setting. Further, situations where the minimum


allowed power limits the possibility to decrease the size of the dead-zone are detected, so that the HeNB only needs to limit its power when this is sufficiently beneficial for the macro UEs.
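A minimal Python sketch of this power-setting principle is given below. The function, its parameters and their default values are illustrative assumptions; in particular, the interference seen by UEs in the house is approximated by the macro RSRP measured at the HeNB, whereas the study's actual computation additionally uses RSSI measurements and is not reproduced here.

```python
# Illustrative sketch of the HeNB power-setting principle; a single dominant macro
# interferer is assumed and noise is neglected (assumptions for this example only).

SINR_COVERAGE_DB = -6.5  # coverage threshold used in the study

def henb_tx_power_dbm(macro_rsrp_dbm, pl_house_max_db, pl_house_min_db,
                      p_min_dbm=-10.0, p_max_dbm=20.0):
    """macro_rsrp_dbm: macro signal measured at the HeNB (proxy for in-house interference);
    pl_house_max_db / pl_house_min_db: estimated max/min path loss within the assumed house."""
    # Step 1: lowest power giving a CSG UE at the far end of the house SINR >= -6.5 dB.
    p_csg = SINR_COVERAGE_DB + pl_house_max_db + macro_rsrp_dbm

    # Step 3 check: if a non-CSG UE close to the HeNB cannot reach SINR >= -6.5 dB towards
    # the macro cell even at minimum HeNB power, a dead-zone is unavoidable -> favour CSG users.
    macro_sinr_at_p_min = macro_rsrp_dbm - (p_min_dbm - pl_house_min_db)
    if macro_sinr_at_p_min < SINR_COVERAGE_DB:
        return p_max_dbm

    # Step 2: under the CSG-coverage constraint, keep the power as low as allowed so that
    # the dead-zone for non-CSG (macro) users stays as small as possible.
    return min(max(p_csg, p_min_dbm), p_max_dbm)
```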

8.5.4.2 Handover

No algorithms were considered for the handover sub-use case. However, various recommendations were made on requirements for such an algorithm. Firstly, there is a trade-off between dropped calls and throughput. Connecting to a HeNB will often increase throughput, but also increases the risk of dropped calls. A self-optimisation algorithm should maximise throughput, while avoiding dropped calls.

Using cell individual offset (CIO) parameters is an appropriate way to manage handovers between individual cell pairs (involving a HeNB), as hysteresis and TTT apply to handovers from the source cell towards all neighbours. The simulation results are based on the hysteresis, but the CIO acts as an offset added to the hysteresis. However, optimising the TTT to achieve a compromise between macro and HeNB performance is still an option. In addition, UE-specific handover optimisation would likely give further benefits; potentially this could use the standardised mobility states. As one of the objectives is to avoid dropped calls, it is important to avoid low SINR values, as these can cause call drops during handover. As a first step, the algorithm should set the CIO as low as possible, but not so low that ping-pong handovers occur. If dropped calls are still occurring, the CIO of the macro cell should be increased to a sufficiently high value such that the UE stays on the macro cell. For the HeNB to macro handover, the opposite applies: the CIO should be set to a low value, so that the handover happens as quickly as possible.
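These recommendations can be condensed into a simple per-cell-pair adaptation rule. The sketch below is only one possible reading of them; the thresholds, step size, rate definitions and the sign convention for the CIO are assumptions, not values from the study.

```python
# Illustrative sketch of a CIO adaptation rule for a (macro, HeNB) cell pair.
# Convention assumed here: a higher CIO applied on the macro side delays or prevents
# handover to the HeNB, a lower CIO lets the UE hand over early for better throughput.

def update_macro_cio(cio_db, ping_pong_rate, drop_rate,
                     pp_target=0.05, drop_target=0.01,
                     step_db=1.0, cio_min_db=-10.0, cio_max_db=10.0):
    if drop_rate > drop_target:
        return cio_max_db                         # keep the UE on the macro cell entirely
    if ping_pong_rate > pp_target:
        return min(cio_db + step_db, cio_max_db)  # back off just enough to stop ping-pong
    return max(cio_db - step_db, cio_min_db)      # otherwise: as low as possible
```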

8.5.5 Algorithm Results

8.5.5.1 Interference and Coverage

In Figure 50, simulation results are shown for running the algorithm in an iterative manner, starting with a medium HeNB power for all HeNBs and converging to the optimised, individual HeNB settings by changing the power in steps of 0.5 dB. Note that without a self-optimisation algorithm the HeNB power would likely be set, by default, to the maximum allowed value for all HeNBs, and not to a medium value. The left plot shows the ratio of non-CSG UEs in HeNB houses that can connect to the macro network and the right plot shows the ratio of CSG UEs in the HeNB houses that can connect to HeNBs. The lines represent the results for three different scenarios, given in Table 9.

Figure 50: Ratio of UEs in the HeNB houses with RS and UL and DL throughput, versus the number of algorithm iterations for three different scenarios

It can be seen that for the scenario with HeNBs far away from the macro eNB, the ratio of non-CSG UEs that can connect to the macro eNB is slightly increased compared to the starting power setting, without significantly decreasing the ratio of CSG users that can connect to the HeNB. For the two scenarios with the HeNBs closer to the macro eNB, the desired power found by the algorithm is higher than the maximum allowed power for all HeNBs, resulting in all HeNBs using the maximum allowed power. This means that larger dead-zones are accepted in order to provide better coverage for CSG UEs.

8.5.6 Conclusions

The trade-off between the HeNB coverage and performance and the interference caused in the macro network is an important issue to tackle in order for closed access HeNBs to be feasible. In particular, the


HeNB coverage and the size of so-called dead-zones need to be attended to. A suitable parameter to control this trade-off is the HeNB maximum transmit power. The second control parameter evaluated, the HeNB connected UE maximum transmit power, was found not to be suitable for controlling the size of the dead-zone.

The proposed algorithm firstly provides HeNB coverage within a house of a certain assumed, or provided, house size, and secondly minimises the size of the dead-zone caused in the macro network. The assessment of this algorithm shows that for HeNBs not located too close to the macro eNB, it is possible to set the home eNodeB maximum transmit power lower than the maximum allowed power and increase the area in the house where non-CSG users can connect to a macro eNB, while still providing HeNB coverage within the house. In a further development of the algorithm, priorities between HeNB coverage and macro coverage could be used as input in order to take operator policies into account. Further, additional control parameters could be evaluated, such as the CSG UE connection margin, the HeNB frequency usage, and the power settings on different frequency bands. The results of the performed study would be useful input to such a study, as the same trends can be expected when using only parts of the frequency band.

An area with a high density of open access HeNBs is challenging for handovers, especially in the case of high-speed UEs. The handover parameter settings have a large impact on the number of handovers performed when moving through an area with high HeNB density. Further, the SINR achieved, and hence also the throughput, is highly dependent on the handover parameters. The impact of the settings varies significantly with the distance from the macro eNB, the UE speed, and the macro eNB load. When optimising the HeNB handover for rapidly moving UEs, two approaches are possible. Either the handover parameters could be set as low as possible in order to minimise the time that the UE is connected to the wrong cell, or they could be increased in order to prevent the UE from handing over to the HeNB at all. When setting the handover parameters low, it is important to keep them high enough to avoid frequent ping-pong handovers.

The study performed on the HO optimisation controllability considers a high HeNB density and moving UEs. In further studies, it would also be of interest to consider static UEs, in particular the effects on the number of ping-pong handovers, and a lower HeNB density, where subsequent handovers between the macro eNB and HeNBs may occur. Also, as call drops are likely to occur due to failure of handover signalling at low SINR, this would be an interesting metric to consider together with the achievable throughput. Based on the results from the controllability studies, self-organising functionality controlling the HeNB handover could be developed.

8.6 Cell Outage Management

The following section gives an introduction into cell outage management, with the focus on cell outage compensation for LTE.

8.6.1 Description of the Use Case

Outages in mobile cellular networks come in different degrees of significance and may be caused by different problems. The failure of a site, a sector or a physical channel/signal can be due to, e.g., hardware or software failures (radio board failure, channel processing implementation error, etc.), external failures such as power supply or network connectivity failures, or even erroneous configuration. While some cell outage cases are detected by the Operations & Maintenance (O&M) system through performance counters and/or alarms, others may not be detected for hours or even days. It is often through relatively long-term performance analyses and/or subscriber complaints that such outages are detected. Once detected, the underlying problems must be repaired, while significant coverage and capacity degradations may be experienced until the repair is completed. These largely manual operations are rather suboptimal in terms of the involved response time and the incurred performance and, potentially, revenue loss.

Cell Outage Management (COM) comprises mechanisms for both Cell Outage Detection (COD) and Cell Outage Compensation (COC), and is an integral part of the self-organizing network concept in E-UTRAN [36][37][38], with the objective to enhance the network robustness and resilience. Figure 51 depicts the different elements and the workflow of COM in future cellular networks. The depicted example is characterized by a site outage, whose pre-outage service area is indicated in red. A variety of measurements, e.g., alarms, counters or key performance indicators, are gathered by the user terminals, the eNodeBs and/or the operations and maintenance (O&M) center, and fed to the COM algorithms (COD and COC). Fed with these measurements, the COD function automatically identifies the occurrence and scope of an outage, and triggers both the COC function and the operator's maintenance department for possible manual repair. The COC function translates its measurement input into compensation measures in order to automatically mitigate the incurred coverage degradations, by an


appropriate adaptation of one or more control parameters (e.g., antenna tilt, power settings) in surrounding cells. Such compensation is governed by the operator-formulated policy regarding the trade-off of local and regional performance effects, and is characterized by an iterative process of radio parameter adjustment and evaluation of the performance impact until convergence is reached. Important feedback is provided by the so-called X-map estimation function (see Section 8.7), which processes measurements including location information in order to generate, e.g., coverage or performance maps. In the following we will concentrate on the COC function.

Figure 51: Overview of Cell Outage Management. This overview diagram contains examples that correspond to the approach employed in the following sections

8.6.2 KPIs and Control Parameters

In principle, all radio parameters that somehow affect coverage and the spatial aspects of capacity and service quality are potentially relevant control parameters from a COC perspective. Promising control parameters include:

The DL power split between the common channel (incl. the Reference Signal) and the Physical Downlink Shared Channel (carrying user data) – By increasing the common channel power in cells surrounding an outage area, the service area of those cells can potentially be extended to cover all or part of the outage area. Since less power is available for actual user data transmissions, this coverage extension comes at a cost of a reduced traffic handling capacity and hence service quality experience, which stresses the involved trade-off.

The target received power density of Physical Uplink Shared Channel (carrying user data) – Uplink (UL) transmit powers are typically derived from a target received power density P0, in combination with a path loss compensation factor. A reduction of P0 lowers UL inter-cell interference levels and hence increases coverage. This coverage gain comes at a cost of reduced channel rates, increased cell loads and hence lower per-user throughputs.

Antenna parameters – Modern antenna design allows electrical adaptation of both the orientation of the main antenna lobe and the antenna pattern, e.g., via remote electrical tilt or beam forming techniques. As, e.g., the antenna tilt is known to be a highly responsive lever when it comes to shaping the cell footprint, it is a promising candidate for use in COC.

The continuous collection of measurements and the analysis of radio parameters, counters, KPIs, statistics, alarms and timers are an indispensable precondition for COM and are provided by the Performance Estimator. These measurements may be obtained from various sources, including the eNodeBs (cell loads, radio link/handover failure statistics, interference levels), the UEs (e.g., reference signal received power levels, failure reports) and the O&M system providing quality-based KPIs, e.g., the 10th UL (DL)


throughput percentile QUL (QDL). A key challenge is to perform real-time coverage estimation as part of the COC feedback loop. An option, discussed in more detail in Section 8.7, is to generate coverage maps based on UE measurement reports in combination with positioning information (the usage of this approach is, however, not considered below).

Network performance comprises multiple conflicting aspects. At the highest abstraction level, a distinction may be made between the factors impacting user experience, e.g., coverage, capacity and quality. A distinguishing characteristic of different network operators is how they choose to trade investment/operational costs for network performance, as well as what their desired trade-off is between the different performance aspects. This is captured by the operator policy. An example of an operator policy that is used below for the specific setting of an incurred site outage is to maximize coverage given a minimum constraint on the 10th UL and DL throughput percentiles, i.e., QUL ≥ QT,UL and QDL ≥ QT,DL. An alternative objective may be to optimise some appropriately weighted sum of multiple performance metrics. An operator policy may depend on location, time as well as on the operational state of a cell, e.g., whether it is in ‘normal’ state or involved in compensation actions for an outage in its vicinity.

8.6.3 Observability and Controllability

An extensive controllability study has been reported in [39], which assesses the compensation potential of several control parameters in different scenarios. Comparing the different scenarios, we note that the outage-induced performance degradations and also the (potential) compensation gains are larger for more heavily loaded scenarios, as long as service quality is of non-negligible relevance to the operator. This is due to the fact that for lower traffic loads, cells surrounding an outage area have more resources available to serve additional traffic. Furthermore, it is observed that the tilt and P0 are the most effective control parameters when the policy is primarily coverage-oriented, while optimisation of P0 is most effective when the policy is predominantly quality-oriented (which implicitly indicates that the uplink is typically the bottleneck). The results further showed that there is a trade-off between coverage and quality, i.e., the user-observed quality metrics QDL and QUL decrease as the coverage increases; this needs to be handled by the operator policy, as mentioned in Section 8.6.2.

8.6.4 Algorithmic Approach

In the algorithm development, the basic idea is to adjust UL power control parameters, e.g., P0, in the vicinity of the eNodeB or sectors in outage, in order to decrease the UL inter-cell interference and thereby to increase the Signal to Interference and Noise Ratio (SINR) for the UEs in the outage area which are power-limited. By appropriately decreasing P0, the UL coverage is thus improved and the outage area can decrease in size. On the other hand, decreasing P0 reduces the UL quality, due to the decreased SINRs and the increased number of served users. Therefore, each cell in the vicinity of the outage cell(s) continuously measures the DL and UL quality indicators, e.g., the 10th DL and UL throughput percentiles, and effectively minimizes P0 such that the UL and DL quality indicators exceed the respective outage quality targets QT,DL and QT,UL.

The basic approach works as follows and is executed for each compensating cell. The UL quality and DL quality in each cell are continuously measured, whereby P0 is decreased under the constraint that

QDL ≥ QT,DL, QUL ≥ QT,UL and P0 ≥ P0,min,

where P0,min is the minimum tolerable P0 (the estimation of P0,min is given below).

Clearly, there are several ways of setting P0 such that the above specified constraint is satisfied. In general, P0 is computed as a function of the observed downlink and uplink quality over a time window, QDL and QUL respectively, the outage target downlink and uplink quality QT,DL and QT,UL, the current and historical P0 values, and an allowed P0 value range [P0,min, P0,max]. The particular solution employed hereafter is as follows. When both the DL and UL throughput are above their respective outage quality targets, P0 is reduced in order to increase coverage. Otherwise, i.e., if the DL or UL throughput is below its corresponding outage quality target, P0 is increased. This is summarised in Figure 52, where ΔP0 is the step by which P0 is decreased or increased. Further, P0,max is the maximum P0 that is allowed.

The parameter P0,min is set such that the UL SINR is sufficient for establishing a basic connection with the network. In general, P0,min is determined such that the UL SINR is higher than or equal to a threshold which supports the lowest Modulation and Coding Scheme (MCS). Let SINRmin be the minimum tolerable UL SINR, based on, e.g., the lowest MCS.


Figure 52: The UL power control tuning algorithm

The received signal power S at the base station must thus satisfy

S ≥ SINRmin · (Imax + N) (linear scale),

where Imax is the maximum UL inter-cell interference that can be observed and N is the UL noise power over the same bandwidth. Assuming full path loss compensation, the received power density equals P0, so the above gives

P0,min = SINRmin · (Imax + N).

The parameter Imax can be obtained using different methods, e.g., based on historic measurements of inter-cell interference, simulations, and/or predictions of future traffic variations. Further, Imax can include a margin for unforeseen increases in inter-cell interference. The solution employed here is simply to take the maximum UL inter-cell interference observed during the last X minutes or even hours.
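A compact Python sketch of the resulting P0 tuning loop is given below. The inclusion of the noise term, the window handling and all default values (maximum P0, step size) are assumptions made purely for illustration, not parameters taken from the study.

```python
import math

# Illustrative sketch of the COC P0 tuning of one compensating cell (dBm/dB domain).

def p0_min_dbm(sinr_min_db, i_max_dbm, noise_dbm):
    """Lowest tolerable P0: the received power density must still support the lowest MCS."""
    i_plus_n_dbm = 10 * math.log10(10 ** (i_max_dbm / 10) + 10 ** (noise_dbm / 10))
    return sinr_min_db + i_plus_n_dbm

def update_p0(p0_dbm, q_dl, q_ul, q_t_dl, q_t_ul, p0_floor_dbm,
              p0_max_dbm=-95.0, delta_db=0.5):
    """One COC iteration (cf. Figure 52): lower P0 while both quality targets are met,
    otherwise raise it again."""
    if q_dl > q_t_dl and q_ul > q_t_ul:
        return max(p0_floor_dbm, p0_dbm - delta_db)   # extend UL coverage into the outage area
    return min(p0_max_dbm, p0_dbm + delta_db)         # restore quality in the compensating cell
```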

Alternatively, the antenna tilt of the neighboring cells can be adjusted in order to increase the coverage around the base station that has failed. By reducing the downtilt (i.e., uptilting), the coverage can be improved at the cost of a lower quality in the neighboring cells. This is due to the reduced spectral efficiency (higher path loss and higher inter-cell interference) as well as the greater number of users now being served. An approach similar to the P0 tuning can be employed, whereby the quality is continuously measured and the downtilt is reduced such that high inter-cell interference is avoided and the quality meets the set requirements.

8.6.5 Results

The approach given above has been evaluated using a Monte Carlo-based LTE network simulator with a hexagonal layout of 19×3 cells. A capacity-driven network layout is assumed with an inter-site distance of 500 m. A coverage-driven layout with an inter-site distance of 2200 m has also been considered in the evaluations. Key system parameters are listed in Table 12 (largely based on [40]).

Table 12: Key system parameters

Capacity-driven layout Coverage-driven layout

Inter-site distance 500 m 2200 m

Antenna downtilt 15° 5°

System bandwidth 10 MHz

Max output power Base station = 46 dBm, UE = 25 dBm

Path loss 128.1 + 37.6 log10 r, with r in km

Shadowing standard deviation = 8 dB, inter-site correlation = 0.5, decorrelation distance = inter-site distance / 15

Antenna model 3GPP 3D model

Noise level -199 dBW/Hz in DL, -195 dBW/Hz in UL

Service Generic elastic data service with a requested throughput of 1 Mb/s (DL) & 250 kb/s (UL)

The data traffic model is characterized by a generic elastic data service with a requested DL and UL throughput of 1 Mb/s and 250 kb/s, respectively. A rate-fair scheduler is used to distribute resources in the UL and DL. The outage quality targets are set to QT,UL = 64 kb/s and QT,DL = 128 kb/s, respectively.

(Figure 52) if QDL(k) > QT,DL AND QUL(k) > QT,UL then
    P0(k) = max(P0,min, P0(k-1) - ΔP0)
else
    P0(k) = min(P0,max, P0(k-1) + ΔP0)


In the following we show the results for an inter-site distance of 500 m (capacity-driven layout), where P0 is tuned in order to alleviate the coverage holes caused by an outage. The results during high load are given in Figure 53 and Figure 54. An outage occurs at iteration 50, resulting in a reduction of the number of served UEs and the creation of coverage holes, as shown in Figure 54. Further, the DL and UL quality is reduced, since users previously served by the failed cell now perform cell reselection to neighboring cells, which results in a further quality degradation, see Figure 53. Compensation starts at iteration 100, whereby P0 is gradually decreased, resulting in a reduction of mainly the UL quality (see Figure 53) and in an increase in coverage and in the number of served users (see Figure 54).

Figure 53: Temporal behaviour of P0 tuning during high load, where the outage occurs at iteration 50, followed by the initiation of compensation at iteration 100: (a) P0 of the compensating cells, where max, mean and min are computed over the compensating cells; (b) mean of the DL and UL quality metrics computed over the compensating cells


Figure 54: P0 tuning during high load: (a) outage situation with no compensation (iteration 50-99), and (b) snapshot of the outage situation with largely converged compensation (iteration > 200). Map legend: failed cell, compensating cells, operational cells, covered area, coverage hole, compensated coverage hole

Simulation studies show that automatic compensation using P0 is feasible in a capacity-driven layout. The compensation potential in terms of coverage improves as the load decreases and can reach 85% user recovery during low load. This comes at a cost, namely a reduction of quality in the neighboring cells; the quality degradation is most visible for high and medium load, where the quality can drop to as low as 50% of the quality experienced in a post-outage setting without any compensation. The approach has also been evaluated in a coverage-driven layout with a 2200 m inter-site distance; the degree of compensation is, however, not as large as in the capacity-driven layout. The results from tilt tuning show that similar or better performance can be expected, depending on the load situation. In particular, a higher compensation potential exists during medium and low load when using tilt tuning.

8.6.6 Conclusions

In conclusion, an overview of cell outage management has been presented as a self-organising functionality in LTE networks, followed by specific algorithmic solutions and evaluations to cell outage compensation. In the presented algorithm, the parameters of cells neighboring a failed cell/site can be automatically adjusted such that the coverage is maximized given constraints on quality defined in terms of cell-edge user throughout. Depending on the considered scenario in terms of, e.g., traffic loading, simulation results show that up to 85% of the users can be recovered at the cost of an acceptably reduced quality in the compensating cells.

8.7 X-Map Estimation

8.7.1 Description of the Use Case

The X-map estimation function is not a typical SON use case, but rather a support function for different SON use cases. It may be used throughout a number of optimisation activities, especially when addressing coverage. Thus, the structure of the X-map estimation section is different compared to the sections about the other SON use cases.

An X-map is a geographic map with overlay performance information, where X can stand for different types of performance information. The goal of the X-map estimation function is to use the user equipments (UEs) in the network to report the observed service quality along with the locations where the measurements are taken. These UE reports can be used by the X-map estimation function, which continuously monitors the network and estimates the spatial network performance, e.g., coverage and throughput. The advantages of the X-map function are that a larger sample of UE locations is available, the costs involved in drive/walk tests are reduced, and the network state is continuously tracked as the network and its environment evolve. Thus, the difficulties with drive/walk tests can be overcome, namely that drive/walk tests cover only a limited (outdoor) part of the network, due to access restrictions and due to the time and costs involved, and that only a snapshot in time of the conditions in the field is captured.

The X-map estimation function enables a network operator to continuously monitor the network and optimise the prediction tools. These prediction tools produce information about the radio coverage which


is essential for network planning, network optimisation, and radio resource management (RRM) parameter optimisation. The information embedded in an X-map may also be used by a Self-Organising Network (SON), especially in functionalities that address the optimisation of coverage, capacity, and quality, as is the case, e.g., in the Cell Outage Compensation use case (see Section 8.6).

The accuracy of an X-map depends on a multitude of factors. In this section, the influence of the applied positioning method (see Section 8.7.3.1), of the UE measurement accuracy (see Section 8.7.3.2) as well as of the number of measurements taken (see Section 8.7.3.3) on the accuracy of the resulting X-maps is analysed.

8.7.2 Algorithmic Approach

8.7.2.1 X-Map Estimation Function

A UE Location and Measurement Unit (LMU), as shown in Figure 55, gathers the measurement data (e.g., the Reference Signal Received Power (RSRP) or the Reference Signal Received Quality (RSRQ)), the location data from the UEs in the network as well as the time when the measurements have been taken. In addition to the LMU, a RAN Measurement Unit (RMU) is introduced which collects the measurement data from the Radio Access Network (RAN), e.g., interference and load. The measurement data from both Measurement Units together with the positions of the UEs are then delivered to the X-map estimation function which uses this data directly for updating the corresponding bins in the X-map (approach 1) or for calibrating a propagation model (approach 2).

Figure 55: Overview of the X-map estimation approach

As shown in Figure 55, approach 1 uses the UE and RAN measurement data together with the estimated positions to update the corresponding bins of the X-map in which the UEs are located. Essentially, each time a UE report becomes available, the corresponding bin of the X-map is updated. Various update mechanisms are possible, e.g., replacing the current value in the bin or taking previous measurements into account by using, e.g., an exponential filter approach. In the current implementation, we form the average of the value in the X-map and the UE reports available at a given time.
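A minimal Python sketch of such an approach-1 bin update follows; the grid handling, bin size and running-average bookkeeping are assumptions of this example rather than details taken from the implementation.

```python
# Illustrative sketch of the approach-1 bin update: each UE report is folded into the
# bin that contains its (estimated) position by means of a running average.

class XMap:
    def __init__(self, bin_size_m=50.0):
        self.bin_size = bin_size_m
        self.value = {}   # (ix, iy) -> current RSRP estimate of the bin (dBm)
        self.count = {}   # (ix, iy) -> number of reports folded into the bin

    def _bin(self, x_m, y_m):
        return (int(x_m // self.bin_size), int(y_m // self.bin_size))

    def update(self, x_m, y_m, rsrp_dbm):
        # Running average over all reports in the bin; an exponential filter
        # (new = a * old + (1 - a) * report) would be a possible alternative.
        b = self._bin(x_m, y_m)
        n = self.count.get(b, 0)
        old = self.value.get(b, rsrp_dbm)
        self.value[b] = (n * old + rsrp_dbm) / (n + 1)
        self.count[b] = n + 1
```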

Approach 2 uses prediction data created by means of a propagation model which is adapted to the environment of the corresponding eNodeB in the propagation model calibration process. In the current implementation, the Okumura-Hata model [1] is used and correction factors are calculated for the considered land use classes based on the collected measurement data in the calibration process. By this calibration the accuracy of the propagation model can be improved.
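A minimal sketch of the approach-2 calibration step is given below. Deriving the per-land-use-class correction as the mean measured-minus-predicted error is an assumption of this example, as is the callable interface standing in for the (Okumura-Hata) prediction.

```python
from collections import defaultdict

# Illustrative sketch of the approach-2 calibration: one correction offset per land-use
# class, derived from the collected UE reports and applied on top of the path-loss model.

def calibrate(reports, predict_rsrp_dbm, land_use_class):
    """reports: iterable of (x, y, measured_rsrp_dbm);
    predict_rsrp_dbm(x, y): uncalibrated model prediction (e.g. Okumura-Hata based);
    land_use_class(x, y): land-use class of the pixel containing (x, y)."""
    errors = defaultdict(list)
    for x, y, measured in reports:
        errors[land_use_class(x, y)].append(measured - predict_rsrp_dbm(x, y))
    return {cls: sum(e) / len(e) for cls, e in errors.items()}   # correction per class

def calibrated_rsrp_dbm(x, y, predict_rsrp_dbm, land_use_class, corrections):
    return predict_rsrp_dbm(x, y) + corrections.get(land_use_class(x, y), 0.0)
```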

8.7.2.2 Position Error Modelling

In order to consider different positioning methods, positioning errors are calculated which are then added to the true positions, thus, obtaining estimated positions. For calculating the positioning errors, the approach described in [23] is used. The modelled position error depends on the geometry between transmitter and receiver, on the number of measured signals, and on the standard deviation of the measurement errors. In the simulations, the Global Positioning System (GPS) and Observed Time


Difference of Arrival (OTDOA) are considered which are foreseen for LTE [1]. Furthermore, the combination of GPS and OTDOA is considered in the following way. First, positioning is attempted using GPS. If GPS cannot provide a valid position estimate, i.e., if less than four satellites are visible, then OTDOA is used as positioning method.

The visible satellites for each user position are determined with the help of a ray tracer and satellite orbits which are available for a specific date and time. A satellite is assumed to be visible if the direct path between the satellite and the UE exists, i.e., reflections are not considered. An eNodeB is assumed to be detectable by the UE if the RSRP is greater than or equal to -124 dBm and if the SINR is greater than or equal to -6 dB [24]. In order to increase the hearability of the eNodeBs when using OTDOA, a special positioning reference signal (PRS) with a frequency reuse of six as well as low-interference subframes (LIS) are introduced [25]. This is modelled by excluding the six strongest cells from the interference calculation which is considered for all presented OTDOA results.
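The fallback between the two positioning methods and the detectability rule can be summarised in a short sketch; the error-model callables below are placeholders standing in for the geometry-dependent model of [23].

```python
# Illustrative sketch of the combined positioning and of the eNodeB detectability rule.

def enb_detectable(rsrp_dbm, sinr_db):
    """Detectability criterion used in the study (values from [24])."""
    return rsrp_dbm >= -124.0 and sinr_db >= -6.0

def estimate_position(true_pos, n_visible_satellites, gps_error, otdoa_error):
    """GPS is tried first; OTDOA is used whenever fewer than four satellites are visible."""
    if n_visible_satellites >= 4:
        return gps_error(true_pos)
    return otdoa_error(true_pos)
```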

8.7.2.3 RSRP Modelling

The measured RSRP values are filtered by the Layer 1 filter (L1 filter) and the Layer 3 filter (L3 filter) as can be seen in Figure 56.

Figure 56: RSRP modelling

The design of the L1 filter is not standardised. Since the used scenario (see [28] for more information) provides an RSRP value every 100 ms, a linear averaging of two RSRP values is chosen to get a value Mn every 200 ms. This value is then put into the L3 filter which is defined according to [15]. The filter coefficient is chosen to be 4 (default value, see [15]). The output of the L3 filter is mapped to the measurement report which has a resolution of 1 dB in the range from -140 dBm to -44 dBm [24]. This measurement report is then sent to the eNodeB every 200 ms [24].
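The modelled measurement chain can be sketched as follows. Whether the L1 averaging is performed in the linear power domain or directly in dB is not specified in the text, so the sketch assumes the linear power domain; the L3 recursion is written in the standard form F_n = (1 - a)·F_{n-1} + a·M_n with a = 1/2^(k/4), which gives a = 0.5 for the filter coefficient k = 4 used here.

```python
import math

# Illustrative sketch of the modelled RSRP measurement chain (assumptions noted above).

def l1_filter(rsrp_samples_dbm):
    """Average two consecutive 100 ms samples -> one L1 output M_n every 200 ms."""
    lin = [10 ** (v / 10) for v in rsrp_samples_dbm]
    return [10 * math.log10((lin[i] + lin[i + 1]) / 2) for i in range(0, len(lin) - 1, 2)]

def l3_filter(m_values_dbm, k=4):
    a = 1 / 2 ** (k / 4)
    f, out = m_values_dbm[0], []
    for m in m_values_dbm:
        f = (1 - a) * f + a * m
        out.append(f)
    return out

def to_report(value_dbm):
    """Quantise to the 1 dB measurement-report resolution in the range -140 ... -44 dBm."""
    return int(round(min(max(value_dbm, -140.0), -44.0)))
```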

It is assumed that the UE measurements can be affected by measurement inaccuracies due to, e.g., the electronic components in the UE. In order to consider possible UE measurement inaccuracies, four different UE classes are introduced. The UE inaccuracies are modelled with the help of a normal distribution N(μ, σ) with a mean of μ and a standard deviation of σ. The corresponding parameters of the four different UE classes are summarised in Table 13. The values are chosen according to [27], where a root mean square error (rms) of 3 dB is chosen for modelling the UE measurement inaccuracies.

Table 13: UE classes for modelling the UE inaccuracies

UE class 1 N(0, 0) (real value)

UE class 2 N(0, 0.5)

UE class 3 N(0, 1)

UE class 4 N(2, 0)

8.7.3 Results

8.7.3.1 Results for X-maps Using Different Positioning Methods

In Figure 57, two examples of X-maps showing the received signal level in dBm for a specific site of the realistic scenario, described in [28], can be seen. On the left side, approach 1 is used to create the X-map, whereas the X-map on the right side is based on approach 2. For both X-maps, the real positions provided by the scenario are applied.


Figure 57: X-maps based on approach 1 (left) and based on approach 2 (right) using the real positions

In order to determine the accuracy of the X-maps, every pixel in the X-map with an available RSRP value is compared to the provided prediction of the corresponding eNodeB (i.e., the true value). This means that in the case of approach 1 only those pixels are considered which are covered by UEs, whereas in the case of approach 2 all pixels in the considered area are used for determining the accuracy (including more pixels than used for the calibration process). The advantage of approach 1 is that the resulting X-map reflects the real situation in the network, since the measurement data is used directly for creating the X-map. However, the disadvantage is that the X-map provides RSRP values only for those pixels which are covered by UEs (in Figure 57, dark red means that no RSRP value is available). In contrast, every pixel in the area has an RSRP value when using approach 2, i.e., the coverage of the X-map is greater compared to approach 1. The accuracy, however, is lower than for the X-map based on approach 1, with a mean error of 2.1 dB and a standard deviation of 6.6 dB, as shown in Table 14.

Table 14: Mean error (mean), standard deviation (std), and root mean square error (rms) for different X-Maps using the real positions

approach 1 approach 2

mean std rms mean std rms

0.0 0.2 0.2 2.1 6.6 7.0

The accuracy for approach 2 is in the range of what can be achieved with calibrated propagation models for small macro and micro cells. In [1], about 7 dB up to 9 dB is mentioned as the average standard deviation of the prediction error of the presented models. The absolute values of the mean errors typically range from about 0 dB up to 6 dB. For the corresponding analyses, building data and measurement data at 947 MHz were available for an area in downtown Munich.
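The per-pixel comparison behind these error figures can be summarised as follows; a minimal sketch in which the dictionary-based pixel representation is an assumption made for illustration.

```python
import math

# Illustrative sketch of the X-map accuracy evaluation: mean error, standard deviation
# and rms of the per-pixel difference between the X-map and the reference prediction,
# computed only over pixels for which the X-map provides an RSRP value.

def xmap_accuracy(xmap_dbm, reference_dbm):
    errors = [xmap_dbm[p] - reference_dbm[p] for p in xmap_dbm]
    n = len(errors)
    mean = sum(errors) / n
    std = math.sqrt(sum((e - mean) ** 2 for e in errors) / n)
    rms = math.sqrt(sum(e * e for e in errors) / n)
    return mean, std, rms   # e.g. approach 1 with real positions: 0.0, 0.2, 0.2 dB
```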

Figure 58 shows X-maps of the received signal level in dBm based on approach 1, using (a) GPS, (b) OTDOA, and (c) a combination of GPS and OTDOA as positioning method. The “bulk” of measurement data roughly in the middle of the X-map using OTDOA as positioning method is due to high position errors resulting from the hearability problem, since these positions are near the serving eNodeB. However, using OTDOA as positioning method increases the number of valid position estimates in the right-hand area of the corresponding X-maps.


Figure 58: X-maps based on measurement data using (a) GPS, (b) OTDOA, and (c) a combination of GPS and OTDOA as positioning method.

The right-hand area is characterised by dense urban canyons, where the probability of seeing at least four satellites, and thus the probability of obtaining a valid GPS position, is low.


As can be seen in Table 15, the rms values of the X-map based on approach 1 are increased by approximately 2 dB when using GPS or the combination of GPS and OTDOA as positioning method instead of the real positions (see Table 14). The accuracy is decreased even more when OTDOA is used as positioning method (rms of 4.6 dB). The reason for this is that the measurement data is used directly for creating the X-map; thus, the position errors are directly transferred to the X-map, too. For X-maps based on approach 2, the standard deviations for the different positioning methods are very similar to the one obtained using the real positions (see Table 14), because the measurement data is collected over the whole considered area and over a specific time interval for determining the correction factors. Thus, the probability is high that the errors compensate each other. Consequently, the errors have a smaller impact on the accuracy of the resulting X-map. However, the rms is increased by approximately 1 dB when using a less accurate positioning method such as OTDOA.

Table 15: Mean error (mean), standard deviation (std), and root mean square error (rms) of X-maps using different positioning methods

approach 1 approach 2

mean std rms mean std rms

GPS 0.1 2.3 2.3 2.6 6.6 7.1

OTDOA 0.0 4.6 4.6 4.6 6.7 8.1

GPS + OTDOA 0.0 2.3 2.3 2.9 6.7 7.3

8.7.3.2 Results for X-maps Using Different UE Inaccuracies

In Table 16, the results for different UE classes (for the definition see Table 13) with measurement report mapping are summarised, using the real positions provided by the scenario. It can be seen that the L1 and L3 filtering (UE class 1) decreases the accuracy of the X-map using approach 1 (compared to an rms of 0.2 dB without filtering, see Table 14), whereas the L1 and L3 filtering has no impact on the accuracy of the X-map using approach 2 (compared to an rms of 7.0 dB without filtering, see Table 14). The reason for this is that approach 2 collects the measurement data over the whole considered area and over a specific time interval for determining the correction factors. Thus, the probability is high that the inaccuracies due to the filtering compensate each other. In contrast, approach 1 uses the measurement data directly for deriving the X-map, which means that the inaccuracies due to the filtering are transferred directly to the X-map, too. Furthermore, Table 16 shows that UE class 2 and UE class 3 do not have a high impact on the accuracy of the X-maps using approach 1 or approach 2 compared to UE class 1, where no inaccuracies are considered; the rms values are almost equal (differences of at most 0.1 dB). However, UE class 4, where an offset of 2 dB is considered, produces a mean error which is 2 dB higher than the mean error for UE class 1 for both approaches.

Table 16: Mean error (mean), standard deviation (std), and root mean square error (rms) for different UE classes with measurement report mapping

approach 1 approach 2

mean std rms mean std rms

UE class 1 0.1 0.7 0.7 2.1 6.7 7.0

UE class 2 0.0 0.6 0.6 2.1 6.7 7.0

UE class 3 0.1 0.6 0.6 2.1 6.6 7.0

UE class 4 2.1 0.7 2.2 4.1 6.7 7.8

8.7.3.3 Results for X-maps Using Different Simulation Time Steps

In Table 17, the results for different simulation time steps are shown for approach 2. It is assumed that a new measurement value is available every simulation time step. If the simulation time step is higher, less measurement data is available for determining the X-map. It can be seen that the accuracy of approach 2


is reduced for higher simulation time steps. The rms is increased from 7 dB for a simulation time step of 5 s to 7.6 dB for a simulation time step of 50 s. However, the accuracy of the X-map for a simulation time step of 50 s with a mean error of 3.9 dB and a standard deviation of 7.0 dB is still acceptable.

Table 17: Mean error (mean), standard deviation (std), and root mean square error (rms) for different simulation time steps using approach 2

Simulation time step (s)    Number of measurements    mean    std    rms

0.2 100020 2.1 6.6 7.0

1.0 20020 2.2 6.7 7.0

5.0 4020 2.2 6.6 7.0

10.0 2020 2.4 6.7 7.1

50.0 420 3.9 7.0 7.6

Figure 59 shows X-maps of the received signal level in dBm based on approach 1, using a simulation time step of (a) 200 ms, (b) 5 s, and (c) 50 s. As can be seen in Figure 59, the simulation time step has a high impact on the “coverage” of approach 1. A larger simulation time step and, thus, less measurement data results in fewer pixels with a valid RSRP value in the resulting X-map (in Figure 59, dark red means that no RSRP value is available). Using a simulation time step of 200 ms, 12 % of the whole area is covered with a valid RSRP value. This number is reduced to 6 % for a simulation time step of 5 s and further reduced to 1 % for a simulation time step of 50 s.


Figure 59: X-maps based on approach 1 using the real position and an update interval of (a) 200 ms, (b) 5 s, and (c) 50 s

8.7.4 Conclusions

Two approaches for creating geographic maps with overlay performance information, referred to as X-maps, have been introduced. In approach 1, the measurement data together with position information is used directly for deriving an X-map. Approach 2 uses a calibrated propagation model for calculating an X-map.

It has been shown that approach 1 is very accurate, since the measurement data is used directly for deriving an X-map. However, information about the network performance can only be provided for those parts where measurement data is available, which becomes an important issue if the update interval for the measurement data is increased and, thus, the number of available measurements is decreased. Furthermore, this approach strongly depends on the positioning accuracy. In contrast, approach 2 provides a complete picture of the network performance for the whole considered area. Furthermore, this approach is relatively insensitive to the applied positioning method, possible UE inaccuracies, and the number of measurements taken. The reason for this is that the measurement data is collected over the whole considered area and over a specific time interval for determining the correction factors; thus, the probability is high that the errors compensate each other. However, this approach is less accurate than approach 1, although the accuracy is improved by the calibration compared to the uncalibrated prediction.


8.8 Automatic Generation of Initial Parameters for eNodeB Insertion

8.8.1 Description of the Use Case

The use case Automatic Generation of Initial Parameters for eNodeB insertion (AGP) deals with the process of integrating new network elements (NE) into an operational network. Several approaches are possible for the introduction of network elements, reflecting an evolution from necessary manual intervention towards fully automatic introduction:

Pre-configuration of network elements (e.g., manually on-site or in the factory): this approach describes today’s situation, which shall be avoided in the future due to its high costs.

Pre-configuration of network elements with some default values, specific values are determined after initial boot: this approach could, e.g., provide the network element with some standard or best-practice values before it is introduced in the network. The final network element (NE) and site-specific values then have to be assigned either manually or by automated mechanisms.

No pre-configuration of network elements, specific configuration is determined and installed after initial boot: the NE is delivered to the site completely without configuration, except for some standard software for initial NE start-up. The complete software and configuration is assigned during the self-configuration procedure, including radio settings, neighbourhood configuration, etc.

The approach to AGP developed in SOCRATES introduces a hybrid solution between these three approaches.

For the introduction of a new NE, several different types of parameters are to be assigned. For some of these parameters it is useful to provide default parameters, since the likelihood that they will have to be changed after the installation of the NE is completed is rather high, or they cannot be optimised automatically. For other parameters, an automated setting of initial parameters in a way that is adapted to the specific deployment situation does make perfect sense.

Soft integration is introduced as a concept to gradually integrate a new NE into a network, while using feedback from the network to determine the “final” configuration. The idea is to make intentional changes in the network configuration as little disruptive as possible. Moreover, the risk of deploying NEs with a configuration incompatible with its environment shall be reduced to a minimum. Although this concept has a wider range of possible applications, the focus is on the integration of new eNodeBs. All computational experiments and the specific proposals of how to use soft integration are made for the integrations of eNodeBs. The classical approach of hard integration, as it is called here, is to first deploy and fully configure an NE before integrating it into the network. This can be seen as an extreme case of soft integration.

The idea behind soft integration is, however, to seamlessly integrate a new eNodeB in a sequence of steps. In the course of these steps, the eNodeB plays an increasingly noticeable role within its vicinity in the network. This enables the surrounding eNodeBs to adapt by means of self-organisation. Among the benefits expected from soft integration are:

Reduced requirements for complex, synchronised changes to the network configuration

Reduced requirements for identifying good network configurations without network feedback

Reduced risk of disruptive configuration changes (as small changes are applied at a time)

Better integration of self-configuration and self-optimisation (complementing roles)

In the following, the soft integration concept for integrating new eNodeB into the network is described in more detail.

8.8.2 KPIs and Control Parameters

A dynamic radio configuration function potentially addresses a large number of parameters that could be automatically configured during eNodeB insertion. A subset of these is considered in the following for soft integration, namely the maximum transmit power as the control parameter driving the soft integration and the (electrical) antenna tilt as an optimisation parameter. This is justified by the following reasoning: the proposed soft integration scheme focuses on the transitional phase from the eNodeB’s first presence in terms of radio transmission until the hand-off to the regular self-optimisation regime. Almost all of the identified relevant radio parameters can already be determined prior to the start of the soft integration. This is done either manually or by means of some traditional off-line planning automation.

Specifically, this applies to

eNodeB identifier


Local cell ID

Antenna azimuth (and mechanical antenna tilt)

Physical cell ID

PRACH preamble

Two other parameters, namely,

Tracking area code

Cell reselection parameters

could be derived based on the cell’s type and the parameter settings in the surrounding. This is, however, not further investigated here. Finally, the

Neighbour relationship

should be set and maintained either by the functionality supporting the 3GPP Automatic Neighbour Relation use case or by the AGP functionality. This is subject to the integration with the Handover Optimisation use case. Thus, in the following the focus is exclusively on the parameters maximum transmit power and (electrical) antenna tilt.

8.8.3 Observability and Controllability

In preparation of devising a specific approach for the soft integration of eNodeBs, several computational studies were conducted. The purpose of these studies was to analyse which effects the addition of a new site may have on an “optimal network configuration”. In line with the general direction of the work, the focus is on antenna tilt and maximum transmit power. Together with the sectorisation of the site, the antenna types, and the azimuth of each sector, these are the principal parameters determining the site’s capability to provide coverage and capacity. For the latter parameters, however, it is assumed that they are determined in preceding planning steps, as they have an impact on the construction work at the site.

With respect to the two parameters analysed, namely, (electrical) tilt and transmit power, computational experiments have been conducted in order to address the following questions:

1. How well suited is transmit power as optimisation parameter?

2. How much does the “optimal network configuration” change in the course of a new site gradually increasing its power?

3. How many other sectors should reasonably be adapted when a new site is introduced?

4. How dependent is the “optimal network configuration” on the specific traffic observed (sampling from the same distribution)?

5. How sensitive is the “optimal network configuration” with respect to the intensity of traffic (keeping its spatial distribution fixed)?

The conclusion drawn from these investigations is to fix a target maximum transmit power and to stepwise increase the maximum transmit power to this predetermined target value as the driver for soft integration. The tilt values at the new site are repeatedly adjusted to match the present power settings. The site, including its surroundings, is only optimised once the site has reached its predetermined power values. This optimisation is conducted in two rounds: first a closer surrounding, then a larger surrounding. Both optimisations shall be based on high traffic intensities in order to foster robustness of the result with respect to variations in the traffic intensity.

8.8.4 Algorithmic Approach

This section deals with a possible implementation of soft integration. The multi-stage approach described in the following strongly relies on the ability to perform off-line network evaluations prior to actually integrating a new NE. The different stages and their interplay are explained in the following in a general setting.

Prior to the eNodeB integration, parameter settings for the new eNodeB and its neighbour eNodeBs are pre-calculated utilising off-line planning methods. For the parameters in focus, the soft integration can most easily be achieved by starting with low (reference signal) power settings and high tilt angles for the eNodeB to be inserted, and with unmodified parameters for the neighbour eNodeBs. Under regular circumstances, these parameters shall be altered following a migration path without intervention by other SON functionalities. In principle, the migration path may depend on the specific off-line planning method, but in many cases gradual changes can be expected to be suitable. In case of incidents, the migration path may be interrupted or terminated by other SON functions.


A soft integration approach comprising four stages is proposed for the self-configuration of a new eNodeB with respect to the radio parameters maximum transmit power and (electrical) antenna tilt.

1. Initialisation: Observation Phase The site’s location, the surrounding network configuration, as well as the typical traffic distribution are known. If live measurements or X-maps can be obtained (cf. Section 8.7) these sources are polled for up-to-date coverage, interference, and traffic information. Based on the available data, an optimised configuration of the new site is computed with respect to power and tilt settings.

2. Activation: Integration Phase The new site is configured and activated with low power and high tilt settings. In a transitional phase, the maximum transmit power is slowly increased so as to leave the UEs and the faster SON functions of the surrounding stations sufficient time to adapt to the new situation after every change. The tilts at the new site are optimised based on feedback on interference from the surrounding sites. The activation phase shall take place at times of low traffic in order to minimise service disruptions.

3. Arranging With Neighbours: First Tuning Phase Once the new eNodeB is activated and has locally adapted to its surrounding, the configurations of the new eNodeB and its neighbours (as active at the beginning of the phase) are re-optimised or optimised, respectively, to improve the local integration of the new eNodeB. In order to achieve proper decoupling of cells, this optimisation shall take place at high traffic time. The optimisation shall be performed in a coordinated manner (similar to what an off-line optimisation would do). One option is to base it on the data collected before or at the beginning of the first tuning phase; in this case, some off-line optimisation method can be used. Despite the lack of strong computational evidence, it seems reasonable to limit the step-size of the changes in the tuning phase.

4. Optimising Larger Surrounding: Second Tuning Phase If deemed appropriate, a second tuning phase can be conducted. In the second round a larger neighbourhood is optimised, e.g., the new eNodeB, its neighbours, and the neighbours of the neighbours. The same approach as for the first round can be used. Again, an optimisation based on high / peak traffic is suggested.

Both tuning phases rely on the capability to call for coordinated changes at other eNodeBs. This functionality may be provided by operating self-configuration functionalities at all NEs and setting up ad-hoc self-configuration networks across the relevant NEs to implement a coordinated action.
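As an illustration of such a migration path, the short sketch below computes a stepwise power ramp towards the pre-computed target, with the antenna tilt interpolated to match each intermediate power setting. The function name, the 3 dB step size and the linear tilt interpolation are assumptions made for this example; in the approach described above the tilt would instead be re-optimised from interference feedback after every power step.

# Minimal sketch (illustrative assumptions only): compute the power/tilt settings
# applied one per low-traffic period during the integration phase.
def migration_path(start_power_dbm, target_power_dbm, start_tilt_deg,
                   target_tilt_deg, power_step_db=3.0):
    steps = []
    power = start_power_dbm
    while power < target_power_dbm:
        power = min(power + power_step_db, target_power_dbm)
        # stand-in for tilt re-optimisation: interpolate the tilt so that it
        # matches the present power setting
        frac = (power - start_power_dbm) / (target_power_dbm - start_power_dbm)
        tilt = start_tilt_deg + frac * (target_tilt_deg - start_tilt_deg)
        steps.append((round(power, 1), round(tilt, 1)))
    return steps

# Example: ramp from 15 dBm / 12 degrees down-tilt to 43 dBm / 6 degrees in 3 dB steps.
print(migration_path(15.0, 43.0, 12.0, 6.0))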

8.8.5 Results

Within the SOCRATES project there is no option to simulate a full-blown soft integration in a realistic environment spanning a period of several days. Instead, a demonstrator for the AGP functionality has been devised, which focuses on the seamless integration of the new site but does not model all measurements, protocols, and interfaces relevant to the integration in full detail. The demonstration is based on the SOCRATES scenario for a large area around a European city [28]. Specifically, the impact of the proposed AGP functionality is demonstrated over the course of three days during which a typical fluctuation of the traffic intensity is simulated. All three days follow the same pattern, and there is no variation in the spatial distribution of the traffic. A sequence of screenshots from the demonstrator is used to explain how the proposed AGP approach may control the changes of power and tilt settings in the course of integrating a new eNodeB. The display of the demonstrator is arranged into four parts:

Status information is displayed in the top row.

The network display area at the left-hand side contains a graphical depiction of the network configuration. Three different colours are used for the sectors. A sector is grey if it cannot be changed (at the present time), it is blue if it has been changed in the course of the soft integration, and it is coloured white if it could have been changed, but has not. A sector is displayed enlarged for some time following a change. The resulting cell areas are colour-coded in blue according to the cell assignment probability. In case the offered traffic cannot be served by the network, the areas with quality problems are indicated using colours from yellow (light traffic loss) to red (heavy traffic loss).

Two performance metrics are plotted at the upper right-hand side. The blue curves give information for the whole area, whereas the yellow curves focus on the area around the site to be inserted (this is determined based on where either a coverage or an interference contribution from the inserted site is possible). The upper two curves give a network quality rating with respect to the left scale. The quality is measured in terms of missed traffic, that is, (fractional) demand which cannot be serviced. The lower two curves trace how much traffic can be supported per Watt of transmission power. The higher the value, the better the use that is made of the energy.


The evolution of the traffic intensity is displayed at the lower right-hand side. A vertical line represents the current point in time and moves from left to right through the graph. Three time spans are highlighted:

During the observation phase preceding the active phases of the soft integration, as much relevant data as possible is collected. In the setting of the demonstrator, off-line planning data plus the time-dependent traffic intensities are used. In a realistic implementation, further data sources would be measurement reports from network equipment (LTE and possibly co-deployed technologies) as well as user equipment. Furthermore, data from the X-Map estimation functionalities may be available.

Hardly any quality problems are noticeable at times of low traffic. With increasing traffic the quality problems become more apparent. The problematic areas observed at peak traffic are marked in Figure 60.

Figure 60: During the observation phase for peak traffic intensity: Quality problems in various areas are observed. The red polygons highlight the most problematic areas

Figure 61: At the end of the integration phase with medium traffic: Quality problems in the area around the new sites are reduced, but some remain. The red polygons highlight the most problematic areas at peak traffic (without the new site). The new site has now a large footprint (green).

Figure 61 illustrates the integration phase. The integration starts at a time of low traffic. Over a sequence of steps the footprint of the new site increases, corresponding to the increments of the maximum transmit power. Figure 61 depicts tilt settings and quality at the end of the integration phase. All three sectors now use their pre-computed maximum transmit power, and the new site has attained a large footprint. The antenna tilts are adapted to best fit into the current surrounding.

Figure 62 depicts the setting at the end of the first tuning phase. The tuning is conducted at the first time that high traffic is observed following the integration. The performance problems around the new site's location are clearly reduced in comparison to the situation without the new site. When comparing the traffic supported per Watt of transmission power, a slight improvement is noticeable.

Finally, Figure 63 presents the situation at the end of the second tuning phase. This tuning is conducted on the next day, when high traffic volumes are reached for the second time after the integration. The improvements in network quality are minor. There is a minimal increase in the power efficiency, reflecting a decrease in interference coupling among cells. The soft integration ends after the second tuning phase. The AGP functionality no longer performs changes to the network configuration, and regular self-healing and self-optimisation operations are resumed, with self-optimisation of the new eNodeB fully enabled.

Figure 62: At the beginning of the first tuning / optimisation phase near peak traffic: Quality problems in the area around the new sites are clearly reduced. Sectors in the vicinity of the new site are marked (white or blue) if they are subject to tuning

Figure 63: At the end of the second tuning / optimisation phase with high traffic: Quality problems in the area around the new sites are mostly removed. A new area with moderate problems can be traced north-east of the site at the edge of its service area. All sectors coloured blue have been changed in the course of the second tuning phase

8.8.6 Conclusions

Two of the key challenges in self-configuration are explicitly addressed. These are, first, the lack of reliable information on the impact of the new eNodeB on its environment and, second, the fact that self-healing / self-optimisation functions shall be able to continue their operations at surrounding eNodeBs without the need to tightly control their actions during the integration of the new eNodeB. Both contributions will facilitate the introduction of (more) self-configuration into LTE radio networks.

The novel concept called soft integration is proposed for the automatic generation of initial eNodeB parameters. Soft integration is characterised by four features: First, a pre-deployment optimisation of an initial configuration that hardly incurs changes to the previous network configuration. Second, a gradual increase of the site's impact on the network performance (by means of increased transmission power). Third, a steady adaptation of the site's configuration based on a pre-optimised trajectory as well as network feedback. And, fourth, an update of the configuration of the surrounding sites to best integrate the new site, again based on pre-deployment optimisation as well as network feedback. This concept is likely applicable in a much broader context than presented here.

Soft integration is based on the assumption that sufficient information is available to conduct what is typically done as pre-deployment planning or optimisation. At the same time, however, it is assumed that this information is imperfect and incomplete. Hence, the optimisation result would look different if all information were perfectly available. The way chosen to address this is to first do a (classical off-line) pre-deployment optimisation and then to collect more information from the network about potential misconfigurations or performance problems. Taking into account this additional information, the parameter configuration of the new eNodeB, and subsequently of the surrounding eNodeBs, is re-optimised.

One further aspect motivates soft integration, namely the adaptability of the surrounding network elements based on their self-organisation capabilities. Drastic local changes in the network topology might trigger strong reactions from self-healing as well as from self-optimisation functions. In the course of the soft integration, the surrounding eNodeBs perceive steady, yet moderate changes to which they may gradually adapt themselves. Moreover, first measurements involving the new eNodeB may become available and may already be used to update the target configuration.

The development of the demonstrator as well as supporting simulations have helped to make several choices in the design of the proposed function for the automatic generation of initial eNodeB parameters: the transmission power is used as the integration driver; the soft integration is split into the integration and two subsequent rounds of optimisation; the optimisation is conducted around peak traffic, because high traffic intensity is needed to observe the effect of interference; and settling for two rounds of optimisation with increasing scope during the deployment helps to limit the changes to the original network configuration.

The proposed approach to the Automatic Generation of initial eNodeB Parameters noticeably reduces the effort that an operator needs to invest in planning and configuring parameters for new equipment. Moreover, the parameter values at the end of the integration are already optimised for the specific local environment, and the immediate neighbours are, in fact, adapted as well. This will result in much better post-deployment performance than known from today's network operation and contribute to an overall improved quality level.


9 Detailed Description of Integrated SON Functions

It is likely that a mobile network operator will not introduce only a single SON function into the network: due to the advantages (e.g., regarding OPEX savings) of the individual SON functions, an operator may be tempted to introduce several of them. However, when introducing several different SON functions, there may be interrelations between these SON functions, especially regarding the concurrent modification of the same configuration parameters that these SON functions control. Therefore, several integrated use cases, i.e., pairs of interrelating use cases, have been analysed within the SOCRATES project to evaluate the significance of these interrelations. While in Chapter 8 the stand-alone SON use cases have been described, this chapter focuses on the integrated use cases. These descriptions are provided in the form of an extended abstract including the major findings and results. Detailed descriptions have been provided in SOCRATES deliverable D3.2.

The Chapter is structured as follows: Section 9.1 describes the Handover and Load Balancing use case, Section 9.2 Macro and Home eNodeB handover optimisation, and Section 9.3 Admission Control and Handover Optimisation.

9.1 Handover Parameter Optimisation and Load Balancing

9.1.1 Description of the Use Case

In handover parameter optimisation (HPO), the number of radio link failures, handover failures and ping-pong handovers is reduced by adjusting the individual handover parameters for every cell in the network. The handover optimisation algorithm is executed locally at cell level and only uses local handover performance indicators for the optimisation. The load balancing (LB) algorithm is activated if a certain cell is overloaded, i.e., a certain load level is reached in a cell. In order to avoid link level failures and unnecessary blocking, some of the cell edge users are handed over to neighbouring cells before the load level reaches the maximum. The algorithm ensures that the best-suited neighbouring cells are selected for load balancing, since the load of the neighbours is considered in the optimisation.

The HPO and LB algorithms influence different control parameters for the optimisation. The hysteresis (Hys) and time-to-trigger (TTT) are changed by the HPO algorithm, whereas the LB algorithm adapts the HO offset (HOoff) only. However, the effect of Hys and HOoff control parameter changes on the network performance is similar for these two optimisation algorithms, since both influence the HO process and the neighbour cell penetration.

Figure 64 shows the mode of operation of the HPO and LB algorithms and their control parameters. As shown in the left part of the figure, the hysteresis defines how far a user may penetrate the area of a neighbouring cell before a handover is initiated. In the intersection area of the two cells, users from both cells reside without handing over to the other cell. An increase of the Hys of the source eNodeB (SeNB), for example, leads to a larger intersection area for users that are connected to this cell. The condition for the target eNodeB (TeNB) users does not change in this case, apart from other effects that influence the intersection area, such as interference fluctuations. The TTT influences the penetration of a neighbouring cell, and thus the intersection area of these two cells, as well. But unlike the Hys, the TTT is a time-dependent value. This results in different handover locations for low- and high-speed users. The area in which handovers are executed is the shaded area shown in the left part of Figure 64. The left boundary of this handover area is defined by the hysteresis, the right boundary by the time-to-trigger value, as shown in the figure. The effect of HOoff changes is visualised in the right part of Figure 64. In contrast to the Hys values, the HOoff values are set for cell pairs and hence influence the effective hysteresis for some of the users, depending on which TeNB they will perform a handover to. This HOoff is changed by the LB algorithm to ensure that users that have been handed over to neighbouring cells in order to decrease the load in an overloaded cell are not handed back immediately.
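To illustrate how hysteresis, handover offset and time-to-trigger interact in the handover decision, the sketch below evaluates a simplified A3-like entry condition (target measurement exceeding the serving measurement by the hysteresis reduced by the cell-pair offset) and only reports a handover once the condition has held for the whole time-to-trigger. The function, the sampling period and the sign convention of the offset are illustrative assumptions and do not reproduce the exact event formulation used in the simulations.

# Simplified sketch of a handover trigger: the entry condition must hold
# continuously for the time-to-trigger before a handover is initiated.
def handover_triggered(rsrp_serving_dbm, rsrp_target_dbm, hys_db, ho_offset_db,
                       ttt_s, sample_period_s=0.04):
    """rsrp_* are equally sampled RSRP traces (dBm); returns True if a HO is triggered."""
    needed = int(round(ttt_s / sample_period_s))   # samples the condition must hold
    held = 0
    for s, t in zip(rsrp_serving_dbm, rsrp_target_dbm):
        # entry condition: target better than serving by (hysteresis - cell-pair offset)
        if t > s + hys_db - ho_offset_db:
            held += 1
            if held >= needed:
                return True
        else:
            held = 0
    return False

# Example: 3 dB hysteresis, no offset, 320 ms TTT, 40 ms sampling.
serving = [-90, -91, -92, -93, -94, -95, -96, -97, -98, -99, -100, -101]
target  = [-95, -93, -91, -89, -88, -87, -86, -85, -84, -83, -82, -81]
print(handover_triggered(serving, target, hys_db=3.0, ho_offset_db=0.0, ttt_s=0.32))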

The aim of this use case is to analyse the interaction of the two optimisation algorithms when running them simultaneously. The influence of the control parameters that have similar effects on the system performance and the possible counteraction of the two optimisation algorithms are the key challenges for a successful implementation in a real network. It has to be avoided that, for example, the HPO algorithm decreases the Hys value of a cell that has taken over load-balanced users in order to counteract the increasing radio link failure ratio; in this case the users would be handed back to the overloaded cell after some time and the overall performance of the system would decrease again. Based on a theoretical examination of possible interactions between the self-optimisation algorithms, the performance of a network is examined in system level simulations. Subsequently, different approaches to coordinate the HPO and LB SON algorithms are described and their influence on the overall system performance is investigated.


Figure 64: Handover Parameters Optimisation and Load Balancing schemes

In the analysis of the system performance with the optimisation algorithms enabled, it is shown that the optimisation algorithms indeed influence each other. The interaction of the optimisation algorithms results in a degradation of the performance of the individual optimisation algorithms and hence in a sub-optimal overall system performance. The proposed coordination techniques minimise this interaction and increase the system performance again.

9.1.2 KPIs and Control Parameters

A general overview of the observed KPIs, control parameters and input parameters for the LB and HPO algorithms is shown in Table 18. Both SON functions have the goal of improving the network performance and ultimately increasing the user satisfaction. While the LB algorithm is triggered on an event basis, i.e., by an overload situation, the HPO algorithm continuously tunes the handover parameters to improve the handover performance where possible. Hence, conflicts between the two algorithms will only arise if the network is at least partly overloaded. On the basis of the handover parameters HO offset, hysteresis and TTT, both algorithms influence the area in which handovers take place. While the LB algorithm is a decentralised solution that needs to exchange the X2 LOAD INFORMATION with neighbouring cells, the HPO algorithm is implemented locally and only uses the cell-individual HPIs, RSRP and SINR measurements provided by the UE, and UE and eNB history information for the handover performance improvement.



Table 18: Load Balancing and HPO SON function comparison

                     Load Balancing                        Handover Parameters Optimisation
Observed KPIs        Virtual load, number of unsatisfied   RLF ratio, HO ping-pong ratio,
                     users                                 HO failure ratio
Control parameters   HO offset                             Hysteresis, TTT
Input parameters     Measurements (RSRP, RSRQ, RSSI),      Measurements (RSRP, SINR), HPIs, UE and
                     X2 LOAD INFORMATION                   eNB history for ping-pong HO detection

Both optimisation algorithms act based on observations of the network, translated into KPIs at cell or network level, provided via O&M interfaces, or obtained from specific measurements available through standardised interfaces. The virtual load and the HPIs are the input values to the load balancing and the handover optimisation, respectively.

9.1.2.1 Potential Conflicts

As noted in Table 18, the HPO and LB algorithms tune control parameters that influence the handover procedure, and both algorithms observe metrics that are in turn influenced by changes of the handover parameters.

Figure 65: Interactions between HPO and Load Balancing algorithms.

Figure 65 shows the relations between the two SON algorithms, the control parameters and the KPI metrics. The control parameters of the LB and HPO algorithms are independent of each other, i.e., the two algorithms do not tune the same parameters. However, the observations of the system interact, since the control parameters influence all KPI metrics that are used as input for the optimisation algorithms. For example, a change in the HO offset has an impact on the observed virtual load (an input to the load balancing algorithm) and, at the same time, on all three HPIs, which are used by the HPO algorithm.

9.1.3 Scenarios

The investigations on the interaction between the HPO and the LB algorithms require a simulation scenario that provides load fluctuations to enable load balancing gains on the one hand, and realistic irregular cell shapes to enable handover performance gains using different handover parameters on the other hand. Thus, an irregular network layout called the springwald scenario [50] has been selected as the simulation network layout. The load variations are introduced to this scenario by a moving hotspot, i.e., a small area that contains a high number of users, which crosses the cell borders on a predefined path. In the beginning of the simulation, the hotspot appears slowly in one cell and starts moving across the simulated network after all hotspot users have appeared. The number of background users has been set to 14 users per cell and the number of users in the hotspot to 50 users for the present simulations.

The challenge in selecting a suitable simulation scenario is that the two self-optimisation algorithms need to be active at the same time and hence certain network conditions have to be observed. Besides the already mentioned cell load, which has to exceed the load level that initiates load balancing, the handover statistics have to be significant enough to deduce the current handover performance. To ensure this, the speed of the users in the scenario has been set to 30 km/h, which results in many handovers between adjacent cells. The irregularity of the network is challenging for the optimisation algorithms as well. LB works very well in interference-limited networks, where the neighbouring cells provide a strong signal and are able to take over users from the overloaded cells. The handover optimisation is more difficult in irregular networks, since the irregular shape results in both small and large overlap areas, i.e., the areas between neighbour cells where a connection to either of two or more neighbouring cells is possible. The optimal handover parameter settings depend on the size of these areas and thus the cells need different handover parameters.

9.1.4 Algorithmic Approach

Three different mechanisms for the coordination of the Handover Parameter Optimisation and Load Balancing actions have been developed. Two of them aim to achieve different goals imposed by the operator policy and prioritise one of the considered SON algorithms (COO1 prioritises HPO actions and COO2 prioritises the LB algorithm). The third mechanism (COO3) aims at detecting and resolving abnormal situations that can appear during operation when the stand-alone functions run in parallel. Figure 66 shows the coordinator scheme for the HPO and LB algorithms.

Figure 66: Coordinator scheme for HPO and Load Balancing algorithms.
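The listing below gives a schematic impression of how a priority-based coordinator of this kind could arbitrate between parameter change requests coming from the two SON functions. The request format, the 'one winner per cell per round' conflict rule and the mapping of COO1/COO2 to a static priority order are assumptions made purely for illustration; they do not reproduce the actual coordination logic evaluated in the simulations.

# Schematic sketch of a priority-based SON coordinator (illustrative only).
# Each SON function submits requests of the form (function, cell_id, parameter, new_value).
# When two requests touch the same cell in the same round, the function with the
# higher configured priority wins (a COO1-like policy puts HPO first, a COO2-like policy LB first).
def coordinate(requests, priority):
    """requests: list of (function, cell_id, parameter, value); priority: e.g. ['HPO', 'LB']."""
    rank = {name: i for i, name in enumerate(priority)}
    granted = {}
    for func, cell, param, value in requests:
        best = granted.get(cell)        # conflict granularity: one winner per cell per round
        if best is None or rank[func] < rank[best[0]]:
            granted[cell] = (func, cell, param, value)
    return list(granted.values())

# Example round: both functions want to act on cell 7; with HPO prioritised
# the hysteresis change is granted and the LB offset change is deferred.
reqs = [("LB", 7, "ho_offset_db", -2.0),
        ("HPO", 7, "hysteresis_db", 2.0),
        ("LB", 3, "ho_offset_db", -1.0)]
print(coordinate(reqs, priority=["HPO", "LB"]))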

9.1.5 Results

Results presented in Table 19 give an overview of the impact of the coordination mechanisms used during the simulations on the observed KPIs. Results obtained from the simulation with coordination mechanism COO1 show a reduction of the RLF occurrence to almost 8% on average, which is about half of the number of RLFs observed in the simulation without coordination. This gain comes at the cost of a higher average number of unsatisfied users (more than twice as many) and a higher average HO ping-pong ratio. As the coordination mechanism COO2 is oriented towards the LB effects, it results in the lowest average number of unsatisfied users of all simulations done with coordination mechanisms. This low number of unsatisfied users translates into a degradation of the RLF performance (which is still 2% lower than in the simulation without coordination) and the highest HO failure ratio. Coordination mechanism COO3 provides a visible gain in the resulting number of unsatisfied users but only a very slight reduction of the RLF occurrence; however, this approach is aimed at the reduction of abnormal situations rather than at favouring either of the two SON algorithms. The combination of the two coordination mechanisms COO1 and COO2 running in parallel allows the system to achieve a lower RLF ratio than with COO1 only, as well as a very low HO failure ratio and HO ping-pong ratio. The number of unsatisfied users is also lower than in the simulation with COO1 only, although only by one user. The lowest RLF ratio and HO failure ratio have been observed in the scenario in which all three coordination mechanisms were switched on. The higher HO ping-pong ratio and the lower number of unsatisfied users than in the simulation with COO1 and COO2 confirm the trend from the simulation in which only COO3 was used. This means that the mechanism that avoids abnormal situations (COO3) does not affect the other coordination actions.

Table 19: Average HPO and Load Balancing KPIs over the 20 minutes simulation length with coordination mechanisms

SON algorithm                    Unsatisfied users   HO ping-pong   HO failure   Radio Link Failure
                                 [#]                 ratio [%]      ratio [%]    ratio [%]
Reference                        12                  0.4            1.7          13
LB + HPO                         3.2                 4.8            1.7          15
LB + HPO + COO1                  6.3                 7.6            1.1          8.6
LB + HPO + COO2                  2.7                 6.2            2.0          14.1
LB + HPO + COO3                  3.7                 7.5            1.9          16
LB + HPO + COO1 + COO2           5.3                 5.7            1.1          7.3
LB + HPO + COO1 + COO2 + COO3    4.6                 6.2            1.1          6.0

Figure 67 is a graphical presentation of the handover performance HP (the formula is described in Section 8.3.3; the weights used are w1 = 1, w2 = 0.5, and w3 = 2, giving RLF the highest priority) for the results from Table 19. The presented results show that, of all developed coordination mechanisms, COO1 provides good coordination between both SON algorithms with respect to the total handover performance, but at the cost of twice the number of unsatisfied users compared to the case without coordination. The goal of the COO2 mechanism is to achieve a low number of unsatisfied users, but then the improvement in HP is rather small. Mechanism COO3 avoids abnormal situations in the network, and only the LB KPI is improved. The best results for HPO, combined with good results for LB, can be achieved when the coordinators COO1 and COO2 work together; the addition of COO3 introduces more HO ping-pongs but reduces the number of unsatisfied users and RLFs, as the ping-pong HPI has the lowest weight and the RLF HPI the highest weight in the total handover performance metric.


Figure 67: Average HPO and Load Balancing KPIs over the 20 minutes simulation length with different combination of coordination mechanisms

Both the LB and the HPO algorithm achieve local gains by controlling system parameters at cell or cell-pair level. In the simulated scenario with a moving hotspot, i.e., with a changing localised load or overload in the cells, the performance of the algorithms can also change over time. As an extended example for the HPIs, Figure 68 shows the timelines of the HO ping-pong ratio, the RLF ratio and the HO failure ratio for the whole simulation runtime of 20 minutes and 5 different optimisation cases. The reference curve is a system with all optimisations switched off. The HPO-only case gives a performance at or below the reference; however, for the HO ping-pong ratio even the optimisation cannot achieve minimum values. In the cases with the LB algorithm active, there is an impact of LB on the overall performance, as LB introduces additional handovers.

For the HO ping-pong ratio, the case with both HPO and LB activated can achieve a better situation than with LB only, but this depends on the local load situation (which changes with the movement of the high-concentration area). The full coordination case is also around or at the same level as the LB-only performance.

For the HO failure ratio, the case with both optimisations turned on achieves better performance in the later simulation time and a small gain in the earlier simulation time; in the middle of the simulation period there is a rather large statistical variance. The coordinator case achieves fair performance earlier in the simulation and is able to obtain better performance, even better than the reference, later in the simulation.

For the RLF ratio, one of the main KPIs for a network operator, there is equal performance to LB earlier in the simulation and better performance later on. The coordinator case exceeds all others, as it achieves better or comparable performance than the reference throughout the entire simulation time and even exceeds it later in the simulation.



Figure 68: HPO and LB KPIs with the coordination mechanisms COO1, COO2, COO3, over the 20 minutes simulation length

9.1.6 Conclusions

This section presents the results of the investigations on the interactions between the HPO and LB SON algorithms when running in parallel in the network. Additionally, different coordination mechanisms are analysed which prioritise one of the optimisation algorithms or avoid abnormal situations in the network that would otherwise degrade performance. The simulation results with both algorithms activated but without coordination show that the LB algorithm can significantly reduce the number of unsatisfied users in the network. Unfortunately, the LB algorithm decreases the HO performance, as users are forced to hand over to neighbouring cells before the network would normally initiate the handover. Thereby the RLF ratio increases due to the worse radio conditions of these users, as well as the ping-pong handover ratio, because users travelling towards the overloaded cell are handed over to it due to the radio conditions and handed back to the SeNB due to the overload situation. Although both optimisation algorithms influence the HO decision of the users and hence interact with each other, it is possible to coordinate the LB and HPO algorithms and to bring the performance of the network with both algorithms to a level at or even above that of the independently operating algorithms. These coordination mechanisms operate with fixed control parameters (e.g., triggering thresholds) to coordinate the optimisation actions of the algorithms. Further work should investigate the possibility of self-adaptive coordination mechanisms that may change their control parameters dynamically.

9.2 Macro and Home eNodeB Handover Parameter Optimisation

9.2.1 Description of the Use Case

In the handover optimisation use case described in Section 8.2, a self-optimisation algorithm for handover parameters was developed for macro networks, with only macro eNB to macro eNB handovers and no consideration of HeNBs deployed in the network. In parallel, the effects of handover parameter settings on the performance upon handover between a macro eNB and a HeNB, and also between HeNBs, were examined in the optimisation of HeNBs use case, see Section 8.5. Based on this, a number of recommendations for the self-optimisation of handover parameters with regard to HeNB handover performance were given. In a network with macro eNBs and HeNBs, however, all three different types of handover should be considered together for handover self-optimisation. Deploying isolated and independent handover self-optimisation in each layer may lead to conflicts or contradictions in the optimisation actions, or to sub-optimal performance due to misalignment of the self-optimisation algorithms.


The focus of the integrated macro and home eNodeB handover optimisation use case is to investigate the benefits of coordinated handover self-optimisation between the macro- and HeNB layer. A dynamic simulation tool is used for the evaluation.

9.2.2 KPIs and Control Parameters

The evaluation of the separate handover optimisation functionalities concludes that hysteresis and time-to-trigger (TTT) are suitable control parameters for handover optimisation in both the macro and the HeNB layer. However, in an LTE network today, both hysteresis and TTT are defined per serving cell and are independent of the handover target cell. In the optimisation of HeNBs use case, the cell individual offset (CIO) was also proposed as a control parameter for the HeNB handover optimisation. As the CIO can be set per cell pair, it is a suitable control parameter that can be used to differentiate between macro eNBs and HeNBs in an integrated handover optimisation algorithm.

As input to the handover optimisation algorithm, as well as for the evaluation of the handover performance, the KPIs Call Drop Ratio (CDR) and Ping-Pong Handover Ratio (PPHR) are considered. While the ideal would be a CDR and a PPHR of zero, the values of these two are normally a trade-off, i.e., a low CDR can be achieved at the cost of a high PPHR and vice versa. Definitions of these KPIs, also referred to as handover performance indicators (HPIs), are given in Sections 9.2.2.1 and 9.2.2.2, respectively. In the simulator, the HPIs are calculated for events over a time period equal to the SON algorithm time interval, i.e., the minimum time interval the SON algorithm waits between two subsequent SON actions. For statistics, the metrics are calculated once every 10 seconds; however, only the latest value per SON algorithm time interval is used as input to the algorithms.

9.2.2.1 Call Drop Ratio

The call drop ratio is calculated as

CDR = N_drop / (N_drop + N_HO,acc),

where N_drop is the number of calls dropped at handover, and N_HO,acc is the number of accepted outgoing handover calls.

For the integration of macro eNB and HeNB handover self-optimisation, a separation of the call drop ratio metric with respect to the target cell type could be beneficial. Separating the CDR per target cell type also requires a separation between too early and too late handover, as the drop occurs in the target cell in case of too early handover and in the source cell in case of too late handover. For simplification, the CDR per cell type is based only on call drops due to too late handover, i.e., when the call drop occurs in the source cell. CDRH and CDRM, for HeNB target cells and macro eNB target cells respectively, are calculated according to the following:

CDR_H = N_drop,late,H / (N_drop,late,H + N_HO,acc,H),

where N_drop,late,H is the number of calls dropped due to too late handover to a HeNB and N_HO,acc,H is the number of accepted outgoing handover calls to a HeNB.

CDR_M = N_drop,late,M / (N_drop,late,M + N_HO,acc,M),

where N_drop,late,M is the number of calls dropped due to too late handover to a macro eNB and N_HO,acc,M is the number of accepted outgoing handover calls to a macro eNB.

The CDR is calculated per (source) cell when used for input to the handover self-optimisation algorithms, and for the whole network when used for evaluation of the algorithms.

9.2.2.2 Ping-Pong Handover Ratio

A ping-pong is defined as the event when the UE is handed back to the source cell within the ping-pong handover time after being handed over to the target cell. It is possible that ping-pong happens repeatedly, i.e., a UE switches between two cells repeatedly. To take this into account, a single ping-pong event is defined as multiple ping-pong handovers happening consecutively. Each ping-pong handover must take place within the ping-pong handover time to be considered part of the same event. The ping-pong event will only be counted once, as a ping-pong event in the original source cell. The drawback of this definition of the ping-pong event is that it does not capture the additional load in the transport network or the higher control signalling over the air caused by the handovers, for which the actual number of ping-pongs would be of interest. The ping-pong handover ratio is calculated as

PPHR = N_pp / (N_pp + N_HO,no-pp),

where N_pp is the number of ping-pong events, and N_HO,no-pp is the number of handovers that have been performed without ping-pong occurring.

Similarly as for the CDR, the integration of macro eNB and HeNB handover self-optimisation could benefit from a separation of the ping-pong handover ratio metric with respect to the target cell type. This means that if the UE in a ping-pong handover event is handed back from a HeNB, the ping-pong handover event is included in the PPHR metric for HeNB target cells, PPHRH, and if the UE is handed back from a macro eNB, the ping-pong handover event is included in the PPHR metric for macro target cells, PPHRM:

PPHR_H = N_pp,H / (N_pp,H + N_HO,no-pp,H),

where N_pp,H is the number of ping-pong events with HeNB target cells, and N_HO,no-pp,H is the number of handovers performed to HeNB target cells without ping-pong.

PPHR_M = N_pp,M / (N_pp,M + N_HO,no-pp,M),

where N_pp,M is the number of ping-pong events with macro eNB target cells, and N_HO,no-pp,M is the number of handovers performed to macro eNB target cells without ping-pong.

The PPHR is calculated per (source) cell when used for input to the handover self-optimisation algorithms, and for the whole network when used for evaluation of the algorithms.
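A small sketch of how the per-target-type HPIs defined above could be derived from a log of handover events is given below. The record format and field names are assumptions made for this example; only the ratio definitions follow Sections 9.2.2.1 and 9.2.2.2.

# Sketch: compute CDR and PPHR separated by target cell type (HeNB / macro eNB)
# from a simple list of handover records. The field names are illustrative.
# Each record: {'target_type': 'HeNB'|'macro', 'dropped_too_late': bool, 'ping_pong': bool}
def per_type_hpis(ho_records):
    hpis = {}
    for cell_type in ("HeNB", "macro"):
        recs = [r for r in ho_records if r["target_type"] == cell_type]
        dropped = sum(r["dropped_too_late"] for r in recs)
        accepted = len(recs) - dropped
        pingpong = sum(r["ping_pong"] for r in recs if not r["dropped_too_late"])
        no_pp = accepted - pingpong
        hpis[cell_type] = {
            "CDR": dropped / (dropped + accepted) if recs else 0.0,
            "PPHR": pingpong / (pingpong + no_pp) if accepted else 0.0,
        }
    return hpis

# Example with a handful of handover attempts towards HeNB and macro targets.
log = [{"target_type": "HeNB", "dropped_too_late": True,  "ping_pong": False},
       {"target_type": "HeNB", "dropped_too_late": False, "ping_pong": False},
       {"target_type": "macro", "dropped_too_late": False, "ping_pong": True},
       {"target_type": "macro", "dropped_too_late": False, "ping_pong": False}]
print(per_type_hpis(log))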

9.2.3 Scenarios

A number of simulations are run in order to evaluate the gains of the integrated handover self-optimisation functionality. The first simulations examine the gain of the stand-alone handover optimisation functionality. This is especially important as the simulator used for the home eNodeB and macro eNodeB integrated handover optimisation simulations is different from the one used for developing the macro eNodeB handover optimisation. For these simulations, a hexagonal scenario with three sites, each with three sectors, and wrap-around is used. Log-normal fading with a standard deviation of 8 dB is considered and the maximum transmit power of the eNBs is 43 dBm. Five UEs per macro cell are randomly uniformly distributed and move in a random start direction at a speed of 50 km/h. A direction change probability is applied to change the direction of movement of the users.

In a next step, HeNBs are added to the scenario. In order to capture macro to macro eNodeB handovers, home to home eNodeB handovers and handovers between macro eNodeBs and home eNodeBs in both directions, a group of three by three houses with open-access home eNodeBs is placed in a macro cell. In order to increase the number of UEs moving in and out of HeNB coverage, and thus the statistical significance of the HeNB handover statistics, 20 UEs are added moving along a specified path between the HeNBs. These additional UEs are generated at locations randomly chosen along this path. The HeNBs have a maximum transmit power of 20 dBm and the HeNB layer has log-normal fading with a standard deviation of 4 dB.

The integrated handover optimisation is evaluated by comparing the gains in the HeNB scenario when running the simulation using a) default handover parameter settings in all eNBs, b) the macro handover optimisation turned on in all eNBs, and c) the integrated handover optimisation turned on in all eNBs. In the simulations, a TTT of 320 ms is used and, as a default when not tuned by an algorithm, the hysteresis is set to 3 dB.

9.2.4 Algorithmic Approach

A simplified version of the macro handover optimisation algorithm developed within the handover optimisation use case in Section 8.2 is considered. The macro algorithm uses two HPI thresholds, the call drop ratio threshold, CDRThres, and the ping-pong handover ratio threshold, PPHRThres.


If (PPHR < PPHRThres and CDR > CDRThres): decrease hysteresis by 1 dB
If (PPHR > PPHRThres and CDR < CDRThres): increase hysteresis by 1 dB
If (PPHR < PPHRThres and CDR < CDRThres): decrease HPI thresholds by 0.02
If (PPHR > PPHRThres and CDR > CDRThres): increase HPI thresholds by 0.02

Note that only the hysteresis is used as a control parameter. Also adjusting the TTT would increase the possibilities for the handover optimisation to improve the handover performance, but this was not included in the algorithm due to simulator implementation time constraints. However, as the values of the HPIs depend on the TTT, the TTT should be set to a fixed value when running the algorithm.
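A direct transcription of these rules into executable form could look as follows; the starting values, thresholds and the hysteresis bounds are assumptions made for the example and are not simulation settings.

# Sketch: one iteration of the simplified macro handover optimisation rules.
# Step sizes (1 dB, 0.02) follow the rules above; the value ranges are assumed.
def macro_hpo_step(pphr, cdr, state):
    """state holds 'hysteresis_db', 'pphr_thres' and 'cdr_thres'; returns the updated state."""
    s = dict(state)
    if pphr < s["pphr_thres"] and cdr > s["cdr_thres"]:
        s["hysteresis_db"] -= 1.0            # many drops, few ping-pongs: trigger HO earlier
    elif pphr > s["pphr_thres"] and cdr < s["cdr_thres"]:
        s["hysteresis_db"] += 1.0            # many ping-pongs, few drops: trigger HO later
    elif pphr < s["pphr_thres"] and cdr < s["cdr_thres"]:
        s["pphr_thres"] -= 0.02              # both fine: tighten the targets
        s["cdr_thres"] -= 0.02
    else:                                    # both above threshold: relax the targets
        s["pphr_thres"] += 0.02
        s["cdr_thres"] += 0.02
    s["hysteresis_db"] = min(max(s["hysteresis_db"], 0.0), 10.0)   # assumed bounds
    return s

# Example: high CDR and low PPHR lead to a 1 dB hysteresis decrease.
state = {"hysteresis_db": 3.0, "pphr_thres": 0.05, "cdr_thres": 0.10}
print(macro_hpo_step(pphr=0.02, cdr=0.25, state=state))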

For a deployment containing both macro eNBs and HeNBs, the optimal operating point may be vastly different depending on the type(s) of cells neighbouring each other. For example, the required hysteresis and CIO settings to achieve optimal PPHR and CDR may be completely different for the case of a macro eNB to HeNB handover compared to the case of a macro eNB to macro eNB handover. The reason for this is that the propagation conditions might vary significantly based on the given handover scenario. As a result, ‘system-wide’ handover settings may not be sufficient to achieve optimal performance throughout the network. The integrated handover optimisation algorithm, which aims to optimise handover in a network containing both macro eNBs and home eNBs, should allow the freedom to adjust parameter settings based on the type of handover that is occurring; hence the HPIs for the macro eNBs and home eNBs are separated as described in Section 9.2.2, and corresponding thresholds are used.

The integrated algorithm is shown in Table 20. In essence, the CDRM, PPHRM and their respective thresholds are considered separately, and the CDRH and PPHRH and their respective thresholds are considered separately. The same rules as in the macro handover algorithm are followed, but instead of changing the hysteresis, the CIO is changed for macro or HeNB target cells. However, in the case when both macro and HeNB CIO are to be changed in the same direction, the hysteresis is changed instead.


Table 20: Integrated algorithm

PPHRM           PPHRH           CDRM           CDRH           Action 1        Action 2
< PPHRM Thres   < PPHRH Thres   < CDRM Thres   < CDRH Thres   HPI Thres ↓
                                               > CDRH Thres   HPIM Thres ↓    CIOH ↓
                                > CDRM Thres   < CDRH Thres   CIOM ↓          HPIH Thres ↓
                                               > CDRH Thres   Hyst ↓
                > PPHRH Thres   < CDRM Thres   < CDRH Thres   HPIM Thres ↓    CIOH ↑
                                               > CDRH Thres   HPIM Thres ↓    HPIH Thres ↑
                                > CDRM Thres   < CDRH Thres   CIOM ↓          CIOH ↑
                                               > CDRH Thres   CIOM ↓          HPIH Thres ↑
> PPHRM Thres   < PPHRH Thres   < CDRM Thres   < CDRH Thres   CIOM ↑          HPIH Thres ↓
                                               > CDRH Thres   CIOM ↑          CIOH ↓
                                > CDRM Thres   < CDRH Thres   HPIM Thres ↑    HPIH Thres ↓
                                               > CDRH Thres   HPIM Thres ↑    CIOH ↓
                > PPHRH Thres   < CDRM Thres   < CDRH Thres   Hyst ↑
                                               > CDRH Thres   CIOM ↑          HPIH Thres ↑
                                > CDRM Thres   < CDRH Thres   HPIM Thres ↑    CIOH ↑
                                               > CDRH Thres   HPI Thres ↑
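One compact way to implement such a decision table is a lookup keyed on which of the four HPIs exceed their thresholds, as sketched below. The action strings and helper function are illustrative shorthand; the sixteen entries mirror the rows of Table 20.

# Sketch: the integrated algorithm of Table 20 as a lookup table.
# Key: (PPHR_M high, PPHR_H high, CDR_M high, CDR_H high); values: list of actions.
INTEGRATED_ACTIONS = {
    (False, False, False, False): ["HPI thresholds down"],
    (False, False, False, True):  ["HPI_M thresholds down", "CIO_H down"],
    (False, False, True,  False): ["CIO_M down", "HPI_H thresholds down"],
    (False, False, True,  True):  ["Hysteresis down"],
    (False, True,  False, False): ["HPI_M thresholds down", "CIO_H up"],
    (False, True,  False, True):  ["HPI_M thresholds down", "HPI_H thresholds up"],
    (False, True,  True,  False): ["CIO_M down", "CIO_H up"],
    (False, True,  True,  True):  ["CIO_M down", "HPI_H thresholds up"],
    (True,  False, False, False): ["CIO_M up", "HPI_H thresholds down"],
    (True,  False, False, True):  ["CIO_M up", "CIO_H down"],
    (True,  False, True,  False): ["HPI_M thresholds up", "HPI_H thresholds down"],
    (True,  False, True,  True):  ["HPI_M thresholds up", "CIO_H down"],
    (True,  True,  False, False): ["Hysteresis up"],
    (True,  True,  False, True):  ["CIO_M up", "HPI_H thresholds up"],
    (True,  True,  True,  False): ["HPI_M thresholds up", "CIO_H up"],
    (True,  True,  True,  True):  ["HPI thresholds up"],
}

def integrated_step(pphr_m, pphr_h, cdr_m, cdr_h, thres):
    """thres: dict with 'pphr_m', 'pphr_h', 'cdr_m', 'cdr_h' threshold values."""
    key = (pphr_m > thres["pphr_m"], pphr_h > thres["pphr_h"],
           cdr_m > thres["cdr_m"], cdr_h > thres["cdr_h"])
    return INTEGRATED_ACTIONS[key]

# Example: only the HeNB call drop ratio exceeds its threshold.
print(integrated_step(0.02, 0.01, 0.05, 0.30,
                      {"pphr_m": 0.05, "pphr_h": 0.05, "cdr_m": 0.10, "cdr_h": 0.10}))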

9.2.5 Results

The first simulations were performed using the macro-eNBs-only scenario described in Section 9.2.3, with and without the macro handover optimisation algorithm turned on. Results show that with the macro handover optimisation the CDR goes from a high to a lower value, while the PPHR goes from a low to a higher value, see Figure 69. This is expected behaviour and is due to the trade-off between call drops and ping-pongs: the CDR is decreased at the cost of a higher PPHR.

Figure 69: Handover performance for the macro only scenario with no optimisation (left figure), and with the macro handover optimisation algorithm running (right figure)

In a second step, HeNBs are added to the network, as described in Section 9.2.3, and the handover parameters are set to default values. Figure 70 shows the CDR to the left and the PPHR to the right, both measured per handover type. In the following, the CDR is considered for too late handover only, see Section 9.2.2.1. Even though TTT and hysteresis are fixed in this case, variations can be seen in the plots, probably due to statistical fluctuations. It can be seen that the CDR is very high for the HeNB-HeNB case. This is likely due to the high network load, combined with a high HeNB density, resulting in a high level of interference, particularly on the cell edge, where the handovers take place. Although the CDR for the HeNB-HeNB case would be lower for a lower network load, it is expected that it will still be relatively high. For the PPHR, the HeNB-HeNB case has a peak, but it is thought that it is a statistical outlier. Taking that into account, it appears that ping-pong is only an issue for macro-macro handover, and that for the other cases ping-pong is very rare.

Figure 70: CDR and PPHR for the macro and HeNB scenario using default handover parameters

The same scenario is then run with the macro handover optimisation algorithm turned on. Figure 71 shows the results. For the macro-macro case, the CDR clearly decreases from around 40% to around 20%. The CDR also appears to decrease for the HeNB-HeNB case, although this is less pronounced due to the statistical fluctuations for this case. For the macro-HeNB and HeNB-macro cases, there is no improvement in the CDR performance. This is most likely due to the fact that not many handovers of these types take place, which prevents the algorithm from performing an optimisation. Also here, peaks in the HeNB-HeNB PPHR are seen, which are thought to be statistical outliers. Further, ping-pong seems to be an issue only for macro-macro handover. As expected, the macro-macro PPHR increases over time, which is a consequence of the algorithm decreasing the hysteresis in order to decrease the CDR.

Figure 71: CDR and PPHR with the macro handover optimisation algorithm turned on

To evaluate the integrated handover optimisation algorithm, a simulation is run using the same scenario, but with the integrated algorithm turned on. The results are shown in Figure 72. It can be seen that the integrated algorithm decreases the macro-macro and the HeNB-HeNB CDR, while the macro-macro PPHR increases. However, the results are very similar to the results obtained when running the macro handover optimisation algorithm.


Figure 72: CDR and PPHR with the integrated handover optimisation algorithm turned on

Handover between HeNBs is particularly challenging for the considered scenario, with a very high CDR. Self-optimisation does improve performance, but the CDR is still very high. To a large extent, this is due to the scenario. However, it does indicate that alternative approaches may be required to optimise handover between HeNBs. For the macro-macro case, the CDR is improved at the expense of the PPHR. To improve the CDR, the integrated handover optimisation algorithm either reduces the hysteresis or sets the CIO to a stronger negative value. As a result, the PPHR increases. For the other handover cases, the PPHR is negligible. This is in line with expectations, as the propagation conditions on the boundary of two macro cells are more conducive to ping-pong handovers. The results do not show any performance improvement of the integrated handover optimisation algorithm compared to the macro handover optimisation algorithm for the given scenario.

9.2.6 Conclusions

In the studied scenario, no major difference in performance could be seen between running the macro handover parameter optimisation algorithm and the integrated handover parameter optimisation algorithm. As no performance gain can be seen, running the integrated handover parameter optimisation algorithm, which separates handover parameter settings between macro and home eNodeB target cells, cannot be justified for this scenario. However, an integrated macro and home eNodeB handover parameter optimisation could be beneficial in other scenarios. In further work, additional scenarios should be considered, e.g., different macro cell sizes, different HeNB distributions, lower UE speeds, etc., in order to see whether there are conditions under which the integrated self-optimisation algorithm for the handover parameters is beneficial. Also, the adaptation of TTT values, in addition to the tuning of hysteresis and CIO, should be considered. Particularly for HeNB-HeNB handover, low TTT values may be beneficial. Although the results presented here show the potential of the SON algorithms, further quantification of the results would benefit from using longer simulation times, or from running the same simulation multiple times with different random seeds. This would also improve the statistical significance of the results.

9.3 Admission Control and Handover Parameter Optimisation

9.3.1 Description of the Use Case

In both the admission control optimisation use case and the handover parameter optimisation use case, optimisation algorithms have been developed (see Section 8.1 and Section 8.2). When deployed, these optimisation algorithms will have to operate in parallel in the network. The admission control and handover optimisation algorithms do not have any common parameters. Furthermore, the admission control optimisation algorithm tries to facilitate handover calls. However, an interaction with a negative impact might occur when handover calls (which are subject to admission control) towards a certain cell are massively rejected by the admission control algorithm of that cell. This can occur when there is a (temporary) overload in the handover target cell. When a cell rejects a call that comes from a neighbouring cell, the serving cell will have to find a new cell for the call to hand over to. If the call fails to find a new target in time, it will be dropped. If this happens with multiple calls, the handover optimisation algorithm of the serving cell will react even though it cannot resolve the issue, because the higher number of call drops is not caused by suboptimal handover parameters.

The primary objective of the admission control optimisation and handover optimisation use case is the study of the interaction and need for integration between the standalone admission control and handover optimisation algorithms.


9.3.2 KPIs and Control Parameters

The KPIs that are considered in the admission control and handover optimisation use case are the same as in the standalone admission control and handover optimisation use cases. For the admission control optimisation algorithm these are the rejection ratio of the fresh calls, the rejection ratio of the handover calls, the low call throughput ratio (the fraction of the calls with a low call throughput) and the traffic loss ratio. For the handover optimisation algorithm these are the call drop ratio, the handover failure ratio and the ping-pong handover ratio.

The control parameters that are considered in this use case are also the same as those of the standalone use cases, i.e., the ThHO parameter of the admission control algorithm that is auto-tuned by the admission control optimisation algorithm, and the hysteresis and time-to-trigger parameters of the handover procedure that are auto-tuned by the handover optimisation algorithm.

For more details about these KPIs and control parameters, see Section 8.1.2 and Section 8.3.2.

9.3.3 Scenarios

In order to study the interaction between the admission control optimisation and handover optimisation algorithms, the impact of a number of system changes that affect the input metrics of at least one of the algorithms is investigated. These changes include changes in user velocity and load.

User mobility has an impact on both the admission control and the handover optimisation algorithms. For example, when the UE velocity increases, the handover parameters should be modified adequately (shorter time-to-trigger and lower hysteresis) to ensure that the handovers happen fast enough; otherwise the signal level may degrade too much and calls may be dropped. The UE velocity may also affect the admission control parameter optimisation algorithm indirectly: when more calls are dropped, the load will be lower and there will be fewer handover calls and relatively more fresh calls.

The load of the target cell has an impact on whether a handover call can be accepted or will be rejected. In this sense, the traffic load has an impact on the admission control optimisation algorithm. For example, for the same number of handover calls, the higher the load at the target cell, the lower the ThHO of this cell needs to be to avoid handover calls being rejected. From the perspective of the source cell, when the admission control algorithms of more target cells reject the handover call, it will happen more often that the signal degrades without the call being able to perform a handover. In such a case the call drop ratio of the source cell may increase.

In order to trigger the expected interaction, the load in a cell should be high and, at the same time, a considerable number of handovers should take place. In order to be able to observe such situations, three simulation scenarios were defined. In the first scenario, the initial UE speed is set to 3 km/h and the load is such that the rejection ratio of the fresh calls is 2% and the call drop ratio is 0.5%. After 6300 seconds (1h45), the speed is increased gradually, over the course of 30 minutes, from 3 km/h to 50 km/h. In the second scenario the initial UE speed and load are the same as in the first scenario, but now it is the load that is increased, such that the rejection ratio of the fresh calls rises from 2% to 20%. The third scenario combines the changes of the first two: the speed increases from 3 km/h to 50 km/h and the load changes such that the rejection ratio of the fresh calls increases from 2% to 20%. A summary of the scenarios is given in Table 21. Although interaction is not expected in all scenarios, scenarios without interaction are still interesting, since they serve as references for better understanding the interaction in the scenarios where it does occur.

Table 21: A summary of the considered scenarios

Scenario name             Before change               After change
Speed increase            Low UE speed: 3 km/h        High UE speed: 50 km/h
                          Low load: 2% rej. fresh     Low load: 2% rej. fresh
Load increase             Low UE speed: 3 km/h        Low UE speed: 3 km/h
                          Low load: 2% rej. fresh     High load: 20% rej. fresh
Speed and load increase   Low UE speed: 3 km/h        High UE speed: 50 km/h
                          Low load: 2% rej. fresh     High load: 20% rej. fresh
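For illustration, the three scenarios of Table 21 can be captured in a small configuration structure such as the following (the numerical values are taken from the text above; the field names are assumptions of this sketch):

    CHANGE_START_S = 6300      # change starts after 1 h 45 min
    RAMP_DURATION_S = 1800     # speed is ramped over 30 minutes

    SCENARIOS = {
        "speed_increase": {
            "ue_speed_kmh": (3, 50),            # (before, after)
            "fresh_rejection_ratio": (0.02, 0.02),
        },
        "load_increase": {
            "ue_speed_kmh": (3, 3),
            "fresh_rejection_ratio": (0.02, 0.20),
        },
        "speed_and_load_increase": {
            "ue_speed_kmh": (3, 50),
            "fresh_rejection_ratio": (0.02, 0.20),
        },
    }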


9.3.4 Algorithmic Approach

The scenarios defined in the previous section have been simulated (1) with neither of the optimisation algorithms enabled, (2) with only the admission control optimisation algorithm enabled, (3) with only the handover optimisation algorithm enabled and (4) with both optimisation algorithms enabled. The handover optimisation algorithm that was used is the HPI-Sum based handover optimisation algorithm (see Section 8.2).
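The four simulated configurations per scenario can be enumerated as follows (a simple illustration; the naming is this sketch's, not the simulator's):

    RUNS = [
        {"ac_optimisation": False, "ho_optimisation": False},  # (1) reference run
        {"ac_optimisation": True,  "ho_optimisation": False},  # (2) admission control optimisation only
        {"ac_optimisation": False, "ho_optimisation": True},   # (3) handover optimisation only
        {"ac_optimisation": True,  "ho_optimisation": True},   # (4) both enabled
    ]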

The different runs have been evaluated by comparing the evolution of the KPIs over time. The case in which neither of the optimisation algorithms is enabled served as the reference against which the three other cases (a single optimisation algorithm enabled, or both) were compared. First, it was checked whether the cases in which the optimisation algorithms are enabled achieve their goals compared with the reference case: for the admission control optimisation algorithm this means that the rejection ratio of the handover calls is lower, at the possible expense of the fresh calls; for the handover optimisation algorithm it means that the weighted sum of the HPIs is smaller than in the reference case. For the case in which both optimisation algorithms are enabled, it was additionally checked whether both algorithms still achieve their goals, and whether there is a difference between that case and the cases in which only one of the optimisation algorithms is enabled. Differences between these cases indicate interaction between the two optimisation algorithms.
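A condensed, illustrative version of these evaluation checks is given below. The weighted HPI sum follows the HPI-Sum idea of Section 8.2, but the weights shown are placeholders rather than the values used in the project, and the KPI names follow the earlier sketch.

    HPI_WEIGHTS = {"call_drop_ratio": 1.0,
                   "handover_failure_ratio": 0.5,
                   "ping_pong_handover_ratio": 0.25}   # placeholder weights

    def weighted_hpi_sum(kpis):
        return sum(weight * kpis[name] for name, weight in HPI_WEIGHTS.items())

    def goals_met(run_kpis, reference_kpis):
        return {
            # Admission control goal: lower handover-call rejection ratio than the reference.
            "admission_control": run_kpis["ho_call_rejection_ratio"]
                                 <= reference_kpis["ho_call_rejection_ratio"],
            # Handover optimisation goal: smaller weighted HPI sum than the reference.
            "handover": weighted_hpi_sum(run_kpis) <= weighted_hpi_sum(reference_kpis),
        }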

9.3.5 Results

In the scenario where the speed increases, a higher speed results in a higher call drop ratio (Figure 73) and a higher ping-pong handover ratio (Figure 74). The call drop ratio rises because, when users move faster, the time window during which a user can be served by both the serving and the target eNodeB is shorter; users therefore have less time to perform a handover, which leads to more call drops than when users travel at a lower pace. The higher number of ping-pong handovers can be explained by the fact that, due to the higher speed, users move faster through the patches where, due to shadow fading, the RSRP of the source cell is higher than that of the target cell and vice versa. This increases the probability of a handover back to the original serving eNodeB within the ping-pong handover critical time. When the handover SON algorithm is enabled, the call drop ratio is lower than when it is not. This lower call drop ratio comes at the cost of a higher ping-pong handover ratio; this is a trade-off made by the handover SON algorithm.

Figure 73: The call drop ratio, plotted over time, in the scenario where the speed increases

Figure 74: The ping-pong handover ratio, plotted over time, in the scenario where the speed increases

Figure 75 shows the rejection ratio of the fresh calls. This metric is also influenced by the speed increase: because more calls are dropped, the load becomes lower, so the admission control algorithm can accept more calls.


Figure 75: The rejection ratio of the fresh calls, plotted over time, in the scenario where the speed increases

In the scenario where the load increases, Figure 76 shows that the fraction of handover calls that are rejected is lower when the admission control optimisation algorithm is enabled than when it is not. The trade-off can be seen in Figure 77: the rejection ratio of the fresh calls is higher when the admission control optimisation algorithm is enabled than when it is not.

Figure 76: The rejection ratio of the handover calls, plotted over time, in the scenario where the load increases

Figure 77: The rejection ratio of the fresh calls, plotted over time, in the scenario where the load increases

The change in load does not have much influence on the call drop ratio or the ping-pong handover ratio, as can be seen in Figure 78 and Figure 79: both ratios are approximately the same before and after the change. This means that neither the change in load nor the measures taken by the admission control optimisation algorithm in reaction to it influence the handover optimisation algorithm. Both the call drop ratio and the ping-pong handover ratio appear to be somewhat higher in the cases where the handover optimisation algorithm is enabled than in the cases where it is not. This is caused by the handover optimisation algorithm adapting the handover parameters in an attempt to improve the handover performance; since there is hardly any margin for improvement, the attempt fails and the handover performance worsens instead.


Figure 78: The call drop ratio, plotted over time, in the scenario where the load increases

Figure 79: The ping-pong handover ratio, plotted over time, in the scenario where the load increases

There are, however, occasions where the load change triggers a reaction of the handover optimisation algorithm. An example can be seen in Figure 80: after the load change, the ping-pong handover ratio starts to rise in the cases where the handover optimisation algorithm is enabled. This increase is again caused by the handover optimisation algorithm trying to optimise the handover performance but failing to do so. These occasions do not occur very often, but they do not appear to be a coincidence either; the change in load sometimes seems to provoke a reaction of the handover optimisation algorithm. A possible explanation can be seen in Figure 81 (black circles): when the load becomes higher, overload situations may occur, causing the cell to temporarily reject more handover calls, which, as expected, leads to a higher call drop ratio in the neighbouring cells. This is more likely to happen during the load change than at another point in time, because during the change the admission control optimisation algorithm has not yet had the chance to adapt the ThHO parameter appropriately, so the probability of an overload is higher. The effect is not observed very often and, when it is observed, it is not significant enough to make the handover optimisation algorithm react in a way that requires coordination, because the admission control algorithm resolves the overload situation quickly enough.

Figure 80: The ping-pong handover ratio, plotted over time, in the scenario where the load increases. The ping-pong handover ratio starts to rise after the change on some occasions

Figure 81: The evolution of the rejection ratio of the fresh and handover calls in a cell and the call drop ratio in a neighbouring cell, in the scenario where the load increases, with both optimisation algorithms enabled

In the scenario where both the speed and the load increase, the positive influence of the admission control optimisation algorithm on the handover performance can be seen. As shown in Figure 82, the call drop ratio is noticeably lower when the admission control optimisation algorithm is enabled than in the corresponding cases without admission control optimisation. The ping-pong handover ratio, however, is the same (see Figure 83). The reason is that, because the admission control optimisation algorithm ensures that handover calls are likely to be accepted, fewer handover calls are dropped due to not being able to hand over to a target cell in time, which is effectively what the admission control (optimisation) algorithm is intended for.


Figure 82: The call drop ratio, plotted over time, in case both the speed and the load increase

Figure 83: The ping-pong handover ratio, plotted over time, in case both the speed and the load increase

The admission control-related metrics behave the same as in the scenario where only the load changes, as can be seen in Figure 84 and Figure 85: after the change, the rejection ratio of the handover calls remains low in the cases where the admission control optimisation algorithm is enabled, compared to the cases in which it is not. A small increase during the change can, however, be observed, which triggers the reaction of the admission control optimisation algorithm; as a consequence, the rejection ratio of the fresh calls is higher.

Figure 84: The rejection ratio of the handover calls, plotted over time, in the scenario where both the speed and the load increase

Figure 85: The rejection ratio of the fresh calls, plotted over time, in the scenario where both the speed and the load increase

9.3.6 Conclusions

The presented results show that the admission control SON algorithm has a clearly beneficial influence on the handover performance. The results do not, however, show significant negative interaction between the admission control optimisation and handover optimisation algorithms that leads to poor performance. There is thus no significant interaction between the two optimisation algorithms that requires the development of an integrated optimisation algorithm coordinating the actions of both.

