

IST-2000-25394 Project Moby Dick

D0504

Final Test and Evaluation Report

Contractual Date of Delivery to the CEC: 31 December 2003
Actual Date of Delivery to the CEC: 16 January 2004
Author(s): partners in WP5 (cf. page 3)
Participant(s): partners in WP5
Workpackage: WP5
Security: Public
Nature: Report
Version: 1.0
Total number of pages: 86

Abstract: This document describes the final tests performed on the definitive Moby Dick implementation. User tests with real applications evaluate the degree to which the project meets its goals, and expert tests evaluate the performance of Moby Dick.

Keyword list: final, field trial, validation, testing, test-bed.


Executive Summary

This document describes the final tests performed on the definitive Moby Dick implementation. User tests with real applications evaluate the degree to which the project meets its goals, and expert tests evaluate the performance of Moby Dick.

First the test beds are presented, then the expert tests aimed at evaluating Moby Dick performance are described, and the results are gathered and analysed, showing clearly how Moby Dick works and giving advice for 4G networks. The user tests are also presented. These tests put users in front of real applications running over the Moby Dick infrastructure, letting them experience all Moby Dick functionality and asking for their feedback, thus determining whether the Moby Dick objectives are fulfilled. As an annex, the different public Moby Dick demonstrations are described.


Authors

T-Systems:
Hans J. Einsiedler, Phone: +49 30 3497 3518, Fax: +49 30 3497 3519, E-mail: [email protected]

NEC:
M. Liebsch, Phone: +49 (0) 62-21-90511-44, Fax: +49 (0) 62-21-90511-45, E-mail: [email protected]
Ralf Schmitz, Phone: +49-6221-13 70 8-12, Fax: +49-6221-13 70 8-28, E-mail: [email protected]
Telemaco Melia, Phone: +49 (0) 62-21-90511-44, Fax: +49 (0) 62-21-90511-45, E-mail: [email protected]

UC3M:
Antonio Cuevas, Phone: +34 916248838, Fax: +34 916248749, E-mail: [email protected]
Carlos García, Phone: +34 916248802, Fax: +34 916248749, E-mail: [email protected]
Carlos J. Bernardos, Phone: +34 916248756, Fax: +34 916248749, E-mail: [email protected]
Pablo Serrano-Yañez, Phone: +34 916248756, Fax: +34 916248749, E-mail: [email protected]
Jose I. Moreno, Phone: +34 916249183, Fax: +34 916248749, E-mail: [email protected]

USTUTT:
Jürgen Jähnert, Phone: +49-711-685-4273, Fax: +49-711-678-8363, E-mail: [email protected]
Jie Zhou, Phone: +49.711.6855531, E-mail: [email protected]
Hyung-Woo Kim, Phone: +49.711.6854514, E-mail: [email protected]

GMD / FhG:
Davinder Pal Singh, Phone: +49-30-3463-7175, Fax: +49-30-3463-8175, E-mail: [email protected]
Cristian Constantin, Phone: +49 30 3463 7146, E-mail: [email protected]


PTIN:
Victor Marques, Phone: +351.234.403311, Fax: +351.234.420722, E-mail: [email protected]
Rui Aguiar, Phone: +35-1-234-381937, Fax: +351.234.381941, E-mail: [email protected]
Diogo Gomez, Phone: +35-1-234-381937, Fax: +351.234.381941, E-mail: [email protected]

MOTOROLA:
Christophe Beaujean, Phone: +33-1-69354812, Fax: +33-1-69352501, E-mail: [email protected]

EURECOM:
Michelle Wetterwald, Phone: +33-493-002631, Fax: +33-493-002627, E-mail: [email protected]

UKR:
Piotr Pacyna, Phone: +48-12-6174040, Fax: +48-12-6342372, E-mail: [email protected]
Janusz Gozdecki, Phone: +48-12-6173599, Fax: +48-12-6342372, E-mail: [email protected]

ETHZ:
Hasan, Phone: +41 1 632 7012, Fax: +41 1 632 1035, E-mail: [email protected]
Pascal Kurtansky, Phone: +41 1 632 7012, Fax: +41 1 632 1035, E-mail: [email protected]

I2R:
Parijat Mishra, E-mail: [email protected]


Table of Contents

AUTHORS..........................................................................................................3

ABBREVIATIONS ..............................................................................................9

1. INTRODUCTION .....................................................................................11

2. TESTBEDS USED FOR THE TESTS ........................................ 12

2.1 Test Bed in Madrid ................................................................................................................. 12

2.2 Test Bed in Stuttgart............................................................................................................... 13

2.3 Other test beds ........................................................ 14
2.3.1 TD-CDMA integration Testbed in Eurecom ...................... 14
2.3.2 AAA Testbed in Fokus ................................................ 16
2.3.3 Logging & auditing testbed in ETH ............................... 16
2.3.4 WCDMA+ DiffServ testbed in Motorola ......................... 17
2.3.5 Overall test bed in PTIN ............................................. 18

3. EXPERT TESTS DESCRIPTION AND METHODOLOGY ......................19

3.1 Introduction ............................................................................................................................ 19

3.2 Log Transfer Delay ..................................................... 19
3.2.1 Test Description ....................................................... 19
3.2.2 Measurement Parameters ........................................... 19
3.2.3 Measurement Process and Configuration ....................... 20

3.3 Auditing Speed ........................................................... 21
3.3.1 Test Description and Measurement Parameters ............... 22
3.3.2 Measurement Process and Configuration ....................... 22

3.4 Charging performance .................................................. 23
3.4.1 Test description ....................................................... 23
3.4.2 Measurement parameters ........................................... 23
3.4.3 Test realization, measurement process, interactions required 23

3.5 AAA Scalability tests ................................................... 23
3.5.1 Tests description and motivation ................................. 23
3.5.2 Variables to measure ................................................ 24
3.5.3 Measurement process, test realization .......................... 24

3.6 DSCP Marking Software (DMS) ..................................... 25
3.6.1 Filter loading .......................................................... 25
3.6.1.1 Test description .................................................... 25
3.6.1.2 Measurement parameter .......................................... 25
3.6.1.3 Test realization ..................................................... 25
3.6.2 DSCP dumping from AAA .......................................... 25
3.6.2.1 Test description .................................................... 25
3.6.2.2 Measurement parameter .......................................... 25
3.6.2.3 Test realization ..................................................... 26


3.6.3 DMS and IPsec ....................................................... 26
3.6.3.1 Test description .................................................... 26
3.6.3.2 Test realization ..................................................... 26

3.7 QoS entities communication delays ................................. 26
3.7.1 Test description and purpose ...................................... 26
3.7.2 Variables to measure / Measurement parameters ............. 26
3.7.3 Measurement process, test realization / Interactions required 27

3.8 QoS context installation time in the QoSM in the nAR ......... 27
3.8.1 Tests purpose and description ..................................... 27
3.8.2 Variables to measure ................................................ 27
3.8.3 Measurement process, test realization .......................... 27

3.9 FHO ......................................................................... 27
3.9.1 Test description ....................................................... 27
3.9.2 Measurement parameters ........................................... 28

3.10 Paging ..................................................................... 29
3.10.1 Test description ..................................................... 29
3.10.2 Measurement parameters .......................................... 29
3.10.3 Test realization ...................................................... 29

3.11 Inter-domain Handover ............................................... 30
3.11.1 Test description ..................................................... 30
3.11.2 Measurement parameters .......................................... 30
3.11.3 Measurement process ............................................... 30

4. EXPERT TESTS REALIZATION, RESULTS, ANALYSIS AND ASSESSMENT .................................................................................................30

4.1 Introduction ............................................................................................................................ 30

4.2 Log Transfer Delay ..................................................... 30
4.2.1 Message Transfer Time .............................................. 30
4.2.2 Logs Transfer Time ................................................... 33
4.2.3 Conclusions ............................................................. 38

4.3 Auditing Speed ........................................................... 38
4.3.1 Entity Availability .................................................... 38
4.3.2 User Registration ..................................................... 40
4.3.3 Service Authorization ................................................ 43
4.3.4 Conclusions ............................................................. 44

4.4 AAA Scalability tests ................................................... 44
4.4.1 AAA Scalability tests at FhG FOKUS ............................ 44
4.4.1.1 Scalability tests done in Madrid and Stuttgart ............... 45
4.4.2 Analysis of results .................................................... 46

4.5 Charging performance .................................................. 46
4.5.1 Results ................................................................... 46
4.5.2 Analysis of results .................................................... 47

4.6 DSCP Marking Software (DMS) ..................................... 47
4.6.1 Filter loading ........................................................... 47
4.6.2 DSCP dumping from AAA .......................................... 47
4.6.3 DMS and IPsec ........................................................ 48

4.7 QoS entities communication delays ................................. 49
4.7.1 Results ................................................................... 49


4.7.2 Analysis of results ............................................................................................................. 49

4.8 QoS context installation time in the QoSM in the nAR ......... 50
4.8.1 Results ................................................................... 50
4.8.2 Analysis of results .................................................... 51

4.9 FHO ......................................................................... 51
4.9.1 Testbed in Madrid ..................................................... 51
4.9.1.1 Results ................................................................ 51
4.9.1.2 Analysis of results .................................................. 52
4.9.2 Testbed in Stuttgart ................................................... 53
4.9.2.1 Results ................................................................ 53
4.9.2.2 Analysis of results .................................................. 54

4.10 Paging ..................................................................... 56
4.10.1 Testbed in Madrid ................................................... 56
4.10.1.1 Test results ......................................................... 56
4.10.1.2 Analysis of test results ........................................... 56
4.10.2 Testbed in Stuttgart ................................................. 61
4.10.2.1 Test results ......................................................... 61
4.10.2.2 Analysis of test results ........................................... 61

4.11 Inter-domain Handover ............................................... 64
4.11.1 Results ................................................................. 64
4.11.2 Analysis of results ................................................... 65

5. USER TESTS ..........................................................................................65

5.1 VoIP ........................................................................ 65
5.1.1 Description of the test ............................................... 65
5.1.2 Realization .............................................................. 65
5.1.3 Results ................................................................... 67
5.1.4 Expert evaluation: Characterization of RAT traffic pattern .. 67

5.2 Video Streaming ......................................................... 67
5.2.1 Description .............................................................. 67
5.2.2 Test Realization ........................................................ 68
5.2.3 Results ................................................................... 69

5.3 Internet radio ............................................................. 69
5.3.1 Description .............................................................. 69
5.3.2 Realization .............................................................. 70
5.3.3 Results ................................................................... 71
5.3.4 Expert evaluation: TCP during FHO .............................. 74

5.4 Internet coffee ............................................................ 74
5.4.1 Results ................................................................... 75

5.5 Quake 2 championship .................................................. 75
5.5.1 Results ................................................................... 76

6. CONCLUSIONS ......................................................................................77

7. REFERENCES ........................................................................................78

8. ANNEX: PUBLIC DEMONSTRATIONS..................................................79

8.1 Mobile Summit in Aveiro, Moby Dick Demonstration........................................................... 79


8.2 Moby Dick Summit in Stuttgart ............................................................................................. 80

8.3 Moby Dick workshop in Singapore ........................................................................................ 85

8.4 Demonstration to high school students at UC3M................................................................... 86


Abbreviations

Abbreviation - Full Name
3GPP - Third Generation Partnership Project
AAA - Authentication, Authorisation and Accounting
AAA.f - Foreign AAA server
AAA.h - Home AAA server
AAAC - Authentication, Authorisation, Accounting and Charging
AD - Administrative Domain
AF - Assured Forwarding per-hop-behaviour
AG - Access Gateway
AM - Acknowledge Mode (of RLC)
AN - Access Network
API - Application Programming Interface
AR - Access Router
AS - Access Stratum
ASM - Application Specific Module
ASN.1 - Abstract Syntax Notation
B3G - Beyond Third Generation
BACK - Binding Acknowledgement
BU - Binding Update
CN - Corresponding Node
CoA - Care-of Address
COPS - Common Open Policy Service protocol
COPS-ODRA - COPS Outsourcing Diffserv Resource Allocation
CPU - Computer Processing Unit
CVS - Concurrent Version System
DB - Database system
DFN - Deutsches Forschungsnetz
DiffServ - Differentiated Services
DNS - Domain Name System
DoCoMo
DSCP - Differentiated Services Code Point
EF - Expedited Forwarding per-hop-behaviour
ESSID - Extended Service Set Identifier
EUI - End user identifier
EUI-64 - End User Identifier
Euro6IX
FBACK - Fast handover Binding Acknowledgement
FBU - Fast handover Binding Update
FIFA
GRAAL - Generic Radio Access Adaptation Layer
HA - Home Agent
HMIPv6 - Hierarchical Mobile IP version 6
HTTP - HyperText Transfer Protocol
HTTP(S) - Secure HyperText Transfer Protocol
ICMP - Internet Control Message Protocol
IETF - Internet Engineering Task Force
IP - Internet Protocol
LAN - Local Area Network
MAC - Media-access control layer
MAC - Medium Access Control
MAQ - Mobility-AAA-QoS sequence
MIND
MIP - Mobile IP
MIPv6 - Mobile IP version 6
MN - Mobile Node
MT - Mobile Terminal
MTNM - Mobile Terminal Networking Manager
nAR - New Access Router
NAS - Network access stratum
NCP - Networking Control Panel
oAR - Old Access Router
OVERDRIVE
PA - Paging Agent
PDCP - Packet Data Convergence Protocol
PEP - Policy enforcement point


PHY - Physical layer in terms of ISO/OSI reference model
PHY - Physical Layer
QoS - Quality of Service
QoSB - Quality of Service Broker including a policy server
RAN - Radio Access Network
RedIRIS - Informatic Resources Interconnection Network
REQ - Request message
RG - Radio Gateway
RLC - Radio link control layer
RLC - Radio Link Control
RR(subnet) - Radio Router for a specific sub network
RRC - Radio Resource Control
RT Linux - Real-time Linux
SLA - Service Level Agreement
SLS - Service Level Specification
SNMP - Simple Network Management Protocol
SP - Service Provider
Ssh - Secure shell application
TTI - Transmission Time Interval
UPS - User profile subset
UPS - User Profile Specification
URP - User Registration Process
USA - United States of America
VAT
VIC
VoIP - Voice over IP
W-CDMA - Wideband Code Division Multiple Access
WCDMA - Wideband Code Division Multiple Access
WLAN - Wireless LAN


1. Introduction

This deliverable is a report on the final trials performed in the Moby Dick project. It is a direct follow-up of D0503 (see [12]), where parts of the Moby Dick architecture were evaluated and feedback was given to the different work packages to steer them towards the final implementation of the Moby Dick architecture, which is evaluated here. With the whole Moby Dick architecture now implemented, the evaluation described in this document answers the following fundamental questions: Does the Moby Dick implementation fulfil the planned goals? What is the performance of the Moby Dick architecture? The deliverable also aims to give valuable advice for future 4G networks based on the successful Moby Dick experience, and to describe the public Moby Dick demonstrations held since the delivery of D0503.

To answer these questions, the deliverable is structured as follows. First the test beds are presented, then the expert tests aimed at evaluating Moby Dick performance are described, and the results are gathered and analysed. The user tests are presented in the following section. These tests put users in front of real applications running over the Moby Dick infrastructure, letting them experience all Moby Dick functionality and asking for their feedback, thus determining whether the Moby Dick objectives are fulfilled. Finally the conclusions are gathered. As an annex, the different public Moby Dick demonstrations are described.


2. Testbeds used for the tests

2.1 Test Bed in Madrid

Madrid's test bed has already been described in other deliverables such as D0503, see [5]. Small changes in the test bed architecture were made in Madrid. A clear core network had to be defined, and all the access routers must have one access interface and one core interface. Furthermore, one machine was added to the Madrid test bed in order to physically separate the Paging Agent from the Home Agent. Also, to improve testing capabilities, a tuneable network emulator was installed at the border of the UC3M network to tailor the connection with Stuttgart. The changes in topology are:

- Termita: eth1 attached to the 2001:0720:410:1003 sub-network
- Larva10 and larva9: Ethernet connection moved to the 2001:0720:410:1004 sub-network

Three machines were added:

- Viudanegra: an AR for the whole Madrid Moby Dick network. It lies between grillo and the 2001:0720:410:1003 sub-network. It has two Ethernet interfaces, one attached to the 2001:0720:410:1003 sub-network and the other to grillo.

- Larva1: an IPv6-in-IPv4 tunnel is established between larva1 and viudanegra, because the NistNET emulator (used to tailor the connection) runs only over IPv4.

- Aranya: it takes over the function of the AAAC server, which previously ran on escarabajo. The PA is also installed there. It is connected to the 2001:0720:410:1003 sub-network.

The final test bed is:

(Network diagram not reproduced: the Madrid MOBYDICK domain with its 2001:0720:0410:100x::/64 sub-networks, WLAN and Ethernet access routers, the Home Agent and QoS Broker, the Paging Agent and AAAC server on aranya, mobile and correspondent nodes, the CISCO 7500 router grillo towards GEANT, and the IPv6-in-IPv4 tunnel between larva1 and viudanegra.)

Figure 1. Madrid testbed


2.2 Test Bed in Stuttgart

Some changes have been made in the Stuttgart testbed since the last deliverable, D0503 (see [2]). We enhanced the testbed to have two domains and to allow testing handovers between two domains (inter-domain handover). These two domains are marked with different colours in the figure below.

(Network diagram not reproduced: the two Stuttgart domains, Domain-I ipv6.rus.uni-stuttgart.de and Domain-II ipv6.ks.uni-stuttgart.de, with their access routers, Home Agents, QoS Brokers, AAAC servers, the TD-CDMA Radio Gateway, mobile terminals and correspondent nodes, and the border router kssun10 towards 6net.)

Figure 2. Stuttgart testbed

The main enhancement in the Stuttgart testbed is the TD-CDMA testbed, which does not appear in the Madrid testbed. The TD-CDMA testbed contains one access router (ksat73) and one Radio Gateway (ksat51), which connects to an antenna. The mobile terminal ksat67 has a TD-CDMA interface, which can connect to the Radio Gateway. The core network of domain I is 2001:638:202:11::/64 with the domain name ipv6.rus.uni-stuttgart.de. Besides the TD-CDMA access router there are three other access routers in this domain: ksat10 (Ethernet), and ksat48 and ksat49 (WLAN). So this domain contains three different access technologies, i.e. Ethernet, 802.11 wireless LAN and TD-CDMA. The other components in this domain are the following:

- ksat13: Home Agent
- ksat30: Paging Agent
- ksat42: QoS Broker
- ksat52: AAAC Server
- ksat46, ksat66: Corresponding Node (application server)
- ksat54, ksat58: Mobile Terminal

In order to present a public demonstration (see Annex) during the final review in another building, we extended our testbed (see Figure 2) by using a VLAN to create a core network 2001:638:202:11::/64 in the demo building. This testbed contains the following hosts:

- ksat77, ksat78: WLAN Access Router
- ksat79: Ethernet Access Router
- ksat80: TD-CDMA Access Router
- ksat74: Radio Gateway
- ksat68: TD-CDMA Mobile Terminal


(Network diagram not reproduced: the public demonstration testbed in the ETI building, with the WLAN, Ethernet and TD-CDMA access routers ksat77-ksat80, the new WCDMA Radio Gateway ksat74, and the mobile terminals ksat56 and ksat68, connected over a VLAN to the 2001:638:202:11::/64 core network.)

Figure 3: Public Demo Testbed

Domain II has the domain name ipv6.ks.uni-stuttgart.de; its core network is 2001:638:202:111::/64. We distinguish the different domains according to the first 56 bits of the IPv6 address. There is only one WLAN access router (ksat81) in this domain, but it also has its own Home Agent (ksat35), QoS Broker (ksat31) and AAAC server (ksat24). With this setting we can perform inter-domain handovers between these two domains. The Stuttgart testbed connects to the Madrid testbed through 6net. The router on the border is kssun10; it has a tunnel to 6net. Our IPv6 DNS also runs on this machine. All the machines in domain I are reachable from the Madrid site.
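To make the 56-bit rule concrete, the following short Python sketch (illustrative only, not part of the Moby Dick software) checks whether two IPv6 addresses belong to the same administrative domain by comparing their first 56 bits; the example addresses are taken from the domain I and domain II core networks mentioned above (host parts are made up).

import ipaddress

DOMAIN_PREFIX_BITS = 56  # domains are told apart by the first 56 bits of the address

def same_domain(addr_a, addr_b, prefix_bits=DOMAIN_PREFIX_BITS):
    # Return True if both IPv6 addresses share the same domain prefix
    a = int(ipaddress.IPv6Address(addr_a))
    b = int(ipaddress.IPv6Address(addr_b))
    shift = 128 - prefix_bits
    return (a >> shift) == (b >> shift)

print(same_domain("2001:638:202:11::1", "2001:638:202:11::2"))   # True: both in domain I
print(same_domain("2001:638:202:11::1", "2001:638:202:111::1"))  # False: domain I vs. domain II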

2.3 Other test beds

2.3.1 TD-CDMA integration Testbed in Eurecom

This testbed was built for the integration meeting held in Sophia Antipolis in March 2003 and was used intensively later to complete the TD-CDMA integration. It contains the full Moby Dick architecture, including mobility (Home Agent and Paging Agent), the AAAC server, and QoS (QoS Manager and QoS Broker). To demonstrate handover, it implements two access technologies: Ethernet and TD-CDMA.


(Network diagram not reproduced: the Eurecom testbed with the UMTS switch, the combined HA + AAAC + QoS Broker host (HECKEL), the W-CDMA access router (JECKEL) with its Radio Gateway (Golgoth13), the Paging Agent and IPv6 application server (Agoumi), the mobile terminal (Calvin), the MURRET access router, and the 6Wind router towards the EURECOM network.)

Figure 4: Eurecom testbed (Overall Description)

(Diagram not reproduced: the same topology annotated with the IPv6 addresses of each component on the 2001:660:382:11, :12, :14 and :21 networks.)

Figure 5: Eurecom testbed (IPv6 Addressing)

The testbed was used to integrate the components in several steps, described below:

- Phase 1: TD-CDMA traffic alone
- Phase 2: TD-CDMA + Mobility
- Phase 3: TD-CDMA + AAAC
- Phase 4: TD-CDMA + Mobility + AAAC
- Phase 5: TD-CDMA + QoS

Further integration could be completed in the Stuttgart testbed.


2.3.2 AAA Testbed in Fokus

The following configuration is used for the tests:

Figure 6. AAA Fokus testbed configuration

Notes:

- MN runs dummy_urp
- Att in the AR runs disc in client mode
- AAA home server runs disc in server mode

2.3.3 Logging & auditing testbed in ETH

All hosts are located in the same subnet to minimize network delay. The clock in each host is synchronized using NTP (Network Time Protocol).

Figure 7. Testbed for Log Transfer (Access Routers 1 and 2, each with an LLM and a Local Log, connected to the AAAC server hosting the CLM and the Audit Trail)


The databases and the mySQL Server are located in the same host as the Auditor.

2.3.4 WCDMA+ DiffServ testbed in Motorola

(Diagram not reproduced: MN, RG, two ARs, HA and CN.)

Figure 9. WCDMA+ DiffServ testbed in Motorola

Motorola performed the following tests:

- Marking software tested on MN and AR.
- TD-CDMA Non Access Stratum tested on MN and RG, with a dummy version of the TD-CDMA Access Stratum on MN and RG.
- MTNM tested on MN, with dummy versions of the FHO, Registration and Paging modules on the MN.
- MTNM tested on MN, with dummy versions of the Registration and Paging modules on the MN and the real version of FHO on MN and AR.

Figure 8. Testbed for Auditing (the AAAC server hosting the Auditor, the Audit Trail, the Archives and the Audit Report)


2.3.5 Overall test bed in PTIN

Figure 10. PTIN testbed


3. Expert Tests description and methodology

3.1 Introduction

This section provides a description of the expert evaluation procedures (including the trial scenarios involved). The description is divided into subsections, each corresponding to a specific part of the Moby Dick software.

3.2 Log Transfer Delay

In this document the Log Transfer Delay (LTD) is defined as the average time required to transfer a single log stored in a Local Log into the central Audit Trail. The Local Log is accessed by the Local Log Management Module (LLM), while the Audit Trail is accessed by the Central Log Management Module (CLM). Within Moby Dick each Access Router (AR) owns a Local Log and the AAAC Server owns the Audit Trail. Both the Local Log and the Audit Trail are implemented as mySQL databases.

3.2.1 Test Description

The Log Transfer Delay certainly depends on the network delay, database access time, hardware speed, etc.; however, there are other, more interesting parameters which may also have an impact. These parameters are:

n_local_logger = number of Local Log Management Modules (LLMs)
n_log = #Logs = number of logs in the Local Log (stored in the mySQL database)

Therefore,

LTD = f(n_local_logger, n_log)

The relation of LTD to n_local_logger and n_log reflects how well the LLM and the CLM are designed and implemented.

There are three types of event logs, i.e., Entity Availability, User Registration, and Service Authorization event logs, which differ in the log size (the size of the information contained within each event log) and in the rate of generation. The different log sizes also lead to different LTDs for each event type. However, these differences are expected to be bounded by a fixed value, because each event type typically has a fixed maximum size. Investigating LTD on this issue cannot show the quality of the implementation of the LLM and the CLM; it can only show how big those differences are. Therefore, only the LTD of Entity Availability event logs is evaluated in this document. The different rates of log generation of each event type, and the varying rates of log generation within certain event types (e.g., user registration and service authorization events occur intermittently), cause an unequal distribution of the number of event logs within each fixed time interval. The current implementation of the LLM queries the mySQL database (Local Log) periodically to obtain the event logs that fall within a fixed time interval. In this evaluation all logs are made available in advance, before being fetched by the LLMs and transferred to the CLM. In real operation, every LLM and the CLM run while event logs are generated. Therefore, the number of logs obtained by the implemented mySQL query in practice will be far lower than in this evaluation; in other words, the evaluation is carried out under a tougher condition, since the LLMs have to work harder in this "batch mode".

3.2.2 Measurement Parameters

The measurement points within the LLMs and the CLM are shown in the time diagram in Figure 11.

A Message Transfer Time (MTT) is defined as the time that has elapsed between the timepoint where LLM was about to deliver the message to the underlying communication layer (TCP) and the timepoint where the message has been received and successfully disassembled by CLM.

Logs Transfer Time (LTT) is defined as the time that has elapsed between the timepoint where LLM was about to retrieve the logs and the timepoint where CLM has stored the last log. Note here that LTD is LTT divided by the number of logs.
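For illustration, the following Python sketch shows how MTT, LTT and LTD would be derived from these timepoints; the timestamp values below are made up, and in the real tests the LLM and the CLM record the timepoints themselves.

from statistics import mean

# Hypothetical per-message records: (Message Sending Start, Message Received), in seconds
messages = [(0.000, 0.012), (0.450, 0.461), (0.910, 0.924)]

# Message Transfer Time: from handing the message to TCP in the LLM
# until it has been received and successfully disassembled by the CLM
mtt_values = [received - sent for sent, received in messages]
avg_mtt = mean(mtt_values)

# Logs Transfer Time: from the first Logs Retrieval Start in the LLM
# to the last Log Storing End in the CLM
first_logs_retrieval_start = 0.000
last_log_storing_end = 1.030
n_logs = 5000

ltt = last_log_storing_end - first_logs_retrieval_start
ltd = ltt / n_logs  # Log Transfer Delay: average time to transfer a single log

print(f"avg MTT = {avg_mtt * 1000:.1f} ms, LTT = {ltt:.3f} s, LTD = {ltd * 1000:.3f} ms/log")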


Figure 11. Measurement points in LLM and CLM

3.2.3 Measurement Process and Configuration

Two configurations have been used for the measurement of MTT and LTT. In the first configuration only one LLM is involved, while in the second configuration both LLMs are involved.

To measure MTT, the testbed is configured as follows:

- Local Log contains 5000 Entity Availability (EA) Event Logs

- The measurement points are Message Sending Start Time in LLM and Message Received Timepoint in CLM.



- The measurements are carried out for three cases:

o Only LLM-1 (LLM in Access Router 1) is involved

o Only LLM-2 (LLM in Access Router 2) is involved

o Both LLM-1 and LLM-2 are involved

In the experiment with 2 LLMs, the two LLMs need to be started as simultaneously as possible.

The following table shows the configuration and the required measurements.

Event Type = EA; #Logs per LLM = 5000; LLMs involved: LLM-1 / LLM-2 / Both

Message No. | Message Sending Start | Message Received
1           |                       |
2           |                       |
...         |                       |
5000        |                       |

To measure LTT, the testbed is configured as follows:

- Varying amount of logs in the Local Log:

o Experiments with 1 LLM use the following amount of logs: 500, 1000, 2000, 5000, 10000, 20000, 50000

o Experiments with 2 LLMs use the following amount of logs per LLM: 500, 1000, 2500, 5000, 10000, 25000

- The measurement points are the first Logs Retrieval Start in LLM and the last Log Storing End in CLM.

LTT = Last Log Storing End - First Logs Retrieval Start

LTD = LTT / total number of logs transferred

- The measurements are carried out for three cases:

o Only LLM-1 is involved

o Only LLM-2 is involved

o Both LLM-1 and LLM-2 are involved

In experiments with 2 LLMs the LLMs need to be started as simultaneously as possible. The LTD in an experiment with 2 LLMs must be seen from the CLM's point of view, where the total amount of logs received from all the LLMs is of interest. In this case the First Retrieval Start is the earliest of all the First Retrieval Starts of the LLMs and the Last Log Storing End is the latest of all the Last Log Storing Ends (see the sketch after this list).

- The same configuration is repeated two or three times.
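For the two-LLM experiments, the combined LTD seen by the CLM can be computed as in the following Python sketch (illustrative numbers only; the real timepoints come from the LLM and CLM measurement records).

# Hypothetical measurement records for a run with two LLMs
runs = {
    "LLM-1": {"first_logs_retrieval_start": 0.00, "last_log_storing_end": 41.8, "n_logs": 5000},
    "LLM-2": {"first_logs_retrieval_start": 0.35, "last_log_storing_end": 43.1, "n_logs": 5000},
}

# From the CLM's point of view: earliest retrieval start, latest storing end, all logs received
first_start = min(r["first_logs_retrieval_start"] for r in runs.values())
last_end = max(r["last_log_storing_end"] for r in runs.values())
total_logs = sum(r["n_logs"] for r in runs.values())

ltt = last_end - first_start
ltd = ltt / total_logs
print(f"combined LTT = {ltt:.2f} s, combined LTD = {ltd * 1000:.2f} ms/log")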

3.3 Auditing Speed

In this document the Auditing Speed S is defined as the number of logs processed within a unit of time. The Auditing Speed may depend on the following factors:

- n_N = number of users or entities in the audit trail

- n_log = number of logs in the audit trail

- type_log = type of event logs

Therefore,

S = f(n_N, n_log, type_log)


3.3.1 Test Description and Measurement Parameters

The measurement points within the Auditor are shown in the time diagram in Figure 12. The Audit Time shown in Figure 12 also encompasses the time to retrieve the users' or entities' identities, the time to retrieve the logs from the Audit Trail, the time to store the processed logs in the Archive, and the time to delete the processed logs from the Audit Trail. Unfortunately, the time spent querying the databases (retrieval, storing, and deletion) was not measured. The current implementation of the Auditor processes the users or entities consecutively; however, the auditing of entity availability, user registration, and service authorization events is carried out by three parallel processes.

Figure 12. Measurement points in the Auditor

The Audit Time has been evaluated with different numbers of users or entities and different numbers of logs. The Auditing Speed is the number of logs divided by the Audit Time.
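As a small illustration (with made-up figures), the Auditing Speed for a set of runs can be tabulated directly from the recorded Audit Start and End times:

# Hypothetical audit runs: (number of entities, number of logs, audit start [s], audit end [s])
audit_runs = [
    (6, 20000, 0.0, 48.0),
    (8, 40000, 0.0, 97.5),
    (10, 60000, 0.0, 150.2),
]

for n_entities, n_logs, start, end in audit_runs:
    audit_time = end - start
    speed = n_logs / audit_time  # Auditing Speed S in logs per second
    print(f"{n_entities:>2} entities, {n_logs:>6} logs: S = {speed:.1f} logs/s")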

3.3.2 Measurement Process and Configuration

To measure the Auditing Speed of entity availability events, the testbed is configured as follows:

- Varying amount of logs in the Audit Trail: 20000, 40000, and 60000 logs.

- Varying number of entities for the same total amount of logs: 6, 8, and 10 entities.



- The measurement points are the Audit Start Time and the Audit End Time.

To measure the Auditing Speed of user registration and service authorization events, the testbed is configured as follows:

- Varying amount of logs in the Audit Trail: 2000, 5000, 10000, and 20000 logs.

- Varying number of user identities for the same total amount of logs: 10, 20, and 30 identities.

- The measurement points are the Audit Start Time and the Audit End Time.

3.4 Charging performance

3.4.1 Test description

Charging data has been generated and transferred to the charging module. The test consists in measuring how much time it takes to process the data.

3.4.2 Measurement parameters

- Measurement of the charging processing time, i.e. how long it takes to calculate the charge.
- Measurement of the time taken by the following processes, as a function of two parameters (number of sessions and number of rows):
  o consistency check
  o charge calculations
  o movement of the accounting data to the accounting data warehouse

3.4.3 Test realization, measurement process, interactions required.

- Fill the accounting database for one user and one session with n rows of accounting data, i.e. 1 START row, 1 STOP row and (n-2) INTERIM rows. Database: Accounting; Table: accountingrecords.

- Fill the flow table for each row of the above-mentioned session; i.e. for each row one can have m different flows (described by DSCPs). The charging code supports up to 9 different flows (DSCPs), describing the 9 different services (e.g. S1, S2, …). Database: Accounting; Table: flows.

Remarks:

- First of all it makes sense to have only one user with one session and just to vary the parameters n and m. Typically n is somewhere between 2 (START and STOP rows are mandatory) and 15.

- Concerning the parameter m: at the moment there is only one flow type, i.e. the DSCP always equals 0, hence m = 1.

- At a second stage, one could increase the number of sessions per user.

- At a later stage, one could have several users with several sessions.

Finally, get the results from the charging Statistics database.
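The sketch below illustrates how such test data can be laid out for one user and one session; the field names are placeholders (the real accountingrecords and flows tables have their own schema), but it shows the roles of the parameters n and m.

def build_session(n_rows, m_flows, user="user1", session_id=1):
    # Build n accounting rows (1 START, n-2 INTERIM, 1 STOP) and m flow entries per row
    assert n_rows >= 2, "START and STOP rows are mandatory"
    rows, flows = [], []
    for i in range(n_rows):
        rtype = "START" if i == 0 else "STOP" if i == n_rows - 1 else "INTERIM"
        rows.append({"user": user, "session": session_id, "seq": i, "type": rtype})
        for dscp in range(m_flows):  # currently only DSCP 0 is used, hence m = 1
            flows.append({"session": session_id, "seq": i, "dscp": dscp, "octets": 1000 * (i + 1)})
    return rows, flows

rows, flows = build_session(n_rows=15, m_flows=1)
print(len(rows), "accounting rows,", len(flows), "flow entries")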

3.5 AAA Scalability tests

3.5.1 Tests description and motivation

The AAA registration process and authentication mechanism offer an enhanced security model, but as always there is a trade-off between this extra feature and performance. Thus the AAA registration process should be regarded as overhead, its overall consumption of resources (bandwidth, time, CPU time) adding up and contributing to the latency of the whole system.


A parameter that gives an idea of this newly introduced latency is the time it takes an AAA registration to complete; this should be measured for each available access network. In the following these AAA registration round trip times are called t_reg_{wlan,eth,wcdma}. They can be compared against the ping round trip time, again for each type of access network (rtt_{wlan,eth,wcdma}). Now, supposing that t_reg_{wlan,eth,wcdma} was measured in the home network (Madrid, say), the same kind of value can be measured for a roaming MN, t_reg_roaming{wlan,eth,wcdma}, and again compared against the rtt. The overhead, which should be small compared to t_reg_{wlan,eth,wcdma}, is given by the AAA routing/forwarding between the AAA servers in the two different domains. Yet another kind of test is to see how well the attendant behaves under high load. For that, one should check whether t_reg_{wlan,eth,wcdma} is affected by a configuration in which one mobile node (or several) is attached to the access router and tries to register at the same time and very often; for this, set a very small lifetime offered by the AAA.h server (it could even be 0). The same parameter, t_reg_{wlan,eth,wcdma}, is measured at any of the mobile nodes. The same test can be performed with several access routers and with mobile nodes from the same domain connected to them and registering with a very small lifetime. This test gives an idea of how much load an AAA.h server can take.

3.5.2 Variables to measure

The following parameters are to be measured:

- t_reg_{wlan,eth,td-cdma}
- rtt_{wlan,eth,td-cdma}
- t_reg_roaming{wlan,eth,td-cdma}

These parameters are measured in configurations involving one or more attendants and one or more mobile nodes.

3.5.3 Measurement process, test realization

The tests are run in the testbeds available at the FhG FOKUS premises and at the official trial sites. Those test beds are described in section 2. One AAA server and several AAA clients offering WLAN, TD-CDMA and Ethernet access are installed in these testbeds. In these tests, MNs were connected to the AR (AAA client) using either WLAN, Ethernet or TD-CDMA access. The MN runs dummy_urp, the AR runs disc in client mode, and AAA.h runs disc in server mode. The disc client (AAAC client) is configured to communicate with the disc server (AAAC.h = AAAC.f). The disc server has the AAA routing defined. The content of the mn_config file does not matter (the profile of the registering user is irrelevant), but its size (i.e. the number of users in the database) must be taken into account. For non-roaming tests AAAC.h = AAAC.f; if roaming, configure the AAA routing in AAAC.h and AAAC.f. Launch the disc server, then the disc client, then register a user (no matter what his profile is) using dummy_urp and test_fifo. There is no need for Mobile IPv6 or the QoS system. For measuring t_reg_{wlan,eth,wcdma} the following procedure is employed: a tcpdump runs on the AR with a filter which looks for URP ARR and URP ARA messages on the interface on which the mobile node connects to the AR, recording the timestamps of these packets. The filter can be based on the following information: request: src addr: mn_link_local (on the interface towards the AR), dst addr: ar_link_local, src port: 2301, dst port: 2300; reply: the other way around. The timestamps of a request and a reply are then processed by a script which produces t_reg_{wlan,eth,wcdma} by subtracting the request timestamp from the reply timestamp.
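A possible shape for that post-processing script is sketched below in Python. It assumes tcpdump was run with -tt (epoch timestamps at the start of each line) and that requests and replies can be told apart by the 2301/2300 port pair appearing in the dump line; the exact output format depends on the tcpdump version, so the parsing is only indicative.

import re
import sys

def registration_times(dump_lines):
    # Pair URP ARR (request) and ARA (reply) timestamps and return t_reg values in seconds
    t_reg, pending_request = [], None
    for line in dump_lines:
        match = re.match(r"(\d+\.\d+)\s", line)
        if not match:
            continue
        ts = float(match.group(1))
        if ".2301 > " in line and ".2300:" in line:        # ARR: MN (port 2301) -> AR (port 2300)
            pending_request = ts
        elif ".2300 > " in line and ".2301:" in line and pending_request is not None:
            t_reg.append(ts - pending_request)             # ARA: AR -> MN completes the registration
            pending_request = None
    return t_reg

if __name__ == "__main__":
    for t in registration_times(sys.stdin):                # e.g. tcpdump -tt ... | python this_script.py
        print(f"t_reg = {t * 1000:.1f} ms")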


The rtt_{wlan,eth,wcdma} can be obtained by ping6-ing the AAA.h server from the mobile node. As suggested above, another way to measure t_reg (both on WLAN and Ethernet) is the following: an MN keeps registering while the lifetime offered by the AAAC.h server (changed for the registering user in the mn_config file on the AAAC.h server) is 0, thus trying to (re-)register as fast as it can over a certain period of time. The number of such registrations per second gives an idea of the latency of the whole process under heavy stress.

3.6 DSCP Marking Software (DMS)

3.6.1 Filter loading

3.6.1.1 Test description

A basic filter is defined and loaded by the DMS, and the time needed to perform this operation is measured. The goal of this test is to ensure that a filter update in the DMS is fast enough not to disturb normal AR or MT operation. In fact, a problem could occur if a filter is modified during a communication, especially in the AR: if the process takes too much time, some packets may not be marked correctly, or may even not be marked at all.

3.6.1.2 Measurement parameter

The parameter measured during this test is the time needed to perform the entire filter loading process, i.e. the time between the detection of new data on the /dev/dscp character device and the end of the function in charge of reading this data and modifying the filters table in the kernel. The result is in microseconds.

3.6.1.3 Test realization

A file named ‘rules_MN_WWW’ is edited and contains the following filter definition:

Filter 1
Description TCPFilter
TCPDestPort 80
TCPCtrlSyn 1
CoS S2

Then, the contents of this file are written to the /dev/dscp character device with the following command line:

# cat rules_MN_WWW > /dev/dscp

Finally, the result is read from the /var/log/message file, because the module in charge of the /dev/dscp device, named 'dscpdev.o', prints all its results in this file. The command line is:

# cat /var/log/message

3.6.2 DSCP dumping from AAA

3.6.2.1 Test description

The time needed to change the DSCP associated with a marking type may be critical. When flows are running and their DSCP must change because the MT enters a new domain, a slow update means that packets may carry a marking type that is not valid in the new access network. The test therefore measures this time during the user registration phase. The MT AAA-registration module passes seven DSCP codes to the DMS, one by one.

3.6.2.2 Measurement parameter

The parameter measured during this test is the time needed to perform each marking type update. The result is in microseconds.


3.6.2.3 Test realization

The MT AAA-registration module writes the following lines, one by one, to the /dev/dscp character device:

SIG 12
S1 12
S2 12
S3 12
S4 0
S5 12
S6 12

Then, the results are read in the /var/log/message file.

3.6.3 DMS and IPsec

3.6.3.1 Test description

In the very first tests of the DMS, the marking of IPsec packets could not be tested; this test was done after the second release of the software. The goal was only to ensure that a packet is correctly marked despite its modification by IPsec. In fact, we just verified that the marking function is called before the IPsec code, so that the DMS cannot try to modify the DSCP value of a packet already processed by IPsec. The test simply consists in marking a packet before the call to the IPsec code and verifying that the packet stays correctly marked after this call.

3.6.3.2 Test realization

To perform this test, we chose a packet that we were sure IPsec would process. Then we defined a filter to mark this packet with a specific DSCP value (0x10) and verified, with Ethereal, that the packet was transmitted on the output interface with the correct DSCP value.

3.7 QoS entities communication delays

3.7.1 Test description and purpose: With this test we intend to measure the communication delays between the different QoS entities: QoSB-QoSM, QoSB-RG, and A4C server-QoSB communication. To achieve this, the delays between each client-to-server message and its respective response were measured.

3.7.2 Variables to measure Measurement parameters: For QoSB-QoSM communication:

- Client-Open → Client-Accept
- Configuration Request → Configuration Decision
- Config Request → Config Decision (Deny)
- Config Request → Config Decision (Core Network Accept)
- Config Request → Config Decision (Access Network Accept)
- Keep-Alive → Keep-Alive
- FHO Report → FHO Decision

For A4C.f server QoSB communication:

- Client-Open → Client-Accept
- QoS Profile Definitions
- NVUP Authorization

For QoSB-RG:

- Message to RG
- Response to AR

3.7.3 Measurement process, test realization In order to measure the response times of the QoS Broker to the AR solicitations, several measurements were made in the PTIN test-bed described in section 2.3.5, running the QoS Broker on Kyle and the QoSManager on Kenny and Cartma. The tool used in these tests was Ethereal; by placing it in the core network we were able to calculate the time needed by the QoS Broker to process responses to the several clients. For some other values (cases where no response message is issued), profiling tools such as gprof (the GNU Profiler) were used. The profiling tools enabled us to measure the time needed by the processing functions in the QoS Broker to process a QoS Profile Definition and an NVUP Authorization issued by the A4C Server. They also helped us to track the time needed to send messages to the RG.

3.8 QoS context installation time in the QoSM in the nAR

3.8.1 Tests purpose and description We want to measure how long it takes the QoSM of the nAR, during a FHO, to install the filters (QoS context) that a user (CoA) had installed in the oAR (with the CoA of the filters changed to the nCoA, an operation performed by the QoSB). A MN registers and sends some traffic within its profile in order to install some filters (QoS context) in the AR it is attached to. The user then does a FHO. The QoS system transfers the QoS context from the oAR to the nAR (changing the CoA to the nCoA), and we measure how long it takes the nAR to install this context. The total QoS FHO time is the QoS context installation time measured in this test plus the time taken to transfer the QoS profile from the oAR to the nAR via the QoSB; that delay is measured in section 3.7 under the term FHO.

3.8.2 Variables to measure To measure how long it takes the AR to install the QoS context, we measure the time elapsed from the moment the QoSM receives the QoS context from the QoSB (DEC COPS message) until the QoSM reports back to the QoSB that it has installed all the filters (RPT COPS message).

3.8.3 Measurement process, test realization To obtain more accurate results, we do the test twice: the first time we install and transfer several filters, the second time fewer filters. We employ the Madrid test bed and run AAA, QoS and FHO; paging, charging, metering and auditing are not needed, nor is the DiffServ marking software. Ping6 is employed to generate DSCP-marked traffic. The MN is attached and AAA-registered in cucaracha and installs one filter there. Then it moves to termita, where it installs 5 new filters; for this it is enough to ping (with authorized DSCPs) several destinations with several DSCPs. The first echo request triggers the QoS authorization process and the QoSM installs the filter (since the user is registered). Then another FHO is done. For this second FHO we measure the time elapsed between the DEC message and the RPT message (as described in 3.8.2) using Ethereal. This time corresponds to the time needed to install the 5 new filters and to do the necessary processing, including checking which of the transferred filters are already installed (which is why we initially install one filter in cucaracha). We then restart the software (the QoSMs) and repeat the test, this time installing only one new filter in termita.
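For illustration only (this is not the tool used in the trials, where ping6 with authorized DSCPs was employed), the following sketch shows one way to generate DSCP-marked IPv6 traffic so that the QoSM in the AR sees new (source, destination, DSCP) flows; the destination addresses, port and DSCP value are placeholders:

import socket

# Minimal sketch (illustration only; the trials used ping6 with authorized DSCPs):
# send a few IPv6 UDP packets whose DSCP is set through the traffic-class socket
# option, so that the QoSM in the AR sees new (source, destination, DSCP) flows.
# Destination addresses, port and DSCP value below are placeholders.
IPV6_TCLASS = getattr(socket, "IPV6_TCLASS", 67)   # option number 67 on Linux
DSCP = 0x02                                        # example DSCP assumed to be authorized

sock = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IPV6, IPV6_TCLASS, DSCP << 2)   # DSCP sits in the upper 6 bits
for destination in ("2001:720:410:1003::1", "2001:720:410:1003::2"):   # placeholder addresses
    sock.sendto(b"probe", (destination, 9999))
sock.close()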

3.9 FHO

3.9.1 Test description For Fast Handover, measurements should be done to compare the following:

1) Fast Handover with dummy QoS and dummy AAA module 2) Fast Handover with real QoS and real AAA module


For those configurations, the following measurements should be made.

- Overall latency during handover (in ms)
- Data loss during handover (number of packets)

Also, these measures should be obtained (not for all possible configurations):

- Specific delay of the QoS manager - QoS broker communication (ms)

The measurements should be done both for intra-technology (WLAN-WLAN) and inter-technology (Ethernet-WLAN, WLAN-TD-CDMA, Ethernet-TD-CDMA) handovers. In order to check the bicasting process, before testing FHO latencies in tests involving only one trial site and in tests involving the two trial sites, a test scenario was deployed in Madrid. In this scenario, a delay was added – by means of the NistNET emulator – between the MN and a CN. This way we can check whether bicasting is working properly, because we delay the reception of the BU sent by the MN. The scenario is shown in the next figure:

Figure 13. NistNET trial scenario

The NistNET tool emulates different network conditions on IPv4 connections, so in order to use it in our tests an IPv6-in-IPv4 tunnel was set up between a CN (escorpion) and a router (viudanegra). Routing was also changed so that viudanegra became the default router of cucaracha and termita. Therefore, every packet between the CN and the MN passed through viudanegra (and also through the established tunnel). NistNET was installed on pulga.

3.9.2 Measurement parameters To measure the overall latency during handover we measure the delay between message 2, Router Solicitation for Proxy (sent from the MN to the oAR), and message 9, Handover Execute ACK (from the oAR to the MN). To measure the specific delay of the QoS manager - QoS broker communication (ms) we measure the delay between message A (from the oAR to the QoSB) and message C (from the QoSB to the nAR). This delay is also measured in section 4.7 under the term FHO. Note that the time needed to install the QoS filters must be added to this delay; see section 3.8 for more details. See the signalling flow section of [1] for details of these messages.


3.10 Paging

3.10.1 Test description An important performance characteristic of a paging system is the signalling cost introduced, as well as saved, by a network supporting paging, compared to the signalling cost of a system without dormant mode and paging support. The signalling cost characteristics of the proposed paging architecture and protocol have been evaluated analytically, taking various network conditions into account. A further important characteristic, which is measured and evaluated in this test, is the additional delay in routing initial user-data packets that is introduced by the paging system. Additional delay is introduced by routing a paging trigger packet (the initial user-data packet) through a Paging Agent instead of routing it directly towards the addressed mobile terminal. This difference in routing delay is negligible (with the given network topology, having the Paging Agent in the mobile node's visited domain) compared to the delay introduced by buffering the data packet at the Paging Agent until the mobile terminal has been re-activated and the associated routing states have been re-established under control of the Paging Agent. Delay characteristics at various interfaces/nodes are determined by means of the tests described in the subsequent sub-sections.

3.10.2 Measurement parameters The following paragraphs describe the measurements to be performed at a particular network interface or component:

1) Measure the overall paging delay when a CN sends a packet to a dormant MN, i.e. measure the time elapsed (ms) until the echo reply (assuming the ping application is used) arrives at the CN. This time delay includes buffering of the initial data packet at the PA, the paging process, re-registration of the MN with the network, re-establishment of routing info, forwarding of buffered packets and finally sending the reply from the MN. Actually, this value covers not only the additional paging delay, but also the total response time. For comparison, the reply time of an "active" MN can be measured (but this implies a different path!). Measure whether or not any packets are lost.

2) Measure individual delays (all in ms):

a) delay between arrival of initial packet at PA to forwarding of first buffered packet
b) delay between MN receiving paging request and sending of MIPv6 Binding Update and de-registration (dormant request with lifetime 0)
c) individual direct delays for comparison (HA → PA, PA → MN, MN → HA, MN → CN)

The intention of the measurements described above is to be able to check the difference between the network delays (c) and the delays specifically caused by the paging process (a, b). Repeat the measurements described above for the following scenarios and investigate differences in delay characteristics:

- Enter dormant mode attached to one access technology, initiate re-activation via CN while attached to same technology.

- Enter dormant mode attached to one access technology, initiate re-activation via CN while attached (moved) to a different access technology.

- Perform measurements for different network load conditions (optional).

The first two bullet points should be done with some, but not necessarily all, combinations of technologies.

3.10.3 Test realization These tests were performed both in Madrid and in Stuttgart, but the procedure was different. In Madrid, when the MN awakes, there is no QoS context for this MN (which is the normal situation). Thus the QoS


context (QoS negotiation) must be established for each new flow (a flow being a unique combination of source address, destination address and DSCP). In Stuttgart, this QoS context is artificially established before the MN awakes. This allows the effect of the QoS negotiation while awaking to be seen.

3.11 Inter-domain Handover

3.11.1 Test description For Inter-domain Handover, measurements should be done to compare Inter-domain Handover and Intra-domain Handover (Fast Handover). The real QoS and AAA modules should be used in the measurement. For this configuration the data loss during the handover should be measured. The measurement should be done only for handover between two WLAN access routers.

3.11.2 Measurement parameters To measure the Packet loss a ping is done from the MN to CN.

3.11.3 Measurement process After registration the MN performs a handover first from ksat48 to ksat49 and then to ksat81; during the handovers a ping is run on the MN.

4. Expert tests realization, results, analysis and assessment

4.1 Introduction In this section we present some results obtained from the realisation of the tests described in the previous section.

4.2 Log Transfer Delay

4.2.1 Message Transfer Time Figure 14 depicts the Message Transfer Time (MTT) as a function of the Sending Start Time in the case where only LLM-1 was sending the 5000 logs. The number of long lines in the figure corresponds to the number of retrievals needed for the whole 5000 logs. There were 26 retrievals, with nearly 200 logs obtained in each retrieval except the last one. This is because two entities (AAAC Client and QoS Manager) generated Availability Event Logs approximately every five minutes and the query used in the mySQL statement retrieved logs that fell within an interval of 500 minutes.

Each line shows that the MTT kept increasing as long as the LLM kept sending logs. At the beginning of each log retrieval phase the first MTT dropped again, but to a value greater than in the previous phase, because during the short log retrieval time TCP was unable to empty its buffer. This situation repeated for approximately 1.5 seconds before a "stable" state was reached, in which the TCP buffer was probably full all the time.

Figure 15 shows a similar behavior when using LLM-2 (the same LLM, but in Access Router 2), except that the "stable" state was not reached by the end of the transfer of the 5000 logs. This is reasonable when looking at the value of each MTT, which is lower than in the previous case.

Figure 16 and Figure 17 present the MTT of messages from LLM-1 and LLM-2 in the case where both LLMs were involved. After one second of log transfer the MTTs of both LLMs lay in a range between 0.6 and 0.9 seconds. Obviously the MTT was larger than in the first two cases, and both LLMs needed nearly 4 to 4.5 seconds to transfer their 5000 logs. This was twice as long as in the first two cases, but the total number of logs was also twice as large. It is worth noting that although from an LLM's viewpoint the MTT and the duration of the log transfer (LTT) are larger, from a CLM's point of view the LTT is not worse.


Figure 14. Transfer Time of messages from LLM-1 where only LLM-1 is involved.

Figure 15. Transfer Time of messages from LLM-2 where only LLM-2 is involved.


Figure 16. MTT of messages from LLM-1 where 2 LLMs are involved.

Figure 17. MTT of messages from LLM-2 where 2 LLMs are involved.


The sections of the diagram in Figure 16 and Figure 17 between second 2.32 and 2.38 are shown together in Figure 18. As long as the LLMs kept transferring the logs, the MTT of both LLMs would continuously increase.

Figure 18. MTT of messages from LLM-1 and LLM-2 between second 2.32 and 2.38.

Although it is interesting to know how the MTT behaves in each of the cases, the evaluation of the LTT gives more information on the implementation performance of the LLM and the CLM.

4.2.2 Logs Transfer Time The following tables show the measurement results. The time value is the number of seconds since the Epoch (00:00:00 UTC, January 1, 1970).
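For reference, the LTT and LTD columns in the tables are consistent with the two recorded timestamps: the LTT is the difference between the Last Log Stored and the Retrieval Start times, and the LTD is the LTT divided by the number of logs. A minimal sketch of this computation, using the first row of Table 1 as the example:

# Sketch: how the LTT and LTD values in the tables can be reproduced from the
# recorded timestamps (example numbers taken from the first row of Table 1).
retrieval_start = 1069724251.002910   # Retrieval Start [sec since the Epoch]
last_log_stored = 1069724251.254946   # Last Log Stored [sec since the Epoch]
number_of_logs = 500

ltt = last_log_stored - retrieval_start      # Logs Transfer Time: 0.252036 s
ltd = ltt / number_of_logs                   # per-log transfer delay: ~0.000504 s
print("LTT = %.6f s, LTD = %.6f s" % (ltt, ltd))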

Table 1. LTD where only LLM-1 is involved.

Involved LLMs = 1 (LLM-1) Viewpoint: LLM-1

#Logs Retrieval Start [sec] Last Log Stored [sec] LTT [sec] LTD [sec]

500 1069724251.002910 1069724251.254946 0.252036 0.000504

500 1069724369.002333 1069724369.254170 0.251837 0.000504

500 1069724399.003081 1069724399.256558 0.253477 0.000507

1000 1069724671.003607 1069724671.495345 0.491738 0.000492

1000 1069724699.004695 1069724699.496490 0.491795 0.000492

1000 1069724739.001786 1069724739.494080 0.492294 0.000492

2000 1069724985.002864 1069724985.973565 0.970701 0.000485

2000 1069725015.003587 1069725015.972682 0.969095 0.000485

2000 1069725049.003649 1069725049.974169 0.970520 0.000485

5000 1069725307.002649 1069725309.410903 2.408254 0.000482

5000 1069725337.003402 1069725339.418564 2.415162 0.000483


5000 1069725367.002194 1069725369.409739 2.407545 0.000482

10000 1069725595.002406 1069725599.834603 4.832197 0.000483

10000 1069725625.005099 1069725629.833880 4.828781 0.000483

10000 1069725671.003129 1069725675.784336 4.781207 0.000478

20000 1069725899.005288 1069725908.714195 9.708907 0.000485

20000 1069725941.003992 1069725950.722350 9.718358 0.000486

20000 1069725979.003366 1069725988.679019 9.675653 0.000484

50000 1069726325.003003 1069726349.246906 24.243903 0.000485

50000 1069726381.003231 1069726405.191189 24.187958 0.000484

50000 1069726435.003800 1069726459.383961 24.380161 0.000488

LTDavg = 0.000488

Table 2. LTD where only LLM-2 is involved.

Involved LLMs = 1 (LLM-2) Viewpoint: LLM-2

#Logs Retrieval Start [sec] Last Log Stored [sec] LTT [sec] LTD [sec]

500 1069723915.020366 1069723915.276614 0.256248 0.000512

500 1069724059.023657 1069724059.278982 0.255325 0.000511

500 1069724087.014298 1069724087.271644 0.257346 0.000515

1000 1069724505.013870 1069724505.514406 0.500536 0.000501

1000 1069724553.024957 1069724553.522220 0.497263 0.000497

1000 1069724589.015777 1069724589.513375 0.497598 0.000498

2000 1069724847.021682 1069724847.998343 0.976661 0.000488

2000 1069724881.022500 1069724882.141814 1.119314 0.000560

2000 1069724913.013188 1069724913.992419 0.979231 0.000490

5000 1069725159.018811 1069725161.435491 2.416680 0.000483

5000 1069725191.019543 1069725193.444896 2.425353 0.000485

5000 1069725231.020453 1069725233.456355 2.435902 0.000487

10000 1069725453.025528 1069725457.888324 4.862796 0.000486

10000 1069725487.016313 1069725491.867644 4.851331 0.000485

10000 1069725531.017309 1069725535.859280 4.841971 0.000484

20000 1069725757.012476 1069725766.710607 9.698131 0.000485

20000 1069725797.023396 1069725806.729352 9.705956 0.000485

20000 1069725835.014261 1069725844.861650 9.847389 0.000492

50000 1069726059.019377 1069726083.669442 24.650065 0.000493

50000 1069726115.020657 1069726139.276752 24.256095 0.000485

50000 1069726177.022080 1069726201.261439 24.239359 0.000485

LTDavg = 0.000496

Table 3. LLM-1’s view of LTD where both LLM-1 and LLM-2 are involved.

Involved LLMs = 2 (LLM-1 and LLM-2) Viewpoint: LLM-1


#Logs Retrieval Start [sec] Last Log Stored [sec] LTT [sec] LTD [sec]

500 1069747815.003111 1069747815.457552 0.454441 0.000909

500 1069748047.004514 1069748047.464777 0.460263 0.000921

1000 1069748679.004019 1069748679.852027 0.848008 0.000848

1000 1069748789.002811 1069748789.809630 0.806819 0.000807

1000 1069748851.003950 1069748851.956201 0.952251 0.000952

2500 1069748993.003134 1069748995.277950 2.274816 0.000910

2500 1069749203.004370 1069749205.278851 2.274481 0.000910

2500 1069749253.003639 1069749255.295361 2.291722 0.000917

5000 1069749407.002732 1069749411.727801 4.725069 0.000945

5000 1069749471.003514 1069749473.825093 2.821579 0.000564

5000 1069749535.004308 1069749539.786188 4.781880 0.000956

10000 1069749641.003784 1069749650.585746 9.581962 0.000958

10000 1069749705.002622 1069749714.517247 9.514625 0.000951

10000 1069749763.002488 1069749772.557991 9.555503 0.000956

25000 1069750189.003941 1069750213.027421 24.023480 0.000961

25000 1069750265.002701 1069750289.148210 24.145509 0.000966

25000 1069750333.002784 1069750356.997857 23.995073 0.000960

LTDavg = 0.000905

Table 4. LLM-2’s view of LTD where both LLM-1 and LLM-2 are involved.

Involved LLMs = 2 (LLM-1 and LLM-2) Viewpoint: LLM-2

#Logs Retrieval Start [sec] Last Log Stored [sec] LTT [sec] LTD [sec]

500 1069747815.017171 1069747815.498640 0.481469 0.000963

500 1069748047.022479 1069748047.488888 0.466409 0.000933

1000 1069748679.016871 1069748679.955910 0.939039 0.000939

1000 1069748789.019379 1069748789.981726 0.962347 0.000962

1000 1069748851.020791 1069748851.953104 0.932313 0.000932

2500 1069748993.024030 1069748995.353110 2.329080 0.000932

2500 1069749203.018818 1069749205.411743 2.392925 0.000957

2500 1069749253.019957 1069749255.347914 2.327957 0.000931

5000 1069749407.023449 1069749411.765081 4.741632 0.000948

5000 1069749473.024959 1069749475.803932 2.778973 0.000556

5000 1069749535.016377 1069749539.441761 4.425384 0.000885

10000 1069749641.018774 1069749650.612392 9.593618 0.000959

10000 1069749705.020243 1069749714.587306 9.567063 0.000957

10000 1069749763.021558 1069749772.362947 9.341389 0.000934

25000 1069750189.021248 1069750212.727297 23.706049 0.000948

25000 1069750265.022978 1069750288.907783 23.884805 0.000955

25000 1069750333.024516 1069750356.914274 23.889758 0.000956


LTDavg = 0.000920

Table 5. CLM's view of LTD where both LLM-1 and LLM-2 are involved.

Involved LLMs = 2 (LLM-1 and LLM-2) Viewpoint: CLM

#Logs Retrieval Start [sec] Last Log Stored [sec] LTT [sec] LTD [sec]

1000 1069747815.003111 1069747815.498640 0.495529 0.000496

1000 1069748047.004514 1069748047.488888 0.484374 0.000484

2000 1069748679.004019 1069748679.955910 0.951891 0.000476

2000 1069748789.002811 1069748789.981726 0.978915 0.000489

2000 1069748851.003950 1069748851.956201 0.952251 0.000476

5000 1069748993.003134 1069748995.353110 2.349976 0.000470

5000 1069749203.004370 1069749205.411743 2.407373 0.000481

5000 1069749253.003639 1069749255.347914 2.344275 0.000469

10000 1069749407.002732 1069749411.765081 4.762349 0.000476

10000 1069749471.003514 1069749475.803932 4.800418 0.000480

10000 1069749535.004308 1069749539.786188 4.781880 0.000478

20000 1069749641.003784 1069749650.612392 9.608608 0.000480

20000 1069749705.002622 1069749714.587306 9.584684 0.000479

20000 1069749763.002488 1069749772.557991 9.555503 0.000478

50000 1069750189.003941 1069750213.027421 24.023480 0.000480

50000 1069750265.002701 1069750289.148210 24.145509 0.000483

50000 1069750333.002784 1069750356.997857 23.995073 0.000480

LTDavg = 0.000480

Note that in the abovementioned CLM’s view, the number of logs is the sum of logs sent by LLM-1 and LLM-2.

The following two figures present and compare the LTDs from the tables above. The LTDs are shown as a function of the number of logs.

Figure 19 compares the LTDs obtained from the experiments with 1 LLM (1-LLM-System) and the experiments with 2 LLMs (2-LLMs-System) for the same number of logs. The figure shows that the implemented LLM and CLM can transfer the same amount of logs a bit faster in a 2-LLMs-System than in a 1-LLM-System. This is reasonable because both transfers in a 2-LLMs-System (from LLM-1 and LLM-2) were running in parallel and the CLM was able to receive both data streams simultaneously. The figure also shows that there is only a small deviation of the LTD for different numbers of logs, ranging from 1,000 to 50,000 logs.


Figure 19. Comparison of LTDs between 1-LLM-System and 2-LLMs-System.

Figure 20 depicts the LTD in a 2-LLMs-System from different viewpoints, i.e., from the viewpoint of each LLM and of the CLM. Each LLM experienced a larger LTD in a 2-LLMs-System than in a 1-LLM-System (cf. Figure 19), but seen as a whole, i.e., from the CLM's viewpoint, the resulting LTD is better.


Figure 20. LTD in a 2-LLMs-System from the viewpoint of the LLMs and the CLM.

4.2.3 Conclusions The evaluation of the measurement results has shown that the implemented LLM and CLM are scalable with respect to the number of logs and the number of LLMs (i.e. the number of Access Routers).

4.3 Auditing Speed In this section the results of the measurements are presented. Each subsection deals with one of the three auditing tasks, i.e., auditing of entity availability, user registration, and service authorization events.

4.3.1 Entity Availability The measurement results are presented in the following table.

Table 6. Entity Availability Auditing Speed for different number of logs and entities.

#Logs #Entities Audit Start [sec] Audit End [sec] Audit Time [sec] Auditing Speed [logs/sec]

20000 6 1070327137 1070327143 6 3333.33

20000 6 1070328061 1070328067 6 3333.33

40000 6 1070327014 1070327030 16 2500.00

40000 6 1070327935 1070327951 16 2500.00

60000 6 1070326860 1070326891 31 1935.48

60000 6 1070327748 1070327779 31 1935.48

20000 8 1070327083 1070327089 6 3333.33


20000 8 1070328023 1070328029 6 3333.33

40000 8 1070326950 1070326966 16 2500.00

40000 8 1070327883 1070327899 16 2500.00

60000 8 1070326772 1070326803 31 1935.48

60000 8 1070327678 1070327709 31 1935.48

20000 10 1070325514 1070325520 6 3333.33

20000 10 1070327984 1070327990 6 3333.33

40000 10 1070325647 1070325663 16 2500.00

40000 10 1070327829 1070327846 17 2352.94

60000 10 1070325292 1070325321 29 2068.97

60000 10 1070327548 1070327577 29 2068.97

Figure 21 depicts the Audit Time for different numbers of entities and logs. The diagram shows that the Audit Time increases with the amount of logs, which is reasonable, but Figure 22 also clearly shows an unexpected decrease in the auditing speed. The deceleration seems, however, to become smaller the greater the amount of logs. On the other hand, the auditing speed does not seem to depend on the number of entities within the audit trail, which reflects the fact that the logs of the entities are not processed concurrently.

Figure 21. Entity Availability Audit Time for different number of logs and entities.


Figure 22. Entity Availability Auditing Speed for different number of logs and entities.

4.3.2 User Registration The measurement results are presented in the following table.


#Logs #Users Audit Start [sec] Audit End [sec] Audit Time [sec] Auditing Speed [logs/sec]

2000 10 1070317521 1070317529 8 250.00
2000 10 1070317614 1070317622 8 250.00
2000 10 1070317657 1070317664 7 285.71
5000 10 1070316893 1070316932 39 128.21
5000 10 1070316998 1070317038 40 125.00
5000 10 1070317083 1070317122 39 128.21
10000 10 1070314765 1070314901 136 73.53
10000 10 1070314975 1070315108 133 75.19
10000 10 1070315161 1070315295 134 74.63
20000 10 1070310191 1070310682 491 40.73
20000 10 1070310826 1070311320 494 40.49
20000 10 1070312139 1070312632 493 40.57
2000 20 1070317385 1070317394 9 222.22
2000 20 1070317429 1070317438 9 222.22
2000 20 1070317464 1070317472 8 250.00
5000 20 1070316493 1070316535 42 119.05
5000 20 1070316682 1070316723 41 121.95
5000 20 1070316810 1070316851 41 121.95
10000 20 1070314021 1070314164 143 69.93
10000 20 1070314302 1070314444 142 70.42
10000 20 1070314502 1070314644 142 70.42
20000 20 1070308917 1070309428 511 39.14
20000 20 1070309606 1070310113 507 39.45
20000 20 1070311561 1070312069 508 39.37
2000 30 1070317222 1070317230 8 250.00
2000 30 1070317278 1070317286 8 250.00
2000 30 1070317316 1070317326 10 200.00
5000 30 1070316186 1070316228 42 119.05
5000 30 1070316290 1070316332 42 119.05
5000 30 1070316367 1070316411 44 113.64
10000 30 1070312885 1070313023 138 72.46
10000 30 1070313218 1070313361 143 69.93
10000 30 1070313483 1070313627 144 69.44
20000 30 1070306793 1070307316 523 38.24
20000 30 1070307576 1070308098 522 38.31
20000 30 1070308168 1070308683 515 38.83

While Figure 23 shows the Audit Time, Figure 24 shows the Auditing Speed of user registration events. Again, there is a decrease in the Auditing Speed, but now the asymptotic limit is more visible.

Compared to the Auditing Speed of entity availability events, the Auditing Speed of user registration events is much lower, which can be justified by the more complex violation criteria in the auditing of user registration events.


Figure 23. User Registration Audit Time for different number of logs and users.

Figure 24. User Registration Auditing Speed for different number of logs and users.


4.3.3 Service Authorization The following tables show the measurement results.

#Logs #Users Audit Start [sec] Audit End [sec] Audit Time [sec] Auditing Speed [logs/sec]

2000 10 1070323429 1070323436 7 285.71
5000 10 1070323069 1070323105 36 138.89
10000 10 1070322566 1070322688 122 81.97
20000 10 1070320978 1070321459 481 41.58
2000 20 1070323326 1070323334 8 250.00
5000 20 1070322949 1070322987 38 131.58
10000 20 1070322279 1070322421 142 70.42
20000 20 1070320358 1070320902 544 36.76
2000 30 1070323218 1070323225 7 285.71
5000 30 1070322821 1070322860 39 128.21
10000 30 1070321968 1070322099 131 76.34
20000 30 1070317837 1070318340 503 39.76

The measurement results are shown in the following figures.

Figure 25. Service Authorization Audit Time for different number of logs and users.


Figure 26. Service Authorization Auditing Speed for different number of logs and users.

The violation criteria in the auditing of service authorization events are similar to the criteria for user registration events. Therefore, the auditing speeds in both cases are nearly the same.

4.3.4 Conclusions The larger the amount of logs to be processed, the lower the Auditing Speed. This is an undesirable behavior, although an asymptotic limit of this deceleration seems to exist. In this regard, the implementation of the auditor must be improved, and database queries may need to be optimised. Since SLA violation detection deals heavily with the processing of time values, the time representation needs to be reconsidered: the current mySQL tables use a DATETIME field to represent time, and this is handled as a string in the C++ code within the auditor. A better solution is to store time values as integers.
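As a rough illustration of the suggested change (hypothetical code, not taken from the auditor implementation), storing and comparing epoch integers turns an interval check into a single subtraction instead of repeated string parsing; the timestamps and threshold below are example values only:

import time

# Sketch only (not the auditor code): the current tables store DATETIME strings
# such as "2003-12-01 14:32:07"; converting them once to epoch integers makes an
# interval check a single subtraction. Timestamps and threshold are example values.
def datetime_to_epoch(value):
    return int(time.mktime(time.strptime(value, "%Y-%m-%d %H:%M:%S")))

REPORT_INTERVAL = 5 * 60   # availability logs are generated roughly every five minutes

t1 = datetime_to_epoch("2003-12-01 14:32:07")
t2 = datetime_to_epoch("2003-12-01 14:39:30")
print((t2 - t1) > REPORT_INTERVAL)   # True: the gap exceeds the reporting interval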

4.4 AAA Scalability tests

4.4.1 AAA Scalability tests at FhG FOKUS The results obtained in the FOKUS FhG test bed are as follows: For wlan:

- rtt_wlan = 4.932 ms
- t_reg_wlan = 140 ms

For Ethernet:

- rtt_eth = 0.946 ms
- t_reg_eth = 80 ms

In the Madrid testbed the results were (for WLAN):

- t_reg_wlan = 195 ms (delay between messages 3 and 9 in Figure 27)


Figure 27 AAA registration time (URP in MN)

The data obtained in Stuttgart for the same kind of measurements: For wlan:

- rtt_wlan = 2.870 ms
- t_reg_wlan = ms

For eth when registering to the local AAA server.

- rtt_eth = 0.502 ms
- t_reg_eth = ms

For tdcdma:

- rtt_tdcdma = 133 ms
- t_reg_tdcdma = 881.25 ms

For eth when registering a roaming user to the Madrid AAA server

- rtt_eth = 210.235 ms
- t_reg_eth = 316 ms

Registering a mobile with 0 seconds AAA session lifetime should give an idea about the latency of the whole process when the attendants are under heavy (maximum) load. The following results were obtained in the FOKUS FhG testbed:

4.50 < max_no_registrations / sec < 5.00

That corresponds to roughly one registration every 200 ms.

4.4.1.1 Scalability tests done in Madrid and Stuttgart

One attendant, several MNs registering at this attendant. All users register with session lifetime = 0, thus forcing a new registration as soon as the previous one finishes.

Nº of MNs Time to register for the MN whose user is [email protected]

1 29 reg in 20,29 s = 0,6 s/reg
3 20 reg in 28,16 s = 1,4 s/reg

Several attendants, one MN registering at each attendant. All users register with session lifetime = 0, thus forcing a new registration as soon as the previous one finishes.

Nº of attendants Time to register for the MN whose user is [email protected]

1 16 reg in 8,4 s = 0,525 s/reg (this test is exactly the same as the first one in the previous table)
2 50 reg in 26,5 s = 0,53 s/reg
3 25 reg in 13 s = 0,52 s/reg


4.4.2 Analysis of results Several other tests (see next paragraph) led to the following conclusions:

- Most of the time is spent at the MN and the AR computing the DH keys necessary for a DH key exchange.
- Some CPU cycles are also spent at AAA_h for computing the HMAC of the packet.
- The AAA routing does not introduce much extra overhead (compared to the overhead resulting from the registration processing delays at the attendant and the AAA server).

The time taken by the AAA.h to process the registration request was measured in D0503. We measured it again with 10 users in Madrid's database (mn_config); it is about 1 ms, so it is negligible with respect to the total registration time. In Figure 28 it is the time elapsed between messages 24 and 26. Messages 29 and 30 are the initiation of the corresponding accounting session.

Figure 28 Registration processing time in AAA.h and accounting session start

4.5 Charging performance

4.5.1 Results Presentation of Results:

- Typically one would use a two-dimensional graphical representation: e.g. fix m to a certain value, vary n (x-axis) and calculate the charging time (y-axis).

For the first test, the x-axis is the product of the two variables that affect the charging time: the number of sessions and the number of rows. The results are (times in ms):

mysql> select * from stat;
+----+-------------+----------+--------+------------+------------+--------+
| id | consistency | charging | moving | processing | nrsessions | nrrows |
+----+-------------+----------+--------+------------+------------+--------+
|  4 |         913 |    13618 |  29660 |          1 |         53 |   7224 |
|  3 |          12 |       85 |    137 |          0 |          2 |     17 |
|  5 |           8 |       88 |    166 |          0 |          1 |     50 |
|  6 |          40 |     1258 |   1933 |          0 |          7 |    486 |
|  7 |          27 |      574 |   1447 |          0 |          3 |    188 |
|  8 |          30 |     1683 |   1574 |          1 |          5 |    352 |
+----+-------------+----------+--------+------------+------------+--------+
6 rows in set (0.00 sec)

We represent ids 5, 6, 7 and 8:


[Figure: "Charging processes time" – time in ms (y-axis) as a function of nº sessions * nº rows (x-axis), for the consistency*10, charging and moving processes.]

Figure 29. Charging performance graph

4.5.2 Analysis of results Charging is a resource-consuming process. It should run on a machine separate from the A4C.h server, although the A4C.h entity itself would remain the same: the idea is to put the AAA.h on one machine and the charging process on another. The two communicate because they access the same database. MySQL is currently used, and MySQL allows transparent access to remote databases, so this scenario could very easily be set up.
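A minimal sketch of the suggested split, with hypothetical host name and credentials and a query against the stat table shown above: both the A4C.h process and the charging process would simply open the same MySQL database over the network:

import MySQLdb   # MySQL-python client; any MySQL client library would do

# Sketch only: the charging process runs on a machine separate from the A4C.h
# server but reads the same accounting database remotely. Host name, credentials
# and database name are placeholders; the stat table is the one shown above.
connection = MySQLdb.connect(host="a4c-home.example.org",
                             user="charging",
                             passwd="secret",
                             db="a4c")
cursor = connection.cursor()
cursor.execute("SELECT id, consistency, charging, moving, nrsessions, nrrows FROM stat")
for row in cursor.fetchall():
    print(row)
connection.close()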

4.6 DSCP Marking Software (DMS)

4.6.1 Filter loading The result read from /var/log/message is the following:

Jun 12 11:24:30 larva10 kernel: dscp: open
Jun 12 11:24:30 larva10 kernel: dscp: write
Jun 12 11:24:30 larva10 kernel: +Filter 1 erased
Jun 12 11:24:30 larva10 kernel: --New filter description: TCPFilter
Jun 12 11:24:30 larva10 kernel: --New field list
Jun 12 11:24:30 larva10 kernel: ...Field name: TCPDestPort
Jun 12 11:24:30 larva10 kernel: ...Value: 80
Jun 12 11:24:30 larva10 kernel: --New field
Jun 12 11:24:30 larva10 kernel: ...Field name: TCPCtrlSyn
Jun 12 11:24:30 larva10 kernel: ...Value: 1
Jun 12 11:24:30 larva10 kernel: --New filter DSCP: 0xC
Jun 12 11:24:30 larva10 kernel: +1 Filters loaded, update done in 40 usec
Jun 12 11:24:30 larva10 kernel: dscp: release

The time needed to perform the entire filter loading process, i.e. interpreting and checking the data read from the /dev/dscp character device, removing the old filter, and creating and recording the new one, is only 40 microseconds. Repeating this kind of test shows that the time needed to perform this operation is stable: the result is always in the range of 36 to 40 microseconds on a MT. Note that Filter 1 was already defined before the test, so the old Filter 1 was erased first and then a new filter was created to replace it. This is the worst-case scenario in terms of the time needed to load a filter; loading a filter into an empty slot of the kernel filters table takes, of course, less time.

4.6.2 DSCP dumping from AAA The result read on the /var/log/message is the following:

Jun 12 11:24:29 larva10 kernel: dscp: open
Jun 12 11:24:29 larva10 kernel: dscp: write
Jun 12 11:24:29 larva10 kernel: Service 0 = Name: SIG - DSCP: 0x C
Jun 12 11:24:29 larva10 kernel: +1 Filters loaded, update done in 14 usec


Jun 12 11:24:29 larva10 kernel: dscp: write
Jun 12 11:24:29 larva10 kernel: Service 1 = Name: S1 - DSCP: 0x C
Jun 12 11:24:29 larva10 kernel: +1 Filters loaded, update done in 7 usec
Jun 12 11:24:29 larva10 kernel: dscp: write
Jun 12 11:24:29 larva10 kernel: Service 2 = Name: S2 - DSCP: 0x C
Jun 12 11:24:29 larva10 kernel: +1 Filters loaded, update done in 6 usec
Jun 12 11:24:29 larva10 kernel: dscp: write
Jun 12 11:24:29 larva10 kernel: Service 3 = Name: S3 - DSCP: 0x C
Jun 12 11:24:29 larva10 kernel: +1 Filters loaded, update done in 6 usec
Jun 12 11:24:29 larva10 kernel: dscp: write
Jun 12 11:24:29 larva10 kernel: Service 4 = Name: S4 - DSCP: 0x 0
Jun 12 11:24:29 larva10 kernel: +1 Filters loaded, update done in 5 usec
Jun 12 11:24:29 larva10 kernel: dscp: write
Jun 12 11:24:29 larva10 kernel: Service 5 = Name: S5 - DSCP: 0x C
Jun 12 11:24:29 larva10 kernel: +1 Filters loaded, update done in 4 usec
Jun 12 11:24:30 larva10 kernel: dscp: write
Jun 12 11:24:30 larva10 kernel: Service 6 = Name: S6 - DSCP: 0x C
Jun 12 11:24:30 larva10 kernel: +1 Filters loaded, update done in 3 usec
Jun 12 11:24:30 larva10 kernel: dscp: release

The result is always in the range of 3 to 7 microseconds, except for the SIG marking type, which is treated as a particular case in the DMS. However, because the time is expressed in microseconds, we can consider this difference negligible.

4.6.3 DMS and IPsec

Figure 30. DMS and IPsec detail

As we can see in the figure, the outer packet is correctly marked, and its DSCP is the same as the inner packet's DSCP (0x10).


4.7 QoS entities communication delays

4.7.1 Results QoSB-QoSM

Message Response time (ms)
Client-Open 0,08
Configuration Request 54
Access Request Denial 0,7
Accept Core Request 1,5
Accept Access Network Request 4
Keep-Alive 0,14
FHO 18

Table 7 - Response times of the QoS Broker to QoSManager

QoSB-A4C.f server

Message Response time (ms)
Client-Open 0,08
QoS Profile Definitions 384
NVUP Authorization 4

Table 8 - Response times of the QoSB to the A4C.f server

QoSB-RG

Message Response time (ms)
Message to RG 1
Response to AR 2

4.7.2 Analysis of results QoSB-QoSM From these results several conclusions can be drawn.

a) The time needed by the QoSBroker to accept a new client is extremely short (81 µs), as expected, since this is a very simple operation and only simple decision rules are currently in place.
b) The time needed to respond to a Configuration Request is the highest time measured. This is related to the need to check a database describing the nature and needs of the AR. However, this does not constitute a problem since it is an uncommon task.
c) Another interesting comparison is between Accepted Requests and Denied ones, the latter being much faster. This is explained by the fact that Denials do not need to undergo the reservation processing. The same goes for Core Requests, where there is no need to check User Profiles.
d) With the QoSBroker version used for these tests, the FHO response time is quite high (compared to the expected time for a full network response), since for these tests several changes were made to the QoSBroker code in order to facilitate its profiling. The QoS context installation time measured in section 4.8 must be added to this time to obtain the total time added by QoS to the FHO.

QoSB-A4C.f server Analysing the data in Table 8 we can conclude that the time taken by the QoSBroker to accept A4C server communications is of the same order of magnitude as the time required to accept a new QoSManager communication. The extra time needed by the QoSBroker to process QoS Profile Definitions is related to the need to process, interpret and store all that information, and is hard to improve from the implementation point of view. Nevertheless it is not a common message, and it corresponds to an action performed when no real communication is taking place (user registration). The time taken to respond to an NVUP Authorization is 4007 µs, which is quite acceptable.

QoSB-RG


The time needed to respond to the AR is longer than the time needed to respond to the RG. This is because the RG is contacted before all reservations are completed, and only then is the response sent to the AR.

4.8 QoS context installation time in the QoSM in the nAR

4.8.1 Results For 6 filters, one already installed. Display in the QoSM in the nAR:

Read from FHO-module 52 bytes
Received START_QOSB_LISTEN!!!
UNSOLICITED FHO DECISION RECEIVED O.K.
Installing FHO connection number 1 ...
CoA IPv6 address= 2001:720:410:1007:204:e2ff:fe2a:460c
Destination IPv6 address= 2001:720:410:1003:2c0:26ff:fea3:7fed
DSCP= 0x02
Installing FHO connection number 2 ...
CoA IPv6 address= 2001:720:410:1007:204:e2ff:fe2a:460c
Destination IPv6 address= 2001:200:0:8002:203:47ff:fea5:3085
DSCP= 0x02
Installing FHO connection number 3 ...
CoA IPv6 address= 2001:720:410:1007:204:e2ff:fe2a:460c
Destination IPv6 address= 3ffe:2620:6:1::1
DSCP= 0x02
Installing FHO connection number 4 ...
CoA IPv6 address= 2001:720:410:1007:204:e2ff:fe2a:460c
Destination IPv6 address= 2001:638:202:1:204:76ff:fe9b:ecaf
DSCP= 0x02
Installing FHO connection number 5 ...
CoA IPv6 address= 2001:720:410:1007:204:e2ff:fe2a:460c
Destination IPv6 address= 2001:720:410:1003:2c0:26ff:fea0:dd21
DSCP= 0x02
FHO: A->C, Next Connection is already installed, nothing to do...:
CoA IPv6 address= 2001:720:410:1007:204:e2ff:fe2a:460c
Destination IPv6 address= 2001:720:410:1003:2c0:26ff:fea0:dd21
DSCP= 0x22
sending QOS_FHO_ACK to the FHO-module (value QOS_AVAILABLE)

The measurement is shown in the next figure:

Figure 31. QoS context installation time

The time elapsed is about 1 ms: the time between messages no. 59 and no. 60, i.e. 14.139 s - 14.138 s.


For 2 filters, one already installed: we follow the same procedure as before and, using Ethereal, we find that the time elapsed is 0,4 ms. We can assume that the time elapsed is a linear function of the number of filters to install:

Time_elapsed = a * number_of_filters_to_install + b

Solving the equation system given by the two measured points (5 filters to install in about 1 ms, and 1 filter to install in about 0,4 ms):

Time_elapsed = 0.15 ms/filter * number_of_filters_to_install + 0.25 ms
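The coefficients follow directly from the two measured points; a small sketch of that computation (using the measured values of about 1 ms for 5 new filters and 0,4 ms for 1 new filter):

# Sketch: derive the coefficients of the linear model from the two measurements,
# (5 filters to install -> about 1.0 ms) and (1 filter to install -> about 0.4 ms).
points = [(5, 1.0), (1, 0.4)]            # (filters_to_install, time_elapsed_ms)

(n1, t1), (n2, t2) = points
a = (t1 - t2) / (n1 - n2)                # per-filter installation cost
b = t1 - a * n1                          # fixed processing overhead
print("a = %.2f ms/filter, b = %.2f ms" % (a, b))   # a = 0.15, b = 0.25

for n in range(1, 9):                    # predicted values, as plotted in Figure 32
    print("%d filters -> %.2f ms" % (n, a * n + b))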

[Figure: time elapsed (ms) to install the QoS context in the nAR as a function of the number of filters to install; the plotted numbers (0,4 and 1) indicate measured values, the other values are inferred.]

Figure 32. Time elapsed to install the QoS context in the nAR

4.8.2 Analysis of results It takes 0.15 ms to install a filter. The 0.25 ms seems to be the overhead of processing the DEC message, checking whether some of the transferred filters are already installed, etc. The QoS context transfer time measured in section 4.7 must be added to this time to obtain the total time added by QoS to the FHO.

4.9 FHO

4.9.1 Testbed in Madrid

4.9.1.1 Results

Table 9. Summary of measures for FHO (in Madrid)

QoS AAA Orig Dest Measures
N D R D R Eth WL WC Eth WL WC OL DL DQ
1 X X X X 16ms 0 14ms
2 X X X X 20ms 0 14ms
3 X X X X 18ms 67 14ms
4 X X X X 18ms 0 -

Notes: FHO with dummy modules was performed in [5] and the results are taken from there. See section 3.9.2 for details of the measures. Ping done every 40 ms.

Abbreviations:
N Number of test
D 'dummy' module
R 'real' module


Orig Technology from where the MN starts
Dest Technology where the MN arrives
Eth Ethernet
WL Wireless LAN (802.11)
WC WCDMA
OL Overall latency
DL Data loss
DQ Delay of QoS manager - QoS broker communication
DA Delay due to communication with the AAA attendant
DI Delay caused by IPSec tunnelling

DQ: The time to install the filters in the nAR must be added to this value to know the total delay due to QoS. See section 4.8.1 for more details.

Data loss:

[root@viudanegra root]# ping6 larva10.ipv6.it.uc3m.es -i 0.04
PING larva10.ipv6.it.uc3m.es(larva10.ipv6.it.uc3m.es) 56 data bytes
64 bytes from larva10.ipv6.it.uc3m.es: icmp_seq=0 hops=62 time=2.693 msec
64 bytes from larva10.ipv6.it.uc3m.es: icmp_seq=1 hops=62 time=3.348 msec
***********ok***********
64 bytes from larva10.ipv6.it.uc3m.es: icmp_seq=152 hops=62 time=2.647 msec
64 bytes from larva10.ipv6.it.uc3m.es: icmp_seq=153 hops=62 time=3.118 msec
64 bytes from larva10.ipv6.it.uc3m.es: icmp_seq=221 hops=62 time=726 usec
64 bytes from larva10.ipv6.it.uc3m.es: icmp_seq=222 hops=62 time=588 usec
***********ok********
64 bytes from larva10.ipv6.it.uc3m.es: icmp_seq=242 hops=62 time=620 usec
64 bytes from larva10.ipv6.it.uc3m.es: icmp_seq=243 hops=62 time=613 usec
--- larva10.ipv6.it.uc3m.es ping statistics ---
244 packets transmitted, 177 packets received, 27% packet loss
round-trip min/avg/max/mdev = 0.553/5.305/64.011/10.330 ms
[root@viudanegra root]#
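The data-loss counts can be derived from such traces by looking for gaps in the icmp_seq numbers; a minimal sketch of this kind of post-processing (illustrative only, not the project's tooling):

import re
import sys

# Sketch (illustrative only): count lost echo replies in a ping6 trace such as
# the one above by looking for gaps in the icmp_seq values.
text = sys.stdin.read()
sequences = [int(m.group(1)) for m in re.finditer(r"icmp_seq=(\d+)", text)]
if sequences:
    expected = sequences[-1] - sequences[0] + 1      # packets sent in the observed window
    lost = expected - len(sequences)
    print("received %d of %d, lost %d (%.0f%% loss)"
          % (len(sequences), expected, lost, 100.0 * lost / expected))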

4.9.1.2 Analysis of results.

The results are in the same range as those in Stuttgart (section 4.9.2.1), and DQ is also similar to the results found in Aveiro (section 4.7.1). Tests with the IPv6-in-IPv4 tunnel and added delays between the CN and the MN showed that the bicasting process worked properly, so this added delay did not increase handover latencies, as expected. We should note that we also performed some tests with only Mobile IPv6 (no fast handover), and in this case latencies did increase when delays were added. Regarding the tests presented above, a capture taken on a MN while performing an inter-technology handover from WLAN to Ethernet is shown in the next figure.


Figure 33. WLAN to ETH FHO capture

The FHO time is about the same as in the other FHO types, but many packets are lost. The FHO process is visible in packets 9, 10, 11 and 14; it lasts, as stated in the results, 18 ms. The network advertisement sent by the MN to the nAR is done just after the FHO process, to announce to the nAR the MAC address of the MN. In the capture presented above, the BU to the HA (packet 25) and to the CN (packet 29) are sent about 2.5 seconds after the FHO process. In order to analyse this problem, which only occurs in this particular WLAN to ETH handover, more detailed tests and evaluation are required. However, the BU delay is not the reason for the packet loss, due to the established bicasting. A packet loss of 67 packets, as shown above, at the deployed packet rate of 40 ms would result in an interruption time of 2680 ms, which is far beyond the presented results. Due to the complexity of the overall system, more time for tests and evaluation is required to analyse the problem. Extensive tests on all modules separately have been successfully performed; however, the evaluation of the integrated overall system requires more time. Since the correct functioning of the architecture has been shown, the benefit of identifying the bug in this particular setting is not considered to add value to the proof of concept.

4.9.2 Testbed in Stuttgart For the tests in Stuttgart we do the QoS negotiation before the measurements by starting a ping from MN or from CN after the registration. The description of the tests performed is the following:

- MN (ksat67) registers into the corresponding network technology
- We start monitoring with Ethereal
- Start ping6
- MN performs FHO

4.9.2.1 Results We have done two different kinds of tests. Table 10 summarizes the results obtained when the MN performs FHO while it pings the CN. Table 11 summarizes the results obtained when the MN performs FHO while it is being pinged by the CN. We send echo requests every 50 ms, except when one of the two access technologies involved is WCDMA; in those cases we send echo requests every 300 ms.


All tests are done with real modules.

ORIG DEST MEASUREMENTS
N Eth WL WC Eth WL WC OL (ms) DL (pkt) DQ (ms) Ping interval (ms)
1 X X 24,26 0 22,64 50
2 X X 18,59 0 14,48 50
3 X X 17,87 0 14,48 50
4 X X 24,766 0 18,93 300
5 X X 301,84 1 16 300
6 X X 25,979 0 22,52 300
7 X X 260,427 37 12,305 300

Table 10: FHO results pinging from MN to CN

ORIG DEST MEASUREMENTS
N Eth WL WC Eth WL WC OL (ms) DL (pkt) DQ (ms) Ping interval (ms)
1 X X 24,26 0 22,64 50
2 X X 17,267 42 13,259 50
3 X X 17,87 0 14,48 50
4 X X 23,707 1 18,305 300
5 X X 260,482 36 20,554 300
6 X X 21,675 0 18,257 300
7 X X 260,456 32 18,357 300

Table 11: FHO results pinging from CN to MN

Abbreviations:
N Number of test
Orig Technology from where the MN starts
Dest Technology where the MN arrives
Eth Ethernet
WL Wireless LAN (802.11)
WC WCDMA
OL Overall latency
DL Data loss
DQ Delay of QoS manager - QoS broker communication

DQ: The time to install the filters in the nAR must be added to this value to know the total delay due to QoS. See section 4.8.1 for more details

4.9.2.2 Analysis of results In Table 10 we see that the FHO from WCDMA to any other technology takes longer than in the other cases, and also that some packets are lost (tests 5 and 7).

In Table 11 we can again see that the FHO from WCDMA to any other technology needs more time, but there are also some additional cases in which packets are lost.

In the handover from WLAN to Ethernet using a ping interval of 50 ms, 42 packets are lost. That means there is a break of around 2.1 seconds in the handover process.

We can now analyse some Ethereal captures for the FHO from WLAN to Ethernet case.


Figure 34: View from AR (1)

Figure 35: View from AR (2)

Figure 34 and Figure 35 show Ethereal captures from the AR. Packet #304 is the last FHO message. Packet #487 contains the BU sent to the CN. Packet #491 is the BU sent to the HA. We can see that there is a big delay between the time when the MN receives the last FHO message and the time when it sends the BUs to the HA and the CN. This delay is around 2 seconds, and during this time the ping packets are lost. The problem is the same as in section 4.9.1.2.


4.10 Paging

4.10.1 Testbed in Madrid

4.10.1.1 Test results

   Orig        Dest        Ov. Meas.              Ind. Meas.
N  Eth WL WC   Eth WL WC   OL D                   OL A     DP       DM

1  X X   560* ms - 1500** ms   280 ms   280 ms

Table 12: Results for paging tests

User-data traffic source: ping from the CN every 40 ms.

Abbreviations:
N      Number of test
Orig   Technology where the MN enters dormant mode
Dest   Technology where the MN exits dormant mode
Eth    Ethernet
WL     Wireless LAN (802.11)
WC     WCDMA
OL D   Overall latency for a ping while the MN is in dormant mode (echo reply arriving * in a signalling packet, ** in a data packet)
OL A   Overall latency for a ping (the MN is always active)
DP     Delay between arrival of the initial packet at the PA and forwarding of the first buffered packet
DM     Delay between the MN receiving the paging request and sending the deregistration (dormant request with lifetime 0)

Packets lost:
[root@viudanegra root]# ping6 larva9.ipv6.it.uc3m.es -i 0.04
PING 2001:720:410:1003:204:75ff:fe7b:921f(2001:720:410:1003:204:75ff:fe7b:921f) from 2001:720:410:1002::81 : 56 data bytes
64 bytes from 2001:720:410:1003:204:75ff:fe7b:921f: icmp_seq=14 hops=62 time=3.210 msec
64 bytes from 2001:720:410:1003:204:75ff:fe7b:921f: icmp_seq=37 hops=62 time=3.639 msec
64 bytes from 2001:720:410:1003:204:75ff:fe7b:921f: icmp_seq=38 hops=62 time=3.120 msec
**********ok*********
64 bytes from 2001:720:410:1003:204:75ff:fe7b:921f: icmp_seq=68 hops=62 time=3.145 msec
64 bytes from 2001:720:410:1003:204:75ff:fe7b:921f: icmp_seq=69 hops=62 time=3.127 msec
--- 2001:720:410:1003:204:75ff:fe7b:921f ping statistics ---
70 packets transmitted, 34 packets received, 51% packet loss
round-trip min/avg/max/mdev = 2.928/10.278/77.267/18.459 ms
[root@viudanegra root]#

Direct delays:
HA -> PA    PA -> MN    MN -> HA    MN -> CN
0,13 ms     2,7 ms      2,7 ms      3 ms

4.10.1.2 Analysis of test results

The loss of the first packets (up to sequence number 37) is expected, since re-activating a mobile terminal from dormant mode implies registering with the network. Registration with the network (authentication, authorization and location updating) has been synchronized with the paging implementation to avoid packet loss caused by forwarding packets before the terminal is properly registered. Furthermore, to allow a mobile terminal to send packets towards the network infrastructure, the associated Access Router has to negotiate the QoS for the sending mobile terminal with the domain's QoS Broker. Until this negotiation is completed, packets sent by the MN are dropped by the QoSM in the AR, except signalling packets. This QoS negotiation process is analyzed in D0503. Packet 14 is not lost because it contains the BU and is thus a signalling packet. So paging performs as expected, but a detailed analysis of the captures is very interesting and worth describing. View from aranya (PA and A4C server), some packets not displayed for ease of understanding:


Figure 36. Paging test (in Madrid) capture (view from PA and AAA server)

The first echo redirected from escarabajo (the HA) is packet #3. The PA starts paging the paging area where the MN is located (packets 4, 5, 6). The answer from the AR the MN is attached to is received in packet 33. Now the PA sends the buffered packets to the MN (packets 34, 35, 36, 37, 38). Before packet 33, the MN has performed the AAA registration: packets 24, 25 (NVUP dumped from the A4C server to the QoSB in packet 26), 29 and 30. Packets 39, 40 and 41 are auditing packets. View from the MN (some packets not displayed for ease of understanding):


Figure 37. Paging test (in Madrid) capture (view from MN)

The MN receives paging messages from its AR (messages 1 and 2). Then it initiates the AAA registration process (messages 3, 5, 8 and 9). Once the AAA registration process is done, the MN does the BU to the HA (messages 10 and 14). Message 10 can go through the QoSM in the AR without the need for QoS negotiation because it is a signalling message. Once the two steps above are completed, the MN sends a paging message to the AR (message 16). Then the PA sends the MN the buffered packets (17, 18) and the MN answers. However, the echo replies are blocked by the QoSManager in the current MN Access Router (AR) and do not reach the CN until the QoS negotiation process completes in the AR. The exception is echo reply No. 22, which carries the BU to the CN. This packet is treated as signalling and can thus go through the QoSM. It is marked as signalling by the DiffServ marking software (DSCP=0x22, shown by Ethereal as 0x88 because Ethereal displays the whole Traffic Class octet: 0x22 << 2 = 0x88). Packet 27 is the first non-signalling packet to reach the CN (because the QoS negotiation process has already concluded).
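The rule described above (the QoSM in the AR dropping the MN's packets until the QoS negotiation completes, while letting signalling packets pass) can be summarised in a small decision function. This is an illustrative sketch only, not the actual QoSM code; the signalling DSCP 0x22 is the value seen in the capture above:

# Illustrative sketch of the per-packet rule applied by the QoSM in the AR for an MN
# whose QoS negotiation has not yet completed (not the real QoSM code).

SIGNALLING_DSCP = 0x22   # signalling marking seen in the capture (Traffic Class 0x88)

def qosm_forwards(dscp, qos_negotiated):
    if qos_negotiated:
        return True                    # filters installed: conforming traffic passes
    return dscp == SIGNALLING_DSCP     # before negotiation only signalling goes through

print(qosm_forwards(0x22, qos_negotiated=False))   # True  (echo reply carrying the BU)
print(qosm_forwards(0x00, qos_negotiated=False))   # False (plain echo reply is dropped)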


DP components (DP = 280 ms, see Table 12):
AAA registration: 200 ms (see section 4.4 for an analysis of this time)

View from AR (capturing on the core interface):

Figure 38. Paging test (in Madrid) capture (view from AR core interface)

Messages 3, 4, 5, 6 and 7 are paging messages seeking the dormant MN. Messages 8, 9, 10 and 11 are AAA registration messages. Messages 12 and 13 are the BU to the HA. Message 12 was not blocked by the QoSM because it is signalling. Message 14 is the message to the PA indicating that the MN is awake and registered. When the PA receives message 14 it starts sending the buffered packets to the MN (messages 15, 16, ...); those messages are received by the MN, but the answers are blocked until the QoS negotiation takes place. Message 18 was not blocked by the QoSM because it is signalling (it contains the BU to the CN). Messages 23, 26 and 27 are QoS negotiation messages. The time taken here is exceptionally high, the normal value being 4 ms as stated in section 4.7.1. The filters are installed in the QoSM and then the RPT message is sent. The QoS negotiation process has finished and now the messages from the MN can go through: messages 29, 31, ... Messages 39, 40, 20 and 21 are auditing messages. View from escarabajo (HA and QoSB):


Figure 39. Paging test (in Madrid) capture (view from HA and QoSB)

Message 3 is the first echo received. The HA redirects it to the PA (packet 4). Packet 7 is the NVUP from the A4C server to the QoSB. Packets 10 and 11 are the BUs. Packets 15, 16 and 17 are QoS negotiation messages. The time taken here is exceptionally high, the normal value being 4 ms as stated in section 4.7.1. View from CN:

Figure 40. Paging test (in Madrid) capture (view from CN)


The first echo is message 4. No echo reply is received until the paging, the AAA registration, the BU to the HA and the QoS procedures are completed by the MN and the QoSM in the AR. Message 13 is an echo reply with a BU to the CN; in that way it is handled as signalling and can go through the QoSM in the AR before the QoS negotiation process is done. The first non-signalling packet to reach the CN is packet No. 22.

4.10.2 Testbed in Stuttgart

4.10.2.1 Test results

   Orig        Dest        Ov. Meas.                Ind. Meas.
N  Eth WL WC   Eth WL WC   OL (ms)                  DP (ms)    DM (ms)

1  x x   669,639* / 5712,304**   1857,708   1730,317
2  x x   0,670* / 342,359**      341,403    339,234
3  x x   122,210* / 2031,277**   1271,524   1136,617
4  x x
5  x x

Table 13: Results for paging tests

Abbreviations:
N      Number of test
Orig   Technology where the MN enters dormant mode
Dest   Technology where the MN exits dormant mode
Eth    Ethernet
WL     Wireless LAN (802.11)
WC     WCDMA
OL D   Overall latency for a ping while the MN is in dormant mode (echo reply arriving * in a signalling packet, ** in a data packet)
DP     Delay between arrival of the initial packet at the PA and forwarding of the first buffered packet
DM     Delay between the MN receiving the paging request and sending the deregistration (dormant request with lifetime 0)

Ping from the CN is done every 300 ms.

4.10.2.2 Analysis of test results

For the tests in Stuttgart, as already mentioned, QoS is negotiated before doing the measurements. As a consequence, all the packets buffered in the Paging Agent go through the QoSM, and in some cases this leads to a reordering of the ping replies arriving at the CN. We can analyse this behaviour in a detailed view of the captures.

Test #1: MN active in ETH, goes dormant and wakes up in WCDMA

The output of ping with interval 300 ms at the CN is the following:

[root@ksat46 root]# ping6 2001:638:202:11:20b:dbff:fe14:317f -i 0.3
PING 2001:638:202:11:20b:dbff:fe14:317f(2001:638:202:11:20b:dbff:fe14:317f) from 2001:638:202:11:204:76ff:fe13:b146 : 56 data bytes
64 bytes from 2001:638:202:11:20b:dbff:fe14:317f: icmp_seq=5 ttl=63 time=669 ms
64 bytes from 2001:638:202:11:20b:dbff:fe14:317f: icmp_seq=6 ttl=63 time=955 ms
64 bytes from 2001:638:202:11:20b:dbff:fe14:317f: icmp_seq=1 ttl=63 time=2712 ms
64 bytes from 2001:638:202:11:20b:dbff:fe14:317f: icmp_seq=7 ttl=63 time=884 ms
64 bytes from 2001:638:202:11:20b:dbff:fe14:317f: icmp_seq=2 ttl=63 time=2455 ms
64 bytes from 2001:638:202:11:20b:dbff:fe14:317f: icmp_seq=3 ttl=63 time=2204 ms
64 bytes from 2001:638:202:11:20b:dbff:fe14:317f: icmp_seq=4 ttl=63 time=2012 ms
64 bytes from 2001:638:202:11:20b:dbff:fe14:317f: icmp_seq=8 ttl=63 time=793 ms
64 bytes from 2001:638:202:11:20b:dbff:fe14:317f: icmp_seq=9 ttl=63 time=524 ms
64 bytes from 2001:638:202:11:20b:dbff:fe14:317f: icmp_seq=10 ttl=63 time=260 ms
****OK****
64 bytes from 2001:638:202:11:20b:dbff:fe14:317f: icmp_seq=50 ttl=63 time=136 ms
64 bytes from 2001:638:202:11:20b:dbff:fe14:317f: icmp_seq=51 ttl=63 time=128 ms
--- 2001:638:202:11:20b:dbff:fe14:317f ping statistics ---
51 packets transmitted, 51 received, 0% loss, time 15468ms
rtt min/avg/max/mdev = 128.658/373.035/2712.362/611.467 ms, pipe 9

All the ping replies arrive, but they are reordered. The first echo reply has icmp_seq number 5, and it carries the BU from the MN. In Figure 41 we can see the capture from the CN:


Figure 41: View from CN

Packet #2 is the echo request with ICMP_SEQ = 1.
Packet #7 is the echo request with ICMP_SEQ = 5.
Packet #8 is the echo request with ICMP_SEQ = 6.
Packet #10 is the echo reply with ICMP_SEQ = 5, which includes the BU.
Packet #13 is the echo reply with ICMP_SEQ = 6.
Packet #14 is the echo reply with ICMP_SEQ = 1.

From this capture, we see that echo requests 1 to 4 were buffered at the PA. Echo reply 5 contains the BU from the MN and is marked as signalling. Echo request 6 and the following ones are not buffered at the PA. Figure 42 shows the capture at the PA:


Figure 42: View from PA

Packet #7 is the echo request with ICMP_SEQ = 1, and it is buffered at the PA. After the reception of the first ping request, the Paging Agent starts polling the paging area (packets #8, #9, #10 and #11, sent to the different ARs). In the meantime, the echo requests with ICMP_SEQ 2, 3 and 4 arrive at the PA (packets #12, #17 and #23) and are also buffered. The PA gets the paging answer from ksat73 (the WCDMA AR) in packet #28. At this point the PA forwards the 4 buffered packets (packets #29, #30, #31 and #32), which arrive at the MN out of order. In the MN we can see the following sequence (Figure 43):


Figure 43: View from MN

Packet #59 is the paging answer from the MN to the WCDMA AR.
Packet #60 is the echo request with ICMP_SEQ = 5.
Packet #62 is the echo request with ICMP_SEQ = 6.
Packet #64 is the echo request with ICMP_SEQ = 1, which was buffered at the PA; that is why this request arrives out of order at the MN.
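A simple way to make this reordering visible without opening the captures is to parse the icmp_seq values straight out of the ping output. The helper below is a hypothetical sketch (not part of the test tooling) that flags every reply whose sequence number is lower than one already seen:

import re

# Hypothetical helper: detect out-of-order echo replies in a ping6 output log.
def out_of_order(ping_output):
    seqs = [int(s) for s in re.findall(r"icmp_seq=(\d+)", ping_output)]
    reordered, highest = [], -1
    for seq in seqs:
        if seq < highest:              # arrived after a higher sequence number
            reordered.append(seq)
        highest = max(highest, seq)
    return reordered

sample = """\
64 bytes from 2001:638:202:11:20b:dbff:fe14:317f: icmp_seq=5 ttl=63 time=669 ms
64 bytes from 2001:638:202:11:20b:dbff:fe14:317f: icmp_seq=6 ttl=63 time=955 ms
64 bytes from 2001:638:202:11:20b:dbff:fe14:317f: icmp_seq=1 ttl=63 time=2712 ms
64 bytes from 2001:638:202:11:20b:dbff:fe14:317f: icmp_seq=7 ttl=63 time=884 ms
64 bytes from 2001:638:202:11:20b:dbff:fe14:317f: icmp_seq=2 ttl=63 time=2455 ms
"""
print(out_of_order(sample))   # [1, 2] -> the replies buffered at the PA arrive late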

4.11 Inter-domain Handover

4.11.1 Results

Table 14. Summary of measures for Inter-domain handover

   QoS    AAA    Orig        Dest        Inter-D  Intra-D  Measure
N  D  R   D  R   Eth WL WC   Eth WL WC                     PL

1  X X X X X   0
2  X X X X X   3

Ping is done each second.

Abbreviations:
N        Number of test
D        'dummy' module
R        'real' module
Orig     Technology from where the MN starts
Dest     Technology where the MN arrives
Eth      Ethernet
WL       Wireless LAN (802.11)
WC       WCDMA
Inter-D  Inter-domain handover
Intra-D  Intra-domain handover (FHO)
PL       Packet loss

The results of ping from the MN to the CN are shown below.

- Intra-domain handover (FHO). Ping interval = 1 second.

[root@ksat54 root]# ping6 ksat46.ipv6.rus.uni-stuttgart.de


PING ksat46.ipv6.rus.uni-stuttgart.de(ksat46.ipv6.rus.uni-stuttgart.de) 56 data bytes
64 bytes from ksat46.ipv6.rus.uni-stuttgart.de: icmp_seq=0 hops=61 time=2.818 msec
64 bytes from ksat46.ipv6.rus.uni-stuttgart.de: icmp_seq=1 hops=61 time=2.741 msec
****OK****
64 bytes from ksat46.ipv6.rus.uni-stuttgart.de: icmp_seq=20 hops=61 time=2.500 msec
64 bytes from ksat46.ipv6.rus.uni-stuttgart.de: icmp_seq=21 hops=61 time=2.404 msec
--- ksat46.ipv6.rus.uni-stuttgart.de ping statistics ---
22 packets transmitted, 22 packets received, 0% packet loss
round-trip min/avg/max/mdev = 2.404/2.816/3.299/0.224 ms

- Inter-domain handover

[root@ksat54 root]# ping6 ksat46.ipv6.rus.uni-stuttgart.de
PING ksat46.ipv6.rus.uni-stuttgart.de(ksat46.ipv6.rus.uni-stuttgart.de) 56 data bytes
64 bytes from ksat46.ipv6.rus.uni-stuttgart.de: icmp_seq=0 hops=61 time=2.976 msec
64 bytes from ksat46.ipv6.rus.uni-stuttgart.de: icmp_seq=1 hops=61 time=2.528 msec
****OK****
64 bytes from ksat46.ipv6.rus.uni-stuttgart.de: icmp_seq=8 hops=61 time=2.823 msec
64 bytes from ksat46.ipv6.rus.uni-stuttgart.de: icmp_seq=9 hops=61 time=2.775 msec
64 bytes from ksat46.ipv6.rus.uni-stuttgart.de: icmp_seq=12 hops=61 time=252.822 msec
64 bytes from ksat46.ipv6.rus.uni-stuttgart.de: icmp_seq=13 hops=61 time=2.897 msec
****OK****
64 bytes from ksat46.ipv6.rus.uni-stuttgart.de: icmp_seq=20 hops=61 time=3.083 msec
--- ksat46.ipv6.rus.uni-stuttgart.de ping statistics ---
21 packets transmitted, 19 packets received, 9% packet loss
round-trip min/avg/max/mdev = 2.449/16.579/252.822/55.708 ms

4.11.2 Analysis of results

The test performed as expected. When the MN enters a new domain, it must perform a re-registration with its AAA home server and its home agent; also, there is no QoS context transfer. Before the new Care-of Address is valid and the QoS negotiation takes place, some packets are lost. For the Fast Handover (see also section 4.9), no packets are lost.

5. User tests

5.1 VoIP

5.1.1 Description of the test

In this case, users are able to perform calls between themselves, evaluating the quality of the service (sound quality, delays, and interruptions). It is also possible to measure the degradation of service, if any, introduced by handovers, and, of course, to distinguish between classes of users.

People / site       2~2
Software            RAT
Machines            1 server, 1 MN for each client
Time per session    10~30 min
User evaluation     Ease of usage, voice quality
Expert evaluation   RTCP flow description: delay, jitter, packet loss
Concrete tests      Test 1: All users with the same high quality (EF), no other traffic exists. Calls between Madrid and Stuttgart. FHO in Madrid.
                    Test 2: All users with the same (lowest) quality, no other traffic exists. Local calls in Madrid. FHOs.

5.1.2 Realization

Test beds: Madrid and Stuttgart.


The following modules are involved:
- AAAAC (with no auditing)
- Metering
- QoS system (DiffServ, QoSM, QoSB)
- FHO system
- Mobile IPv6 in Madrid and Stuttgart
- MN in Madrid and MN in Stuttgart

User profiles: both high quality users have the EF service in their profile (mn_config file in AAAAC.h). Both low quality users have the EF substituted by S4 (BE). The S4 parameters have been changed for this test: the BW was changed from 32 kbps to 16 kbps. To do that we change the qos_config file in the Madrid AAAAC server (there is no need to change it in Stuttgart since this test bed is not involved in the second test). This file is transferred from the AAAAC server to the QoSB, and this configures the QoSM in the ARs.

qos_config file employed (a small parsing sketch is given after the test sequence below):
#DstAd  BW   BurstSize  Priority  DSCP
::0     32   11514      0         0
::0     64   11514      0         2
::0     256  11514      0         4
::0     32   11514      1         2e
::0     1    11514      2         22
::0     256  11514      3         12
::0     64   11514      4         0a
::0     128  11514      4         0c
::0     256  11514      4         0e

Both MNs have EF as the class of service for RAT (universal MD DiffServ marking rules). The EF service in both domains has the same characteristics (MD defined). There is no roaming; both users are in their home domains. The DiffServ marking software in both MNs marks RAT traffic with the EF service. WLAN-WLAN FHO in Madrid, no FHO in Stuttgart.

RAT parameters: GSM codec, 13.2 kbps.
Frames  Cod. delay  Eff.  BW
1       20 ms       22%   60 kbps
2       40 ms       35%   38 kbps
4       80 ms       52%   25 kbps   <-- Best choice? Chosen
8       160 ms      68%   19 kbps

Test sequence:

First test:
- Both high priority users AAA register (including NVUP transfer) + MIPv6 register using NCP. One is in Madrid, the other in Stuttgart.
- The phone call is started (using RAT).
- FHOs of the MN in Madrid.
- Both users deregister.

Second test:
- Both low priority users AAA register (including NVUP transfer) + MIPv6 register using NCP. Both are in Madrid.
- The phone call is started (using RAT).
- FHOs of the MNs.
- Both users deregister.
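As referenced above, the qos_config format (#DstAd, BW, BurstSize, Priority, DSCP) can be loaded into a lookup table keyed by DSCP with a few lines of code. This is a minimal sketch assuming whitespace-separated columns exactly as in the listing above; it is a hypothetical helper, not part of the AAAAC/QoSB software:

# Minimal sketch: parse the qos_config format shown above into a dict keyed by DSCP.
# Hypothetical helper (not part of the Moby Dick software); assumes whitespace-separated columns.

def parse_qos_config(text):
    table = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue                      # skip the header line and blanks
        dst, bw, burst, prio, dscp = line.split()
        table[int(dscp, 16)] = {"dst": dst, "bw_kbps": int(bw),
                                "burst": int(burst), "priority": int(prio)}
    return table

sample = """\
#DstAd BW BurstSize Priority DSCP
::0 32 11514 1 2e
::0 256 11514 3 12
"""
print(parse_qos_config(sample)[0x2e])
# {'dst': '::0', 'bw_kbps': 32, 'burst': 11514, 'priority': 1}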


5.1.3 Results

As expected, the perceived quality was very good in both directions. The RTP statistics also showed a very good performance (0% packet loss and a delay of about 180 ms). However, a tiny but appreciable delay was introduced. For the second test, the quality was poor, but the conversation could still take place. The RTP statistics showed a medium performance (20% packet loss and a delay of about 180 ms). During the tests the platform stability was very good, without many problems. Conversations were very long without any quality degradation. Six FHOs took place and the users did not notice any quality degradation, except in the 6th FHO, where the user in Madrid stopped hearing the user in Stuttgart for about 5 seconds.

5.1.4 Expert evaluation: Characterization of RAT traffic pattern

A GSM codec was used in the tests with RAT, which generates about 13.2 kbps of UDP payload (33 bytes every 20 ms). Nevertheless, a rate of 32 kbps was not enough, as shown in Test 1: in this test 48% of the packets were lost, so it seemed that a bandwidth of about 64 kbps is required. In order to measure the traffic pattern, the NISTNET emulator was used in two border routers of the Madrid trial site. NISTNET allows us to emulate some network characteristics (delay, packet loss, etc.), but it works only with IPv4, so an IPv6-in-IPv4 tunnel was created between these two routers. This tunnel does not affect the test performance, with the exception of a small added delay due to the IPv6-in-IPv4 tunnelling. After some tests we can say that the bandwidth needed by RAT is about 64 kbps (the same result we obtained from the RAT statistics using a 32 kbps allowed bandwidth). RAT generates about 13.2 kbps of application traffic, but the overall IPv6 rate is about 64 kbps. This is due to the IPv6 basic header, the IPv6 Home Address destination option, the Routing Header, the UDP header, and the RTP header:

Ethernet   IPv6 basic header   IPv6 Routing header   Home Address destination option   UDP header   RTP header   VoIP payload (GSM codec)
14 bytes   20 bytes            24 bytes              24 bytes                          8 bytes      12 bytes     33 bytes

Mobile IPv6 signalling adds a significant amount of overhead (48 bytes). Therefore, it is interesting to note that the EF Moby Dick class (32 kbps peak rate) defined for real time services may not be suitable when mobility is involved.
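The inflation from 13.2 kbps of codec payload to several tens of kbps on the wire follows directly from the per-packet header sizes in the table above. The sketch below recomputes the rate from those figures; attributing the remaining gap up to the observed ~64 kbps to traffic not itemised in the table is our assumption:

# Back-of-the-envelope check of the RAT overhead figures from the table above.
FRAME_INTERVAL_S = 0.020                      # one GSM frame per packet, every 20 ms
PAYLOAD_BYTES = 33                            # VoIP payload (GSM codec)
HEADER_BYTES = 14 + 20 + 24 + 24 + 8 + 12     # Ethernet + IPv6 + Routing hdr + HAO + UDP + RTP

packets_per_s = 1 / FRAME_INTERVAL_S
app_kbps = PAYLOAD_BYTES * 8 * packets_per_s / 1000
wire_kbps = (PAYLOAD_BYTES + HEADER_BYTES) * 8 * packets_per_s / 1000

print("application rate: %.1f kbps" % app_kbps)    # ~13.2 kbps
print("on-the-wire rate: %.1f kbps" % wire_kbps)   # ~54 kbps, already above the 32 kbps EF peak rate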

5.2 Video Streaming

5.2.1 Description

A point-to-point video streaming scenario is provided. Users are able to play remote video files, evaluating the quality of reception in different circumstances. The low priority queue in one cell becomes saturated when a user receiving low priority traffic moves in; low priority users in that cell are affected.

People / site       2~1
Software            VideoLAN server and clients
Machines            1 server, 1 MN per user
Time per session    10 min
User evaluation     Quality of audio and video
Expert evaluation   RTCP parameters (jitter, delay, loss)
Concrete tests      Test 1: Two kinds of users, high priority and low, no background traffic. FHOs.
                    Test 2: Perform the same tests as before, but adding a "large enough" low priority background traffic directed to the high priority user.

5.2.2 Test Realization

The MNs (coleoptero and larva10) are attached to WLAN ARs (termita and cucaracha). The video stream server is a CN (chinche) attached to pulga, an Ethernet AR. No MD software runs in pulga. Background traffic is generated by the CN: we use mgen with a packet size of 300 bytes (UDP level) at 1000 packets/s. The video is the Ice Age trailer (about 500 kbps). The application is VideoLAN.

DiffServ marking rules in the CN (the DSCP values these bit strings correspond to are worked out in the sketch below):
Filter 1   Description VideoHighPrio   UDPDestPort 1234    CoS 101110
Filter 2   Description VideoLowPrio    UDPDestPort 5678    CoS 010010
Filter 3   Description NoiseLowPrio    UDPDestPort 33333   CoS 010010

For the first test (with no background traffic) both users watch the video while performing FHOs. FHOs in coleoptero are triggered because this MN is moving. Larva10 does not move, so to trigger its FHOs we change the signal thresholds.

For the second test, the test sequence is the following:
Step 1: Larva10 attached to cucaracha, coleoptero to termita. Larva10 receives a video from chinche with DSCP=0x12 (AF), low priority. Coleoptero receives a video from chinche with DSCP=0x2e (EF), high priority. Coleoptero also receives noise from chinche with DSCP=0x12.
Step 2: Coleoptero moves to cucaracha.
Step 3 (not included in the demo plan): Coleoptero moves back to termita.
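The CoS bit strings of the filters above are simply the DSCP written in binary. The sketch below (an illustration, not part of the marking software) converts them to the hexadecimal DSCP values used elsewhere in this report and to the IPv6 Traffic Class octet, and also computes the offered load of the mgen background traffic:

# Illustration only: relate the CoS bit strings of the marking filters above to DSCP and
# IPv6 Traffic Class values, and compute the mgen background load.

filters = {
    "VideoHighPrio": "101110",   # UDP dst port 1234
    "VideoLowPrio":  "010010",   # UDP dst port 5678
    "NoiseLowPrio":  "010010",   # UDP dst port 33333
}

for name, cos in filters.items():
    dscp = int(cos, 2)
    traffic_class = dscp << 2    # DSCP occupies the upper 6 bits of the Traffic Class octet
    print("%s: DSCP=0x%02x  TrafficClass=0x%02x" % (name, dscp, traffic_class))
# VideoHighPrio -> DSCP=0x2e (TC 0xb8); VideoLowPrio/NoiseLowPrio -> DSCP=0x12 (TC 0x48)

noise_mbps = 300 * 8 * 1000 / 1e6      # 300-byte UDP payloads at 1000 packets/s
print("background load: %.1f Mbit/s" % noise_mbps)   # 2.4 Mbit/s -> saturates the low priority queue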


Figure 44. Video streaming test (three diagrams showing steps 1-3 on the Madrid test bed: MNs coleoptero and larva10 attached to the WLAN ARs termita and cucaracha, CN chinche behind the Ethernet AR pulga, Home Agent/QoS Broker/DNS on escarabajo, Paging Agent/AAA server on aranya; the video flows are marked DSCP=0x2e and DSCP=0x12, and in step 2 the low priority queue becomes saturated)

5.2.3 Results

When no background traffic exists, both videos display well and the FHOs are not noticed by the users. When the background traffic is added and competes for resources with the low priority video (step 2), the quality of the low priority video becomes very poor; indeed, the video stops completely. The high priority video is not affected. When the low priority video no longer competes for resources with the background traffic (step 3), its quality recovers and becomes very good, just like the quality of the high priority video. The recovery takes a while, corresponding to the time needed to resynchronise the video.

5.3 Internet radio

5.3.1 Description

A streaming server is installed so as to provide an uninterrupted flow of music. Users are able to listen to this stream, evaluating the quality of the sound in normal conditions and while handovers take place. It is possible to have different kinds of users, receiving streams of different priority.

People / site       2~1
Software            Streaming server, clients
Machines            1 server, 1 MN per user
Time per session    30 min
User evaluation     Quality of sound, billing model
Expert evaluation   Behaviour of TCP during FHO
Concrete tests



Test 1: High priority radio. Users listen to 3 or 4 songs and do FHOs.

Test 2: Same as before, but half the users receive the premium radio stream and half the low priority stream.

5.3.2 Realization

The MNs are attached to WLAN ARs. The configuration of the queues in the QoSMs in the ARs is changed for this test to:

****************************************************************************************************
**** IPv6 QoS/Policing ACCESS ROUTER [MobyDick] (COPS CLIENT, 2 interfaces) version 3.7.28 ****
****************************************************************************************************
[root@termita root]# CONFIGURATION DECISION RECEIVED O.K.
--------------------------------------------
BEHAVIOUR TABLE received:
DSCP | Agre.  | Bandwidth | BW Borrow | RIO min queue | RIO max queue | RIO limit queue | RIO drop |
     | Number | (kbps)    | flag      | length (kB)   | length (kB)   | length (kB)     | prob.(%) |
--------------------------------------------------------------------------------------------------
0x00 |   0    |   300     |   1       |     0         |     0         |     0           |    0     |
0x2e |   1    |   100     |   0       |     0         |     0         |     0           |    0     |
0x22 |   2    |   100     |   0       |    50         |   100         |   120           |   10     |
0x12 |   3    |   900     |   0       |   250         |   500         |   600           |   10     |
0x0a |   4    |   300     |   0       |   150         |   300         |   400           |   10     |
0x0c |   4    |   200     |   0       |   100         |   200         |   250           |   25     |
0x0e |   4    |   100     |   0       |    50         |   100         |   120           |   50     |
--------------------------------------------------------------------------------------------------
------------------------------------------------------------------------------------
** TCAPI QUEUES READY!!... Sending to BB a CONFIGURATION REPORT COMPLETED message **
------------------------------------------------------------------------------------
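For later reference in the results (section 5.3.3), the per-DSCP bandwidths configured above already determine how the two radio streams used in this test behave. The following sketch is an illustration only (the 128 kbps MP3 rate is taken from the realization text below), not part of the QoSM software:

# Illustration: compare the MP3 stream rate with the bandwidth configured per DSCP
# in the behaviour table above (values in kbps; not part of the QoSM software).

behaviour_bw_kbps = {0x00: 300, 0x2e: 100, 0x22: 100, 0x12: 900, 0x0a: 300, 0x0c: 200, 0x0e: 100}
stream_kbps = 128     # MP3 radio stream (see below)

for dscp in (0x2e, 0x12):   # 0x2e = low priority radio, 0x12 = high priority radio in this test
    bw = behaviour_bw_kbps[dscp]
    verdict = "enough" if stream_kbps <= bw else "insufficient"
    print("DSCP 0x%02x: %d kbps configured -> %s for a %d kbps stream" % (dscp, bw, verdict, stream_kbps))
# 0x2e (100 kbps) is insufficient -> constant cuts; 0x12 (900 kbps) is ample -> good quality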

The change is done in the QoSB database, and this configures the QoSM with the new values. The radio stream server is a CN attached to pulga, an Ethernet AR. No MD software is running in pulga. The low priority radio is marked 0x2e and the high priority radio is marked 0x12. The program used is XMMS; the file sent is an MP3 file at 128 kbps, and TCP is used to transfer the audio stream. The marking rules in the stream server implement this mapping. Both users (cjbc and acuevas) have the 0x2e and 0x12 services in their profiles. The payment scheme for both is the same (see HDTariffS1 and HDTariffS2 in Figure 45):

Figure 45. User profiles

The charging applied for DSCP=46 and HDTariffS1=10 is:
chargeH = (int)(100*(bytestoH+bytesfromH)/1048576);
The charging applied for DSCP=18 and HDTariffS2=21 is:
chargeH = (int)(250*(bytestoH+bytesfromH)/1048576);
The session lifetime of both users is 150 s. Initially both users choose to listen to the high priority stream and perform FHOs. No accounting is done in this test.
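As an illustration of the two tariff formulas above, the short sketch below evaluates them for a hypothetical traffic volume (the byte counts are example values of ours, not measurements):

# Worked example of the two charging formulas above (hypothetical traffic figures).
MB = 1048576

def charge_dscp46(bytes_to_h, bytes_from_h):
    return int(100 * (bytes_to_h + bytes_from_h) / MB)    # DSCP=46, HDTariffS1=10

def charge_dscp18(bytes_to_h, bytes_from_h):
    return int(250 * (bytes_to_h + bytes_from_h) / MB)    # DSCP=18, HDTariffS2=21

print(charge_dscp46(10 * MB, 1 * MB))   # 1100 (example: 10 MB down, 1 MB up)
print(charge_dscp18(10 * MB, 1 * MB))   # 2750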


In the second test, acuevas listens to the low priority stream and cjbc to the high priority one. Accounting is done. acuevas and cjbc are attached to termita; cjbc moves from termita to cucaracha and back. Users start their Moby Dick sessions and immediately afterwards they start to receive the audio stream. When they are done, they stop the audio stream and close the Moby Dick session (pushing the Deregistration button in the NCP and thus stopping the periodic re-registration).

5.3.3 Results

When both users listen to the high priority stream, the quality is very good and the FHOs are not noticed. When acuevas chooses to listen to the low priority stream, the quality is low, with constant cuts. cjbc keeps listening to good quality music while doing some FHOs. The next figures show the charging information of acuevas and cjbc:

Figure 46. Total charging for [email protected]


Figure 47. Total charging for [email protected]

In the next figures, one day charging info is shown:

Figure 48. One day charging info for [email protected]


Figure 49. One day charging info for [email protected]

User cjbc performs some handovers, as shown in the figure. cjbc has different sessions in both the cucaracha and termita ARs. Since cjbc does some FHOs, some sessions are stopped before the lifetime of the session expires. In the next figures, charging details from one session of each user are shown:

Figure 50. Charging acuevas session details


Figure 51. Charging cjbc session details

User cjbc receives high priority traffic and acuevas receives real time traffic, which is cheaper. As can be seen, cjbc receives much more traffic than acuevas (because high priority traffic has more BW allocated than real time traffic in this test), and the DSCP employed (18 = 0x12) is more expensive, so the total amount to pay is much higher than for acuevas. Details of these types of traffic are shown in the next figures:

Figure 52. Service Specifications (SLS)

5.3.4 Expert evaluation: TCP during FHO

A small issue related to TCP and FHO was detected during the tests. Depending on the application, TCP tries to fill the MSS (Maximum Segment Size), so it generates packets that fill the PMTU between the nodes involved in the communication. When this happens (e.g. MP3 streaming uses TCP) and an FHO is performed, the oAR bicasts packets to the nCoA, encapsulating them in a tunnel. This tunnel adds a new IPv6 header to the packets, which can push them beyond the PMTU between the oAR and the nAR, so these packets are dropped (Packet Too Big). This error causes the TCP connection to be aborted. The problem should be solved in a future release of the FHO software; in order to run tests with TCP applications in the meantime, a workaround is to decrease the MTU of the CN's interface.
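The MTU workaround mentioned above can be quantified from the extra IPv6 header added by the bicasting tunnel. The figures below (1500-byte path MTU between oAR and nAR, 40-byte IPv6 header, 20-byte TCP header) are standard values assumed for illustration, not measurements from the trial:

# Sketch of the MTU workaround for the TCP/FHO issue described above.
# 1500-byte path MTU and standard header sizes are assumptions, not trial measurements.

PATH_MTU = 1500           # assumed path MTU between oAR and nAR
IPV6_HEADER = 40          # extra IPv6 header added by the bicasting tunnel
TCP_HEADER = 20           # TCP header without options

safe_cn_mtu = PATH_MTU - IPV6_HEADER               # largest packet that still fits once tunnelled
safe_mss = safe_cn_mtu - IPV6_HEADER - TCP_HEADER  # TCP payload per segment at that MTU

print(safe_cn_mtu)   # 1460 -> set the CN interface MTU at or below this value
print(safe_mss)      # 1400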

5.4 Internet café

This test simulates an actual environment, where different kinds of users access a "cyber café", using the Internet while moving and having long inactivity periods. Users can do web surfing, file downloading, or play chess games. Real time constraints are very loose. It should be noted that only a few IPv6 web servers are available.

People / site       1~4


Software            Web server, FTP server, IM server, clients
Machines            1 server, 1 MN for each client
Time per session    30 min
User evaluation     Navigation speed
Expert evaluation   AAAC issues, bandwidth consumed, FHO signalling procedures
Concrete tests      Test 1: Users should navigate through some web pages, download some files, and some of them could play a chess match. Test system stability despite mobility and paging. Users should change their point of attachment 5 or 6 times and go to dormant mode when they are in a long inactivity period.

5.4.1 Results

Users were satisfied with the performance of the service provided. However, the low QoS requirements of the applications and the absence of multimedia elements prevent "spectacular results", so users' comments were related to the charging model applied. The ability to perform real-time queries of the committed consumption was highly appreciated, as the user is able to assess whether the service is worth the money spent. Due to both the low QoS requirements and the "traditional nature" of the considered applications (users are used to ISPs' low-guaranteed provision), the "best effort" service is the most suitable for this kind of services. A better (and more expensive) service adds almost nothing to the user utility function. Successful handovers and paging procedures pass unnoticed by the users.

5.5 Quake 2 championship

With this test, a proper (and amusing) test environment is provided. Up to four players on each trial site are able to fight between themselves. FHOs occur and their effects on game performance are measured.

People / site       1~4 players
Software            Quake 2 server, clients
Machines            1 server, 1 MN for each client
Time per session    ~30 minutes
User evaluation     Game performance (via inquiry)
Expert evaluation   Charging, accounting (with a probe flow); round trip time, packet loss
Concrete tests      Test 1: Users start playing and get used to the game. Then test system stability despite mobility: users should change their point of attachment 5 or 6 times during a game.

Remarks:
- If the game is played between two teams, they should be balanced between Madrid and Stuttgart (because players close to the server have an advantage over the others).
- Each Quake client generates and receives approximately 15 kbps of traffic.


5.5.1 Results

Users can play Quake2 and perform handovers (both inter- and intra-technology) without being aware of the movement. Game performance is not affected at all by mobility. Quake2 is a very good example of a 4G application: it is a real time interactive game involving several players. It is also a good application for evaluating FHO performance due to its real time requirements. User evaluation (more than 10 students were involved in the tests) shows that the FHO performance is really good, with users not perceiving the mobility of their terminals.

Figure 54. Quake2 screenshot

[Figure 53 diagram elements: Q2 server; MNs at both sites; "Configurable Link" UC3M-U.Stutt emulated with NISTNet; Ethereal captures at both ends; MGEN/DREC probe traffic.]

Figure 53. Scenario for the Quake 2 test (note: Moby Dick "core nodes" (QoSB, AR, AAAC...) are not included in the figure, for the sake of simplicity).


6. Conclusions

The results presented show not only that Moby Dick fulfilled its goals, but also that the performance obtained is very high. Moby Dick offers the infrastructure of a 4G network provider over which packets can be transported with QoS, security, mobility and packet-based charging, allowing any kind of IPv6 application to use the Moby Dick infrastructure. The promising results obtained will enable follow-up projects where, for instance, a Moby Dick network provider can play the role of an aggregator of services.


7. References

[1] [Moby Dick] Moby Dick Web Site: http://www.ist-mobydick.org/
[2] [D0101] Moby Dick Deliverable: "Moby Dick Framework Specification", delivered in October 2001, available at: http://www.ist-mobydick.org/deliverables/D0101.zip
[3] [D0102] Moby Dick Deliverable: "Moby Dick Application Framework Specification", delivered in August 2002, available at: http://www.ist-mobydick.org/deliverables/D0102.zip
[4] [D0103] Moby Dick Deliverable: "Moby Dick Consolidated System Integration Plan", delivered in June 2003, available at: http://www.ist-mobydick.org/deliverables/D0103.zip
[5] [D0201] Moby Dick Deliverable: "Initial Design and Specification of the Moby Dick QoS Architecture", delivered in April 2002, available at: http://www.ist-mobydick.org/deliverables/D0201.zip
[6] [D0301] Moby Dick Deliverable: "Initial Design and Specification of the Moby Dick Mobility Architecture", delivered in April 2002, available at: http://www.ist-mobydick.org/deliverables/D0301.zip
[7] [D0302] Moby Dick Deliverable: "Mobility Architecture Implementation Report", delivered in December 2002, available at: http://www.ist-mobydick.org/deliverables/D0302.zip
[8] [D0401] Moby Dick Deliverable: "Design and Specification of an AAAC Architecture Draft on administrative, heterogeneous, multi-provider, and mobile IPv6 sub-networks", delivered in December 2001, available at: http://www.ist-mobydick.org/deliverables/D0401.zip
[9] [D0501] Moby Dick Deliverable: "Definition of Moby Dick Trial Scenarios", delivered in February 2002, available at: http://www.ist-mobydick.org/deliverables/D0501.zip
[10] [D0501 Annex] Moby Dick Deliverable Annex: "Definition of Moby Dick Trial Scenarios Annex", delivered in September 2002, available at: http://www.ist-mobydick.org/deliverables/D0501-Annex.zip
[11] [D0502] Moby Dick Deliverable: "First Test and Evaluation Report", delivered in April 2002, available at: http://www.ist-mobydick.org/deliverables/D0502.zip
[12] [D0503] Moby Dick Deliverable: "Second Test and Evaluation Report", delivered in January 2003, available at: http://www.ist-mobydick.org/deliverables/D0503.zip
[13] [Moby Summit] http://www.it.uc3m.es/mobydick
[14] [prism] http://www.intersil.com/design/prism/index.asp
[15] [Ethereal] Network Analyzer, available at: http://www.ethereal.com
[16] [diameter] Stefano M. Faccin, Franck Le, Basavaraj Patil, Charles E. Perkins, "Diameter Mobile IPv6 Application", internet-draft, draft-le-aaa-diameter-mobileipv6-01.txt, November 2001
[17] [diffserv] http://diffserv.sourceforge.net/
[18] [interlink] http://www.interlinknetworks.com/
[19] [NeTraMet] http://www2.auckland.ac.nz/net/Accounting/ntm.Release.note.html
[20] [sals] "COPS Usage for Outsourcing Diffserv Resource Allocation", draft-salsano-cops-odra-00.txt, Stefano Salsano, October 2001
[21] [tcapi] http://oss.software.ibm.com/developerworks/projects/tcapi/


8. Annex: Public Demonstrations

8.1 Mobile Summit in Aveiro, Moby Dick Demonstration

At the 2003 IST Mobile & Wireless Communications Summit, which took place in Aveiro, Portugal, the Moby Dick Project presented a test bed integrating all its key elements, with the exception of TD-CDMA. From June 15 to 18, the PTIn test bed was moved from the IT labs to the Summit site, in order to provide complete demos to visitors and give them a "Moby Dick Project" experience.

Picture 1 - Booth 1 @ IST Mobile & Wireless Communication Summit

The demos included video broadcast in a Wireless LAN environment, with FHOs occurring during the broadcast. Visitors were invited to follow the status of the demo through 2 visualization tools specifically developed for this demo. The visualization tools showed the current location of the MN and a list of the currently allocated resources, and were used later on by the project in more recent demos.

The demo involved two users: a user with a high quality profile called John Goode and a second user with a lower quality profile named Mary Sheep. A first login using Mary's profile showed a video with several failures; after re-registering with John's login, the video ran smoothly even during the FHO. The visitor was able to follow the registrations and the reservations made in the network through a visualization tool of the QoS Broker.

Picture 2 - Booth 2 @ IST Mobile & Wireless Communication Summit


Picture 3 - FHO Occurring while one of our operators brings the MN from one booth to the other

During the handover the visitor could follow the MN movements through the visualization tool, thus understanding the moment the FHO had occurred, although no losses were found in the video.

This demo attracted the attention of several Summit participants through ongoing demos during the whole Summit period. The national press covered the Summit, and the Moby Dick Project grabbed their attention. In the end a small interview was presented on a national television news bulletin (video available at http://mobydick.av.it.pt/videos/Summit2003-1.wmv).

8.2 Moby Dick Summit in Stuttgart

The project decided to combine the final audit with a project summit, and Stuttgart was chosen as the place for this event. This summit officially closed the 6-month Moby Dick field trial. During this field trial, master students from University Carlos III de Madrid and the University of Stuttgart worked in a mobile network environment comprising Ethernet, WLAN and, in Stuttgart, a TD-CDMA test bed. The event featured speakers from industry, operators and academia. During the event the following demonstration was shown to the visitors:

o A user registers first on a TD-CDMA access router using a TD-CDMA aware MN.
o Live video was sent from a CN to the MN using VIC.
o Seamless handovers were performed between TD-CDMA, WLAN and Ethernet.
o During the handovers the video kept playing without interruption.
o Charging results were shown at the end of the demonstration.
o Two visualization tools were used to show the FHO scenario and the current location of the MN.
o See the following pictures.


Figure 55: TD-CDMA Antenna

Figure 56: Mobile Terminal with Live Video


Figure 57: Visualization of FHO Scenario


Figure 58: Visualization of MN Location


Figure 59: Charging Results

The summit itself was arranged as follows:

09:00 - 09:30  Registration
09:30 - 10:00  Opening, Keynote Speech
Mobility Challenges - Part 1: Presentations on Running and Closing Projects
   - OverDRiVE (Ralf Tönjes, Ericsson)
   - CREDO (Hong-Yon Lach, Motorola)
11:00 - 11:15  Coffee break
11:15 - 12:30  Mobility Challenges - Part 2: Presentations on Running and Closing Projects
   - MODIS (Michel Mazzella, Alcatel Space)
   - Moby Dick (Jürgen Jähnert, University of Stuttgart)
12:45 - 13:30  Lunch break
13:30 - 14:15  Demonstration
14:15 - 16:15  Broadcast and Mobile Networks: Presentations on Starting Integrated Projects
   - Maestro (Nicolas Chuberre, Alcatel Space)
   - Ambient Networks (Ulrich Barth, Alcatel)
   - E2R (Didier Bourse, Motorola)
   - Daidalos (Hans J. Einsiedler, T-Systems)
16:15 - 16:25  Break
16:25 - 17:00  The Future of Mobility and IP: Panel discussion


The number of visitors was around 50 people. The call for participation was announced via e-mail, the project's Web site, and also via the Commission's Web site. The visitors came from all over Europe.

Figure 60: Demonstration

Figure 61: Foyer - place of demonstration

The summit, the demonstrations, and the audit were very successful. The visitors and the speakers were involved in fruitful discussions and the presentations were of a high level.

8.3 Moby Dick workshop in Singapore

For the Moby Dick meeting and workshop in Singapore, held in December 2003, I2R proposed to host a demonstration testbed to showcase technology developed in the Moby Dick project. The testbed was intentionally small, because of both hardware and manpower restrictions, and the intention was to do the demonstration via two means: first, a WLAN-only registration and handover demonstration, including QoS support; and secondly, a video recording of tests done in Madrid by UC3M.

Results: Some difficulties were faced because the exact hardware used at the Moby Dick trial sites was unavailable in Singapore. Moby Dick partners made this hardware available when they arrived in Singapore prior to the meeting; however, that left little time for the final configuration and troubleshooting of the testbed. Unfortunately, the documentation of the Moby Dick software was unclear or incomplete in some places. As a result, a significant part of the demonstration testbed was not configured properly. It is clear that while the Moby Dick software works, it is not easy to set up, and it takes a lot of time to fine-tune, especially when not all the experts on the individual technologies are present. From this experience, I2R would recommend that some effort be spent to make all the documentation consistent with each other,


and/or updated where necessary (i.e., where the documentation has not kept in synchronization with changes in the software). In the end, the live testbed was abandoned and only the demonstration of video recordings of tests performed earlier was done. This was, however, appreciated by the audience, and the Moby Dick demonstration scenarios clearly proved effective at showing the utility of the technology. People from South Korea, Japan, Taiwan and other places in Asia and Europe attended the workshop.

8.4 Demonstration to high school students at UC3M

University Carlos III of Madrid organised some informative public events for high school students in order to show university activities. Two of these events took place during the Moby Dick evaluation period, so some of the Moby Dick activities were shown to them as an example of Telematics work. About 40 students in each visit, with no technical background at all, checked the Moby Dick infrastructure. The demonstration consisted of:

o In the first visit, an example of Internet radio streaming was shown. A demo of FHO was shown, changing the access technology (inter-technology handover).

o In the second visit (the students' profile was more telematics related), an example of seamless handover with video streaming was shown. In this demo, users were able to check that there is no appreciable interruption during an inter-technology handover.

These demos were very useful for Moby Dick user feedback, because users with no technical background and no Moby Dick knowledge were able to evaluate our prototype. They saw how a 4th generation network works, with more than one access technology involved and more valuable services.