
MPLS Training Guide Book

Chapter 1: The Fundamentals of MPLS Networks and Data Flow

Introduction

In this chapter, we examine the basic components of MPLS networks. We undertake an exploration of data transport, equipment functions, and procedures that help make this emerging protocol an exciting and pivotal force in the world of telecommunications. The lessons in this chapter are fortified with examples, applications, hands-on exercises, and links to valuable MPLS resources.


What Is MPLS?

What is this new protocol that leading telecommunication experts claim “will take over the world”? You can rest your worried mind; Internet Protocol (IP) and asynchronous transfer mode (ATM) are not on the verge of extinction. In fact, it is my belief that multiprotocol label switching (MPLS) will breathe new life into the marriage of IP and ATM.

The best way to describe the function of MPLS is to draw an analogy to a large national firm with campuses located throughout the United States. Each campus has a central mail-processing point through which mail is sent, both around world and to other campuses. From the start, the mailroom has been under orders to send all intercampus correspondence via standard first-class mail. The cost of this postage is calculated into the company’s operational budget.

However, some departments have been complaining for several months that they require overnight delivery and package-tracking services. As a manager, you establish a system to send three levels of mail between campuses: first-class (normal) mail, priority (important) mail, and express mail (urgent). In order to offset the increased expense of the new services, you bill the departments that use these premium services at the regular rate of postage, plus 10 percent.

In this analogy, units of priority mail and express mail are processed by way of placement into specific envelopes with distinctive labels. These special labels and packets assure both prioritized handling and tracking capability within the postal network. In order to avoid slowdowns and bottlenecks, the postal facilities in the network create a system that uses sorting tables or sorting databases to identify and expedite these packets.


MPLS Network Construction

In an IP network, you can think of routers as post offices or postal sorting stations. Without a means to mark, classify, and monitor mail, there would be no way to process different classes of mail. In IP networks, you find a similar situation. Figure 1.1 shows a typical IP network with traffic having no specified route.

Figure 1.1: IP Network

In order to designate different classes of service or service priorities, traffic must be marked with special labels as it enters the network. A special router called a label edge router (LER) provides this labeling function (see Figure 1.2). The LER converts IP packets into MPLS packets and MPLS packets back into IP packets. On the ingress side, the LER examines the incoming packet to determine whether the packet should be labeled. A special database in the LER matches the destination address to the label. An MPLS shim header, as shown in Figure 1.2, is attached, and the packet is sent on its way.

Figure 1.2: IP Network with LERs and IP Packet with Shim Header Attached

To further understand the MPLS shim header, let’s look at the Open Systems Interconnection (OSI) model. Figure 1.3a shows OSI Layers 7 through 3 (L7–L3) in dark grey, and Layer 2 (L2) is shown in grey. When an IP packet (Layers 2–7) is presented to the LER, it pushes the shim header (b) between Layers 2 and 3. Note that the shim header, while part of neither Layer 2 nor Layer 3, provides a means by which to relate both Layer 2 and Layer 3 information.


Figure 1.3: MPLS Shim Header and Format

The shim header (c) consists of 32 bits in four parts: 20 bits are used for the label, three bits for experimental functions, one bit for the stack function, and eight bits for time to live (TTL). It allows for the marriage of ATM (a Layer 2 protocol) and IP (a Layer 3 protocol).
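A short Python sketch (illustrative only; the book itself contains no code) shows how those four fields pack into and out of a single 32-bit word:

```python
# Illustrative sketch of the 32-bit shim header just described:
# 20-bit label, 3 experimental bits, 1 stack (S) bit, 8-bit TTL.

def pack_shim(label, exp, s, ttl):
    """Pack the four shim-header fields into a single 32-bit word."""
    assert 0 <= label < 2**20 and 0 <= exp < 8 and s in (0, 1) and 0 <= ttl < 256
    return (label << 12) | (exp << 9) | (s << 8) | ttl

def unpack_shim(word):
    """Split a 32-bit shim header back into (label, exp, s, ttl)."""
    return (word >> 12) & 0xFFFFF, (word >> 9) & 0x7, (word >> 8) & 0x1, word & 0xFF

# Label 29, no experimental bits, bottom of stack, TTL 255:
word = pack_shim(29, 0, 1, 255)
print(f"{word:08x}")      # -> 0001d1ff
print(unpack_shim(word))  # -> (29, 0, 1, 255)
```

Packing and unpacking are exact inverses, which is what lets a router read and rewrite the label without touching the payload.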

In order to route traffic across the network once labels have been attached, the non-edge routers serve as label switch routers (LSRs). Note that these devices are still routers. Packet analysis determines whether they serve as MPLS switches or routers.

The function of the LSR is to examine incoming packets. Provided that a label is present, the LSR will look up and follow the label instructions and then forward the packet according to the instructions. In general, the LSR performs a label-swapping function. Figure 1.4 shows LSRs within a network.

Figure 1.4: Label Switch Routers

Paths are established between the LER and the LSR. These paths are called label switch paths (LSPs). The paths are designed for their traffic characteristics; as such, they are very similar to ATM path engineering. The traffic-handling capability of each path is calculated. These characteristics can include peak-traffic load, interpacket variation, and dropped-packet percentage calculation.

Figure 1.5 shows the LSP established between MPLS-aware devices. Because MPLS works as an overlay protocol to IP, the two protocols can co-exist in the same cloud without interference.


Figure 1.5: Label Switch Paths


Exercise 1.1: LER and Granularity

In an MPLS network, the LERs serve as quality of service (QoS) decision points. One method to establish these policies is to use the port numbers in Layer 4 of a packet. The tradeoffs in establishing these policies come from how much granularity is needed versus how manageable the configurations and tables are.

In the first example, we have created an MPLS LER table with three criteria: rules on IP address only; on IP and protocol number; and on IP, protocol, and port number.

Additionally, we have established routing paths A–Z, and we call them forward equivalence classes, or FECs. The FEC A paths are the highest-quality paths, and the FEC Z paths are the lowest-quality paths.

The policies use the port numbers to place traffic on particular paths. Port numbers are:

20/21 FTP, 25 E-Mail, 80 HTTP, 443 HTTPS, 520 Routing

1.  Examine the table and determine the category (IP, IP-protocol, IP-protocol and port) with the most entries.

2.  In Table 1.1, using the IP, protocol, and port number sections, how would HTTPS be handled in relationship to HTTP?

Table 1.1: MPLS LER Table

Sort and Classify by   Source IP          Target IP      DiffServ   Protocol # (Hex)   Port #   Label Out   Port Out   Inst   Fec
IP Only                192.168.10.0-255   40.5.0.0-255   All        All                All      200         A          Push   Z
IP, Protocol           192.168.10.0-255   40.5.0.0-255   None       6                  All      10          A          Push   A
IP, Protocol           192.168.10.0-255   40.5.0.0-255   None       11                 All      20          A          Push   B
IP, Protocol, port     192.168.10.0-255   40.5.0.0-255   None       1                  All      30          A          Push   C
IP, Protocol, port     192.168.10.0-255   40.5.0.0-255   None       6                  20       10          A          Push   A
IP, Protocol, port     192.168.10.0-255   40.5.0.0-255   None       6                  21       10          A          Push   A
IP, Protocol, port     192.168.10.0-255   40.5.0.0-255   None       6                  80       20          C          Push   B
IP, Protocol, port     192.168.10.0-255   40.5.0.0-255   None       6                  443      10          A          Push   A
IP, Protocol, port     192.168.10.0-255   40.5.0.0-255   None       6                  25       10          A          Push   A
IP, Protocol, port     192.168.10.0-255   40.5.0.0-255   None       11                 53       30          C          Push   A
IP, Protocol, port     192.168.10.0-255   40.5.0.0-255   None       11                 69       200         A          Push   A
IP, Protocol, port     192.168.10.0-255   40.5.0.0-255   None       11                 520      200         C          Push   X

3.  Describe a circumstance in which HTTPS should be handled differently from HTTP.

4.  What FEC classification is given to routing?

5.  How could giving the above classification to routing become a problem?

Answers

1.  The table with the most entries is the table that sorts by IP address, protocol number, and port number.

2.  HTTPS uses FEC A, whereas HTTP uses FEC B. Since HTTPS could produce revenue and is secure, it has a higher priority.

3.  HTTPS is given a higher priority because it offers the opportunity for revenue.

4.  Routing is classified as FEC Z (which is the lowest FEC rating).

5.  Routing and label distribution should be given the highest priority in the network; otherwise, packets could be misrouted.


Exercise 1.1 Summary

In this exercise, we saw the manner in which granularity of services affects the length of a switching table. The more decision points, or the more granular the decision points, the longer the switching tables and the more complex that switching becomes.
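To make the tradeoff concrete, here is a hypothetical first-match policy list in Python. None of this code is from the book; the FEC assignments loosely follow Table 1.1, with protocol numbers given in decimal (6 = TCP, 17 = UDP). Each extra matching field means more rules, a longer table, and more complex switching.

```python
# Hypothetical first-match policy list (illustrative only).
RULES = [
    # ((protocol, port), FEC); None acts as a wildcard
    ((6, 443), "A"),      # HTTPS -> highest-quality path
    ((6, 25), "A"),       # e-mail
    ((6, 80), "B"),       # HTTP
    ((None, None), "Z"),  # everything else -> lowest-quality path
]

def fec_for(protocol, port):
    """Return the FEC of the first rule matching this packet."""
    for (proto_match, port_match), fec in RULES:
        if proto_match in (None, protocol) and port_match in (None, port):
            return fec
    return "Z"

print(fec_for(6, 443), fec_for(6, 80), fec_for(17, 520))  # -> A B Z
```

Adding a port column doubled the rule count here; in a production LER table with many subnets the growth is much steeper.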

There are several key components to the construction of an MPLS network. The LER adds and/or removes (“pops” or “pushes”) labels. The LSR examines packets, swaps labels, and forwards packets. Finally, the LSPs are the preassigned, preengineered paths that MPLS packets could take.

At this point, you might be asking whether the advantages of MPLS are worth the extra effort needed to understand its workings. Consider the following for yourself:

Your company uses a database application that is intolerant of packet loss or jitter. In order to ensure that your prime traffic will get through, you have secured a high-cost circuit, and you have overprovisioned that circuit by 60 percent. In other words, you are sending all of your mail as “express mail”—for $13.50 per packet!

With MPLS, you can have the LER sort your packets and place only your highest-priority traffic on the most expensive circuits while allowing your routine traffic to take other paths. You have the ability to classify traffic in MPLS terms, and your LER sorts traffic into FECs. Figure 1.6 shows the network now broken down into FECs.

Figure 1.6: MPLS Network with Two FECs


Data Flow in MPLS Networks

The simplest form of data “flow” occurs when IP packets are presented to the ingress router, which is acting as the LER (see Figure 1.7).

Figure 1.7: Ingress LER Attaches a Shim Header

Much like the sorting room at your postal service’s branch location that classifies mail into service grades of first-class, priority, or express, the LER classifies incoming IP traffic, relating it to the appropriate label. As we’ve seen, in MPLS this classification process is called forward equivalence class (FEC).

LERs use several different modes to label traffic. In the simplest example, the IP packets are “nailed up” to both a label and an FEC using preprogrammed tables, such as the example shown in Table 1.2.

Table 1.2: LER Instruction Set

Destination/IP   Port Number   FEC   Next Hop       Label   Instruction
199.50.5.1       80            B     47.5.10.100    80      Push
199.50.5.1       443           A     120.8.4.100    17      Push
199.50.5.1       25            IP    100.5.1.100            (Do nothing; native IP)
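As a sketch (the guide gives no code, and the dictionary layout is my own), the nailed-up lookup in Table 1.2 behaves like this in Python:

```python
# Hypothetical sketch of the "nailed-up" LER lookup from Table 1.2:
# (destination IP, port) -> (FEC, next hop, label, instruction).
LER_TABLE = {
    ("199.50.5.1", 80):  ("B", "47.5.10.100", 80, "push"),
    ("199.50.5.1", 443): ("A", "120.8.4.100", 17, "push"),
    ("199.50.5.1", 25):  ("IP", "100.5.1.100", None, None),  # native IP, no label
}

def classify(dest_ip, port):
    """Return (fec, next_hop, label, instruction) for an incoming IP packet."""
    return LER_TABLE.get((dest_ip, port), ("IP", None, None, None))

print(classify("199.50.5.1", 443))  # HTTPS rides FEC A with label 17 pushed
```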

When the MPLS packets leave the LER, they are destined for the LSR, where they are examined for the presence of labels. The LSR looks to its forwarding table, called a label information base (LIB) or connectivity table, for instructions. The LSR will swap labels according to LIB instructions. Table 1.3 shows an example of a LIB.

Table 1.3: Label Switch Router’s Label Information Base (LIB)

Label In   Port In   Label Out   Port Out   FEC   Instruction
80         B         40          B          B     Swap
17         A         18          C          A     Swap
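A minimal Python sketch of the LIB lookup, using the two rows of Table 1.3 (the data structure is illustrative, not vendor code):

```python
# Sketch of an LSR's label information base (LIB) from Table 1.3,
# keyed on (label in, port in) -> (label out, port out, fec).
LIB = {
    (80, "B"): (40, "B", "B"),
    (17, "A"): (18, "C", "A"),
}

def swap(label_in, port_in):
    """Swap an incoming label per the LIB; raise KeyError if no entry exists."""
    label_out, port_out, fec = LIB[(label_in, port_in)]
    return label_out, port_out, fec

print(swap(17, "A"))  # -> (18, 'C', 'A')
```

Because the lookup is a single exact-match on (label, port), the swap can run entirely at the link layer, without an IP routing decision.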

Figure 1.8 demonstrates the LSR performing its label-swapping functions.


Figure 1.8: Label Swapping

At the egress of the network, the LER removes the MPLS header and forwards the packet to an IP network. Label swapping greatly simplifies MPLS packet flow.

The LER performs many packet-analysis functions: mapping Layer 2 to MPLS, mapping MPLS to Layer 3, and classifying traffic with great granularity. In addition, the LER decides which packets of the traffic become MPLS packets.

One decision-making method is called triggered mode. Using this method, a router will determine that there is a “traffic stream” when a predetermined number of packets are addressed to a single location and are scheduled to arrive within a specified timeframe. Once the router has made this determination, it will then reroute the stream of traffic for MPLS processing.

Even further enhancements and flexibility are available to MPLS using the label-stacking method, as shown in Figure 1.9.

Figure 1.9: Stacked Labels with Tunneled Network

Consider the following scenario. You own Network 1; however, your traffic must proceed across Network 2, a network that is not owned by your company. You must ensure that Network 2 handles your traffic according to your service-level agreement (SLA), but Network 2’s owners are not using the same label criteria as your company.


In this case, you would stack labels and build a tunnel across Network 2. This configuration would preserve the integrity of your network’s labels while allowing the other network to operate independently.
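The push and pop behavior of a label stack can be modeled in a few lines of Python. This is an illustrative model, not router code; the label values 16 and 18 are arbitrary examples, and the S bit marks the bottom of the stack as in the shim-header format.

```python
# Illustrative model of a label stack: a list of (label, s_bit) pairs,
# top of stack first. Only the bottom entry has the S bit set.

def push(stack, label):
    """Push a new top label; S=1 only if this is the first (bottom) label."""
    return [(label, 1 if not stack else 0)] + stack

def pop(stack):
    """Pop the top label, exposing the inner label (or the IP packet)."""
    return stack[0], stack[1:]

inner = push([], 16)        # your network's own label (bottom of stack)
tunneled = push(inner, 18)  # Network 2's tunnel label on top
print(tunneled)             # -> [(18, 0), (16, 1)]

top, rest = pop(tunneled)   # Network 2's egress pops only its own label
print(rest)                 # -> [(16, 1)]  your label arrives intact
```

This is exactly why the tunnel preserves your labels: Network 2 only ever touches the top entry.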


Practical Applications

Now that you have seen how data “flows” in an MPLS network, it is time to look at some practical implementations of MPLS and some of the commands that could be useful to you. Of course, different vendors may use different commands, but this section provides some examples.

Label Numbers

The first part of these applications relates to label numbers and how they are used or reserved. The MPLS standard reserves labels 0–15 for defined uses. This leaves labels 16–1,048,575 open for use.

Manufacturers differ on how these labels are assigned. For example, one vendor (Juniper) uses labels 16–1023 for manual LDP connections and configuration, while labels 1024–99,999 are stored for future use. That leaves labels 100,000–1,048,575, which can be assigned by the system automatically.

All manufacturers reserve labels 0–15, but they divide the remaining labels differently. This does not affect interoperability, because labels are negotiated when an LDP session is established. If a label is requested, it cannot be used until another label is assigned.

MPLS Commands

With other routers (such as Cisco), you can assign a label range with a simple command, as shown in Figure 1.10.

Figure 1.10: MPLS Label Range Commands

The next useful practical command involves seeing the forwarding tables. Cisco’s example is shown in Figure 1.11.

Figure 1.11: MPLS Forwarding Table Commands


Exercise 1.2: MPLS Data Flow

We find in an MPLS network that data moves from switch to switch using link-specific labels. Switches perform functions based on their switching or cross-connect tables.

These tables contain information such as port in, label in, port out, label out, next router, and instructions. The instructions are simple: “push” (insert a label), “swap” (change labels), and “pop” (remove label).

In this exercise, we trace a sample packet through an MPLS network in which five routers (R1–R5) connect networks X and Z. Tables 1.4–1.8 are used to discover the LSPs. Table 1.4 is used for Router 1, Table 1.5 for Router 2, Table 1.6 for Router 3, Table 1.7 for Router 4, and Table 1.8 for Router 5. Each table is different and represents that MPLS router’s internal switching table.

In Figure 1.12, we have an example of how data would move in this situation.

In Table 1.4, the packet (being HTTP port 80) enters as native IP/80 where a label (20) is pushed and the packet is sent out of port D. Notice that as the packet traverses the network, it exits Router 1 at port D and enters Router 3 at port B.

In Table 1.6, the label (20) is swapped for label 600, and the packet exits the router at port D, where it is hardwired to port B of R5.

In Table 1.8 (R5), the packet label 600 is popped to deliver a native packet to network Z.

Note that Figure 1.12 reflects the correct labels.

In this exercise, use the switching tables for Routers 1 through 5 and Figures 1.12 and 1.13 to map data flow and labeling across the network. Of course, the tables contain data that is not used for your packet, but they also contain switching data needed for other packets. Use only the data that you need to move your packets. Follow these instructions:

1. Always start with Table 1.4 and follow applications that enter through Interface A.

Table 1.4: Switching Table for Router 1

P_In    Label In   Label Out   Port Out   Instruction   Next Router
IP/80   None       20          D          Push          R3
IP/25   None       95          B          Push          R4
IP/20   None       500         C          Push          R2

2. The decision made by Table 1.4 will lead you to another switching table, depending on the application, port out, and the router out.

3. In Figure 1.12, note that the packet label numbers appear on the drawings. Use Figures 1.13 and 1.14 to indicate the correct label number.


Figure 1.12: Network Trace for HTTP Port Number 80

4. Use Figure 1.13 and Tables 1.4–1.8 to trace e-mail (port 25) through the network, and note the trace on the drawing.

Figure 1.13: Network Trace for Port 25 E-Mail

Table 1.5: Switching Table for Router 2

P_In   Label In   Label Out   Port Out   Instruction   Next Router
B      499        700         D          Swap          R5
B      500        65          C          Swap          R3
B      501        700         A          Swap          R9

Table 1.6: Switching Table for Router 3

P_In   Label In   Label Out   Port Out   Instruction   Next Router
B      20         600         D          Swap          R5
A      65         650         D          Swap          R5
B      501        700         A          Swap          R9

5. Use Figure 1.14 and Tables 1.4–1.8 to trace FTP (port 20) through the network, and note the trace on the drawing.

Figure 1.14: Network Trace for Port 20 FTP

Table 1.7: Switching Table for Router 4

P_In   Label In   Label Out   Port Out   Instruction   Next Router
B      95         710         D          Push          R5
A      500        650         D          Push          R5
B      515        700         D          Push          R5

Table 1.8: Switching Table for Router 5

P_In   Label In   Label Out   Port Out   Instruction   Next Router
A      500        None        D          Pop           CR
B      600        None        D          Pop           CR
B      650        None        D          Pop           CR
C      710        None        D          Pop           CR
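The HTTP trace described earlier (label 20 pushed at R1, swapped to 600 at R3, popped at R5) can be checked with a short Python simulation. The dictionary below is hand-built from only the rows of Tables 1.4, 1.6, and 1.8 that this packet touches, and the port wiring (R1:D feeds R3:B, R3:D feeds R5:B) follows the exercise text.

```python
# Simulation of the HTTP (port 80) trace through Routers 1, 3, and 5.
# Each router maps (port in, label in) -> (label out, port out, instruction, next router).
TABLES = {
    "R1": {("A", None): (20, "D", "push", "R3")},   # Table 1.4, IP/80 row
    "R3": {("B", 20): (600, "D", "swap", "R5")},    # Table 1.6
    "R5": {("B", 600): (None, "D", "pop", "CR")},   # Table 1.8
}

def trace(start_router):
    """Follow the packet hop by hop, recording each instruction applied."""
    router, port_in, label, hops = start_router, "A", None, []
    while router in TABLES:
        label_out, port_out, instr, next_router = TABLES[router][(port_in, label)]
        hops.append((router, instr, label_out))
        router, label = next_router, label_out
        port_in = "B"  # per the exercise, R1:D is wired to R3:B and R3:D to R5:B
    return hops

print(trace("R1"))
# -> [('R1', 'push', 20), ('R3', 'swap', 600), ('R5', 'pop', None)]
```

The same pattern, extended with the remaining rows, will reproduce the port 25 and port 20 traces asked for above.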


Exercise 1.3: Single Stacked Label Decode

There are several ways to complete this lab. The exercise itself is written in standalone form so that you do not need any products to complete the exercises. Just skip the hands-on block that follows.


Hands-On: Compare and Contrast IP/Ethernet and IP/MPLS/Ethernet

If Ethereal is the only protocol analyzer present on your computer, you can open the file called MPLS_basic by clicking it. If you have another protocol analyzer installed, you have to open the Ethereal program first and then open the file from the menu.

1. From your desktop, go to Start | Programs; find and double-click Ethereal.
2. Once the Ethereal program opens, open the file called MPLS_basic.cap.
3. Wait for the file to open. It will take a few minutes.
4. Find the frames that have 8847 in the protocol field (for example, Frame 9).
5. Follow the steps in the following exercise.

In protocol analyzers, we count bytes from left to right, starting at 0: the first byte is at offset 0, the second byte at offset 1. In Figure 1.15, we see a standard IP-over-Ethernet packet.
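You can reproduce the offset arithmetic in Python. The frame below is fabricated for illustration (the MAC addresses are placeholders); only the EtherType 0x8847 and the shim-header bytes follow the frames examined in this exercise.

```python
# A hypothetical Ethernet frame, built byte by byte so the offsets
# line up with the questions below. Offsets count from 0.
frame = bytes.fromhex(
    "000000000001"  # offsets 0-5:   destination MAC (placeholder)
    "000000000002"  # offsets 6-11:  source MAC (placeholder)
    "8847"          # offsets 12-13: EtherType 0x8847 = MPLS unicast
    "0001d1ff"      # offsets 14-17: MPLS shim header
)

print(frame[12:14].hex())  # -> 8847
print(frame[14:18].hex())  # -> 0001d1ff
```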

1.  Look at Frame 1 in Figure 1.15. What is the value at offset 12 and 13?

Figure 1.15: Frame 1

2.  Look at Frame 1 in Figure 1.15. What is the value at offset 14 and 15?

3.  Look at Frame 9 in Figure 1.16. What is the value at offset 12 and 13? Why is this value different? What does it mean?

Figure 1.16: Frame 9


4.  Look at Frame 9 in Figure 1.16. What are the values at offsets 14, 15, 16, and 17?

Translate the hex number into binary using the following chart.

128 64 32 16 8 4 2 1   128 64 32 16 8 4 2 1
128 64 32 16 8 4 2 1   128 64 32 16 8 4 2 1

5.  Determine the values for the following:

    a. The label __________
    b. The experimental bits __________
    c. The stack bit __________
    d. The TTL value __________

6.  Look at offsets 18 and 19. What are their values?

7.  Compare the values in Questions 2 and 6. What do you find interesting about them?

Answers

1.  The value at offsets 12 and 13 is 0800 (the next header is IP).

2.  45 C0 (IP Version 4 with a 20-byte header, and class of service)

3.  8847  (A shim header next).

In Figure 1.15, Frame 1, the note indicates that an IP header is next. In Figure 1.16, the note indicates that an MPLS shim header is next.

It means that the frame has been modified to accommodate MPLS.

4.  00   01   D1   FF

Translate the hex number into binary using the chart below.

Label   E   S   TTL
0001D   0   1   FF

5. a. 29   b. 0   c. 1   d. 255

6.  45 00

7.  MPLS was inserted and moved the start of the IP header by 32 bits.

Exercise 1.3 Summary


In this lab, we have seen how an IP packet and an MPLS packet compare to one another, and we have seen an MPLS header in detail. To go further, you may even want to decode your own packets.

Exercise 1.4: Stacked Decode


In this exercise, you will decode and study an MPLS packet used in a tunneling situation where labels are stacked.

There are several ways to complete this exercise. The exercise itself is written in standalone form so that you do not need any products to complete the exercises. Just skip the hands-on block.

Hands-On: Open the File and Review File Content


If you are the “hands-on” type and you want to see MPLS packets on a protocol analyzer, you need the two items of software (Ethereal and the MPLS-basic-cap sample) mentioned in the previous hands-on exercise.

1. From your desktop, go to Start | Programs and click Ethereal.
2. Once Ethereal opens, open the file called MPLS1.cap.
3. Wait for the file to open. It will take a few minutes.

The file should look like Figure 1.17. Now let’s review the file content in the following steps.

Figure 1.17: Open MPLS_basic File

1.  Look at Frame 9, as shown in Figure 1.17. Note the values found at offsets 14 to 21. Record them in hex here:

_____ _____ _____ _____ _____ _____ _____ _____

14 15 16 17 18 19 20 21

2.  Using the following chart, translate the hex number into binary for Label 1 found at offsets 14–17.

128 64 32 16 8 4 2 1   128 64 32 16 8 4 2 1
128 64 32 16 8 4 2 1   128 64 32 16 8 4 2 1

3.  What are the values of each of the following for Label 1?

    a. The label __________
    b. The experimental bits __________
    c. The stack bit __________
    d. The TTL value __________

4.  Using the following chart, translate the hex number into binary for Label 2 found at offsets 18–21.

128 64 32 16 8 4 2 1   128 64 32 16 8 4 2 1
128 64 32 16 8 4 2 1   128 64 32 16 8 4 2 1

5.  What are the values of each of the following for Label 2?

    a. The label __________
    b. The experimental bits __________
    c. The stack bit __________
    d. The TTL value __________

6.  Is the stack bit set for Label 1 (offset 14–17)? __________


7.  Is the stack bit set for Label 2 (offset 18–21)? __________

8.  Explain why the stack bit may be set differently. __________

Answers

1.  00   01   20   ff   00   01   01   ff

14 15 16 17 18 19 20 21

2.  00000000 00000001 00100000 11111111

3. a. 18   b. 0   c. 0   d. 255

4.  Label   E   S   TTL
    00010   0   1   FF

5. a. 16   b. 0   c. 1   d. 255

6.  OFF

7.  ON

8.  The stack bit is turned on to indicate that this is the last header in the stack (or the header closest to the IP header).

Checkpoint

Match the lettered item with its appropriate numbered description.

1. ____ is the path             A. LER
2. ____ pushes, pops labels     B. FEC
3. ____ swaps labels            C. LSP
4. ____ traffic class           D. LSR

Answers: 1. C; 2. A; 3. D; 4. B.

Chapter Summary and Review

The concept of processing transmitted communication by label is not new; it has been implemented successfully for the U.S. Postal Service, Federal Express, and many other package-handling systems. In networking, this process has been used in Frame Relay and ATM. What is new is that the ubiquitous and uncontrolled Internet Protocol (IP) is now operating under a new set of rules where it can be classified, managed, and policed across any type of network.

The nice feature of MPLS is that setting it up does not involve a forklift upgrade of networking hardware. In some cases, only software-based modifications to existing IP routers are required to accommodate MPLS. For a fraction of the expense incurred in installing a dedicated network, MPLS allows IP traffic to be classified, marked, and policed, while consistently providing a method by which Layers 2 and 3 can exchange data.

MPLS does not replace IP; rather, it supplements IP, so that traffic can be marked, classified, and policed. With the use of MPLS, end-to-end quality of service can finally be achieved.

MPLS highlights include the following:

- MPLS allows for the marriage of IP to Layer 2 technologies (such as ATM) by overlaying a protocol on top of IP networks.
- Network routers that are equipped with special MPLS software work by processing MPLS labels contained within the shim header.
- Raw IP traffic is presented to the LER, where labels are pushed; these packets are forwarded over the LSP to the LSR, where labels are swapped.
- At the egress to the network, the LER removes the MPLS labels and marks the IP packets for delivery.
- If traffic crosses several networks, it can be tunneled across the networks by use of stacked labels.

Knowledge Review 

Answer the following true/false questions.

1. MPLS is like ATM: an LSP is like an ATM VCI, and a label is like a VPI.
2. MPLS allows engineering of IP traffic.
3. MPLS labels can be stacked.
4. MPLS LSRs pop labels.
5. In MPLS, packets are assigned labels like ZIP codes, and they remain the same throughout the network.

Answers: 1: true; 2: true; 3: true; 4: false; 5: false.

Going Further

There is an excellent introduction to MPLS sponsored and created by Nortel Networks (see www.nortelnetworks.com/corporate/technology/mpls/tooldemo.html). Other MPLS resource sites:


MPLS Resource Center: http://mplsrc.com/
George Mason University Lab: www.gmu.edu/departments/ail/resources.htm
For information on other MPLS sites, see the MPLS Links Page: www.rickgallaher.com/mplslinks.htm

Endnotes

1. A shim header is a special header placed between Layer 2 and Layer 3 of the OSI model. The shim header contains the label used to forward the MPLS packets.

Chapter 2: MPLS Label Distribution

Introduction


In Chapter 1, we discussed both data flow and foundational concepts of MPLS networks. In this chapter, we introduce the concepts and applications of MPLS label distribution, and we take a good look at MPLS signaling. You will also have the opportunity to exercise and expand your working knowledge with both hands-on exercises and vendor examples.

The Early Days of Switching

Circuit switching by label is not a new practice. A quick review of telephony shows us how signaling was done in the “old days.” In the early days of telephone systems, telephone switchboards had patch cables and jacks; each jack was numbered to identify its location. When a call came in, an operator would plug a patch cord into the properly numbered jack. This is a relatively simple concept.

Recalling those days, we find that, although the process seemed simple enough, it was really hard work (see Figure 2.1). Telephone operators would attend school for weeks and go through an apprenticeship period before qualifying to operate a switchboard, because the rules for connecting, disconnecting, and prioritizing calls were complex and varied from company to company.

Figure 2.1: Label Switching in the Early Days

Here are some rules of switching:

- Never disconnect the red jacks; these are permanent connections.
- Connect only the company executives to the jacks labeled for long distance.
- Never connect an executive to a noisy circuit.
- If there are not enough jacks when an executive needs to make a call, disconnect the lower-priority calls.
- When the secretary for “Mr. Big” calls at 9:00 a.m. to reserve a circuit for a 10:00 a.m.–noon time slot, make sure that the circuit is ready and that you’ve placed the call by 9:50 a.m.
- In an emergency, all circuits can be controlled by the fire department.

Essentially, one operator had to know permanent circuits (red jacks), switched circuits, prioritization schemes, and reservation protocols. When automatic switching came along, the same data and decision-making processes had to be loaded into a software program.

MPLS Label Distribution

MPLS switches, like the switchboard operators of old, must be trained; they must learn all the rules and all the circumstances under which to apply those rules. Two methods are used to make switches that are “trained” for these purposes.


One method uses hard programming and is similar to how a router is programmed for static routing. Static programming eliminates the ability to dynamically reroute or manage traffic.

Modern networks change on a dynamic basis. To accommodate the adjusted needs of these networks, many network engineers have chosen to use the second method of programming MPLS switches: dynamic signaling and label distribution. Dynamic label distribution and signaling can use one of several protocols.

Each protocol has its advantages and disadvantages. Because this is an emerging technology, we have not seen the dust fully settle on the most dominant labeling and signaling protocols. Yet, despite the selection of protocols and their tradeoffs, the basic concepts of label distribution and signaling remain consistent across the protocols.

At a minimum, MPLS switches must learn how to process packets with incoming labels. This process is accomplished through the use of a cross-connect table.

Here is an example of a cross-connect function: Label 101 entering at Port A will exit via Port B with a label swapped for 175. The major advantage of using cross-connect tables instead of routing is that cross-connect tables can be processed at the “data link” layer, where processing is considerably faster than routing.
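That cross-connect entry can be written as a one-row lookup table; the following Python sketch is illustrative only, not vendor code:

```python
# Sketch of the cross-connect function described above:
# (port in, label in) -> (port out, label out). The single entry
# implements: label 101 entering Port A exits Port B as label 175.
XCONNECT = {("A", 101): ("B", 175)}

def cross_connect(port_in, label_in):
    """Look up the outgoing port and swapped label for an incoming packet."""
    return XCONNECT[(port_in, label_in)]

print(cross_connect("A", 101))  # -> ('B', 175)
```

Because this is an exact-match lookup rather than a longest-prefix route lookup, it can run at the data link layer, which is where the speed advantage comes from.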

We start our discussion using a simple network (see Figure 2.2) with four routers. Each router has designated ports. For the sake of illustration, each port has been given a simple letter (a, b, s, h, a, and e). These port identifications are router specific. The data flows from the input a of R1 to the input of R4.

Figure 2.2: Basic MPLS Network with Four Routers

The basic network diagram shown in Figure 2.2 will be enhanced as we progress through MPLS signaling.

Control of Label Distribution

Two modes are used to load cross-connect tables: independent control and ordered control.

Independent Control

Each router could listen to routing tables, make its own cross-connect tables, and inform others of its information. These routers would be operating independently.

Independent control is a term given to a situation in which there is no designated label manager and when every router has the ability to listen to routing protocols, generate cross-connect tables, and distribute them freely (see Figure 2.3).


Figure 2.3: Independent Control

Ordered Control

The other model of loading tables is ordered control, as shown in Figure 2.4. In the ordered control mode, one router—typically the egress LER—is responsible for distributing labels.

Figure 2.4: Ordered Control (Pushed)

Each of the two models has its tradeoffs. Independent control provides faster network convergence: any router that hears of a routing change can relay that information to all other routers. The disadvantage is that there is no single point of control generating the tables, which makes traffic engineering more difficult.

Ordered control has the advantages of better traffic engineering and tighter network control; however, its disadvantages are that convergence time is slower and the label controller is the single point of failure.

Label Distribution Triggering

Within ordered control, two major methods are used to trigger the distribution of labels. These are called downstream unsolicited (DOU) and downstream on demand (DOD).


DOU

In Figure 2.4, we saw the labels “pushed” to the downstream routers. This push is based on the decisions of the router that has been designated as label manager. When labels are sent out unsolicited by the label manager, it is known as downstream unsolicited, or DOU.

Consider these examples: The label manager may use trigger points (such as time intervals) to send out labels or label-refresh messages every 45 seconds. Or a label manager may use a change in the standard routing tables as a trigger; when a routing table changes, the label manager may send out label updates to all affected routers.

DOD

When labels are requested, they are “pulled” down, or demanded, so this method has been called pulled or downstream on demand, or DOD. Note in Figure 2.5 that labels are requested in the first step, and they are sent in the second step.

Figure 2.5: Downstream on Demand (DOD)
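The push/pull distinction can be sketched in a few lines. The trigger logic, FEC prefixes, and label pool below are hypothetical and exist only to contrast the two behaviors; they are not from any protocol specification.

```python
# Schematic contrast of DOU (push) and DOD (pull) label distribution.
# The label manager, prefixes, and label range are all hypothetical.
free_labels = iter(range(100, 200))  # pool of unused labels

bindings = {}  # FEC prefix -> label

def advertise_unsolicited(fec):
    """DOU: the label manager binds and pushes a label with no request."""
    bindings[fec] = next(free_labels)
    return fec, bindings[fec]          # message pushed downstream

def request_label(fec):
    """DOD: a downstream router asks; the label is bound on demand."""
    if fec not in bindings:
        bindings[fec] = next(free_labels)
    return bindings[fec]               # label "pulled" by the requester

advertise_unsolicited("192.168.10.0/24")   # pushed without any request
label = request_label("10.0.0.0/8")        # pulled only when asked for
```

Either way, the receiving LSR ends up with the same kind of binding; only the trigger (timer or table change vs. an explicit request) differs.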

Whether the labels arrive via independent control or ordered control, via DOD or DOU, the LSR creates a cross-connect table like the one shown in Figure 2.6.

Figure 2.6: LSR with Cross-Connect Tables Populated


The cross-connect tables are sent from R3 to R1. The table headings read label-in, port-in, label-out, port-out, and instruction (I). In this case, the instruction is to swap (S). It is important to note that the labels and cross-connect tables are router specific.

After the cross-connect tables are loaded, the data can follow the designated LSP and flow from Router 1 to Router 4, with each router following specific instructions to swap the labels, as shown in Figure 2.7.

Figure 2.7: Data Flow on LSP

Checkpoint  Answer the following questions:

1. When a label is requested in DOD, it is said to be ______________.
2. In ordered control, how many routers are responsible for label distribution?
3. Between independent control and ordered control, which provides for faster network convergence time?
4. True or false: Cross-connect tables are made regardless of how labels arrive.

Answers: 1. pulled; 2. one; 3. independent control; 4. true.

Brief Review

To begin a review, we know that routers need cross-connect tables in order to make switching decisions. Routers can receive these tables either from their neighbors (via independent control) or from a label manager (via ordered control).

A label manager can send labels on request (downstream on demand, or DOD), or it can send labels when it decides to do so, even though the downstream routers have made no label requests (downstream unsolicited, or DOU).

With these basic concepts understood, there are some more advanced concepts to consider, such as these: Just how are labels sent to routers? What vehicle is used to carry these labels? How is the QoS information relayed or sent to the routers?

To review a bit, it is understood that MPLS packets carry labels; however, the labels themselves contain no fields that tell routers how to process the packets for QoS.

Recalling that traffic can be separated into groups called forward equivalence classes (FECs) and that FECs can be assigned to label switch paths (LSPs), we can perform traffic engineering that will force high-priority FECs onto high-quality LSPs and lower-priority FECs onto lower-quality LSPs. Mapping traffic to different QoS standards makes the distribution of labels and maps a more complex process.

Figure 2.8 shows a drawing of what goes on inside an LSR. There are two planes: the data plane and the control plane. Labeled packets enter at input a with a label of 1450, and they exit port b with a label of 1006. This function takes place in the cross-connect table. This table can also be called the next-hop label forwarding entry (NHLFE) table.

Figure 2.8: A Closer Look at the Router

This table is not a standalone database. It connects to two additional databases in the control plane: the FEC database and the FEC-to-NHLFE database. The FEC database contains, at a minimum, the destination IP address, but it can also contain traffic characteristics and packet-processing requirements. Data in this database must be related to a label; the process of relating an FEC to a label is called binding.

Tables 2.1–2.4 constitute an example of how labels and FECs are designed to work together. We see that packets with labels can be quickly processed when entering the data plane, provided that the labels are bound to an FEC. However, a lot of background processing must take place, apart from the data traffic, before a cross-connect table can be established.

Table 2.1: FEC Database

FEC            Protocol  Port  Treatment
192.168.10.1   06        443   Guaranteed no packet loss
192.168.10.2   11        69    Best effort
192.168.10.3   06        80    Controlled load

Table 2.2: Free Label

100–10,000 are not in use at this time.

Table 2.3: FEC to NHLFE

FEC            Label In  Label Out
192.168.10.1   1400      100
192.168.10.2   500       101
192.168.10.3   107       103


Table 2.4: NHLFE

Label In  Label Out
1400      100
500       101
107       103
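Tables 2.1 through 2.4 can be modeled as a few linked dictionaries. This sketch mirrors the sample data above (protocol numbers in hex, as in Table 2.1: 0x06 = TCP, 0x11 = UDP) and only illustrates the binding relationships; it is not a control-plane implementation.

```python
# The control-plane databases of Tables 2.1-2.4 as linked dictionaries.
# FEC database (Table 2.1): destination plus traffic requirements.
fec_db = {
    "192.168.10.1": {"protocol": 0x06, "port": 443, "qos": "guaranteed no packet loss"},
    "192.168.10.2": {"protocol": 0x11, "port": 69,  "qos": "best effort"},
    "192.168.10.3": {"protocol": 0x06, "port": 80,  "qos": "controlled load"},
}

# FEC-to-NHLFE database (Table 2.3): each FEC bound to its label pair.
fec_to_nhlfe = {
    "192.168.10.1": (1400, 100),
    "192.168.10.2": (500, 101),
    "192.168.10.3": (107, 103),
}

# NHLFE cross-connect table (Table 2.4), derived from the bindings above.
# Once built, the data plane swaps labels without consulting fec_db at all.
nhlfe = {label_in: label_out for (label_in, label_out) in fec_to_nhlfe.values()}

print(nhlfe[1400])  # -> 100
```

The point of the derivation is that the expensive FEC classification happens once, offline, when the binding is made; the per-packet path touches only the small `nhlfe` table.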

Protocols

Finding a transport vehicle with which to build these complex tables is of the utmost concern to network designers. What is needed is a protocol that can carry all the necessary data while being fast, being self-healing, and maintaining excellent reliability.

Label Distribution Protocol, or LDP, was created by design engineers and the MPLS working group as a means of addressing such transport needs. This protocol works much like a telephone call: when labels are bound, they remain bound until a command appears to tear down the call. This hard-state operation is less “chatty” than a protocol that requires refreshing. LDP provides implicit routing.

Other groups argue against using a new, untested label distribution protocol when there exist routing protocols that can be modified or adapted to carry the bindings. Thus, some existing routing protocols have been modified to carry information for labels. Border Gateway Protocol (BGP) and Intermediate System-to-Intermediate System (IS-IS) work well for distributing label information along with routing information.


The LDP, BGP, and IS-IS protocols establish the label switch path (LSP) but do little in the service of traffic engineering, because routed traffic can potentially be redirected onto a high-priority LSP, thereby causing congestion.

To overcome this problem, signaling protocols were established to create traffic tunnels (explicit routing) and allow for better traffic engineering. These protocols are Constraint-based Routing Label Distribution Protocol (CR-LDP) and Resource Reservation Protocol with Traffic Engineering (RSVP-TE). In addition, the Open Shortest Path First (OSPF) routing protocol has undergone modifications to handle traffic engineering (OSPF-TE); however, it is not widely used as of this writing.

Table 2.5:

Protocol  Routing   Traffic Engineering
LDP       Implicit  No
BGP       Implicit  No
IS-IS     Implicit  No
CR-LDP    Explicit  Yes
RSVP-TE   Explicit  Yes
OSPF-TE   Explicit  Yes

Checkpoint  Choose one of the three terms in parentheses to answer each of the following questions.

1. Traffic tunnels provide for (implicit, explicit, signal-based) routing.
2. The process of relating a label to an (FEC, OSPF, NHLFE) is known as binding.
3. (BGP, IS-IS, OSPF) does not distribute label information with routing information.
4. NHLFE is a (protocol, standard, table) that works within an LSR.

Answers: 1. explicit; 2. FEC; 3. OSPF; 4. table.

Practical Applications: Label Distribution

Hundreds of pages worth of forum comments have been written about label distribution methods, including OSPF-TE and LDP.

The LDP protocol is standardized as detailed in RFC 3036. To obtain detailed vendor explanations and commands, contact Cisco, Juniper, and Riverstone.

In this section, we take a look at how to establish the LDP protocol on a Riverstone router and how to show LDP status. For other vendors, see the related links at the end of the chapter.

Note  With some vendors, RSVP and LDP protocols may not be enabled on the same interfaces.


Configuration Steps

The configuration of LDP varies with how the router is already configured. If the interfaces have already been created with routing protocols, and MPLS is running, the next step is configuring the LDP protocol. The process of doing so is explained in this section.

If you must configure LDP on an interface that has yet to be created, four basic steps are involved. They are outlined here and described in the following sections:

1. Create and enable the interface.
2. Create OSPF on the interfaces.
3. Create and enable MPLS on the new interfaces.
4. Create and enable LDP.

Enabling the Label Distribution Protocol

LDP works very differently from RSVP-TE. In the case of LDP, simply enabling the protocol on the required interfaces will allow the routers to discover directly connected label distribution peers via multicast UDP packets and, subsequently, to establish a peering relationship over TCP.
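The discovery transport just described can be sketched with plain sockets. RFC 3036 assigns LDP the well-known port 646 and uses the all-routers multicast group 224.0.0.2 for basic discovery hellos; this sketch only opens the sockets and builds no real LDP PDUs.

```python
import socket

# Schematic of LDP transport (RFC 3036): discovery hellos travel as UDP
# multicast to the all-routers group, and sessions then run over TCP.
LDP_PORT = 646             # well-known LDP port for both UDP and TCP
ALL_ROUTERS = "224.0.0.2"  # basic discovery destination

# Discovery: periodic hellos sent toward directly connected peers.
hello_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
hello_sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)
# hello_sock.sendto(hello_pdu, (ALL_ROUTERS, LDP_PORT))  # hello_pdu not built here

# Session: after discovery, a TCP connection carries the LDP session.
session_sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# session_sock.connect((peer_address, LDP_PORT))  # peer learned from hellos
```

The two-transport design is why the session states in Table 2.6 (later in this section) include both TCP connection progress and LDP initialization steps.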

Each router will create and distribute a label binding to an FEC for each loopback interface defined on the router. Each physical interface that is expected to function in an LDP environment must be added to the LDP process. To enable LDP capabilities on an interface, simply add it to the process:

rs(config)# ldp add interface <name|all>

To start the LDP process on the router:

rs(config)# ldp start

The loopback interface is added automatically when all interfaces are added to LDP as a group. This interface is required to establish remote LDP peering sessions, and if the all option is not used to add the interfaces to LDP, the lo0 interface must be explicitly added.[1]

Configuring New Interfaces with Show Commands

We use Figure 2.9 to show the configuration of LDP from LER1 (far left) to LSR 1 and LSR 2. The four basic steps that were detailed above are explained along with the related show commands.

Figure 2.9: Full Network Diagram


IGP, MPLS, and LDP are enabled only on the core-facing interfaces. This network and the associated configuration form the basis for the show commands that follow.

Create and Enable the Interface

The interface is created by following the commands shown in Figures 2.10–2.12. In Figure 2.10, the interface with the IP address 192.168.1.2/30 is created.

Figure 2.10: Creating the Interface

Figure 2.11 shows the creation of the second interface on router LER1 for the 192.168.1.6/30 address. In Figure 2.12, the interface lo0 with an address of 2.2.2.1 is created.

Figure 2.11: Detailed View of LER1 Interface gi 2.2


Figure 2.14: LER1 Enable OSPF

Create and Enable MPLS on New Interface

Figure 2.15 shows how MPLS is added to the interfaces LSR1 and LSR2. After MPLS is added, it must be started by a command line.

Figure 2.15: MPLS Added to Interface

Create and Enable LDP

After MPLS is added, LDP must be added to each MPLS interface. Figure 2.16 shows how LDP is added to the interfaces and then started.


Figure 2.16: LDP Started

LDP Session Information

High-level session information for each peer is shown in Table 2.6.

Table 2.6: Possible Session States

State        Description
Nonexistent  No session exists
Connecting   TCP connection is in progress
Initialized  TCP connection established
OpenSent     Initialization or keepalive messages being transmitted
OpenRec      Keepalive message being transmitted
Operational  Session established
Closing      Closing session
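The progression through the states of Table 2.6 can be walked as a tiny state machine. The event names below are invented for illustration only; the real LDP session state machine in RFC 3036 has more branches (rejected parameters, timeouts, errors).

```python
# Simplified walk through the LDP session states of Table 2.6.
# Only the "happy path" to Operational is modeled; event names are made up.
def next_state(state, event):
    transitions = {
        ("Nonexistent", "tcp_connect"):    "Connecting",
        ("Connecting",  "tcp_up"):         "Initialized",
        ("Initialized", "send_init"):      "OpenSent",
        ("OpenSent",    "recv_init"):      "OpenRec",
        ("OpenRec",     "recv_keepalive"): "Operational",
        ("Operational", "close"):          "Closing",
    }
    return transitions.get((state, event), state)  # unknown events: stay put

state = "Nonexistent"
for event in ["tcp_connect", "tcp_up", "send_init", "recv_init", "recv_keepalive"]:
    state = next_state(state, event)
# state is now "Operational"
```

Anything short of Operational means labels are not yet being exchanged, which is why the show commands below report the state explicitly.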

RS# ldp show sessions all

The output shows: the address of the peer; the state of the session; the TCP connection state (closed, opening, open); the time before the session expires without a keepalive; and the number of labels sent and received, and received labels filtered.

The LDP show session command is very powerful, with several command extensions. Figure 2.17 is a graphic representation of the network and the show LDP sessions command.


Figure 2.17: Show LDP Sessions

Other Show Commands

RS# ldp show sessions all verbose

A more detailed view of the session information is available by adding the verbose option. The following additional information is displayed: the session ID (comprising both LDP identifiers); timer values; the local and remote physical interfaces; and a list of LDP-enabled interfaces on the remote peer.

LDP Neighbor Information

Detailed neighbor information for each peer is displayed with:

RS# ldp show neighbor verbose

The output shows: the address where the neighbor was discovered and the interface used to reach it; the label space ID, indicating the LDP identifier and the label space the label was issued from (:0 being the global space); the time before the session expires without a keepalive; and the transport address used for establishing the session.

Figure 2.18 is a graphic representation of the text command show LDP neighbor.


Figure 2.18: Show LDP Neighbor

LDP Statistical Information

Statistical information about the LDP protocol is broken into two sections, each with a cumulative and a 5-second representation. The tables are self-explanatory. One thing to note: If the statistics are cleared, all the cumulative information is lost. When you’re reviewing the statistics following a clear, the “Event Type - Sessions Opened” field may be zero even though there are open sessions. Don’t let this field mislead you into thinking no sessions are formed; the session display command is the authority on session-related information.

RS# ldp show statistics

LDP Interface Information

A detailed view of the LDP interfaces indicates the following for each LDP-enabled interface:

RS# ldp show interface all verbose

The output shows: the label space, indicating the LDP identifier and the label space the label was issued from (:0 being the global space); the number of neighbor sessions that exist on the interface; timer information; and label management (retention and distribution).

[1]Provided by Riverstone.

Exercise 2.1: Control—Ordered or Not?

In this case study, you will choose the better of two solutions for your network and justify your choice.


1.  Consider the following scenario. Your team is a group of highly paid consultants for a small, developing country. MPLS technology has been selected because of its traffic-engineering capabilities. This system will require traffic engineering to help manage increases in traffic as the network grows.

On the Web, research the benefits and drawbacks of ordered control vs. non-ordered control; then recommend a solution for the LDP protocol. What are your recommendations?

2.  Now do the following.
a. List your references.
b. List the advantages of ordered control.
c. List the disadvantages of ordered control.
d. List the advantages of non-ordered control.
e. List the disadvantages of non-ordered control.

Answers

1.  This group exercise and case study were designed for real-time study. The information listed below may change over time, so it is recommended that each student of this course conduct the research independently.

At the time of this writing, ordered control vs. independent control is no longer a contested issue, much as cut-through vs. store-and-forward switching is no longer an issue. The technology has advanced significantly since the start of MPLS. Currently, several manufacturers offer independent control for start-up and ordered control after the network links are established.

2. a. http://cell-relay.indiana.edu/mhonarc/mpls/2000-Jan/msg00144.html

www.cis.ohio-state.edu/~jain/talks/ftp/mpls_te/sld018.htm

http://rfc-3353.rfc-index.net/rfc-3353-23.htm

https://dooka.canet4.net/c3_irr/mpls/sld034.htm

http://course.ie.cuhk.edu.hk/~ine3010/lectures/3010_7.ppt

b. Traffic engineering
c. Slower setup speed
d. Setup speed
e. Loss of traffic engineering

Exercise 2.2: Label Distribution


In this exercise, you will work with RFC 3036 and translate LDP messages. Use RFC 3036 to find the correct answers.

1.  Match the correct message type number in hex to the correct title of the message by recording the correct number in the space provided. Message type numbers available for selection are: 0001, 0100, 0200, 0201, 0300, 0301, 0400, 0401, 0402, 0403, 0404

________ Address Message
________ Address Withdraw
________ Hello
________ Initialization
________ Keep Alive
________ Label Abort Request
________ Label Release
________ Label Request
________ Label Withdraw Message
________ Labels (Series)
________ Notification

2.  Type length values (TLVs) are a subset of LDP messages. Match the correct TLV number in hex to the correct title of the TLV by recording the correct number in the space provided. TLV numbers available for selection are: 0101, 0103, 0104, 0201, 0202, 0300, 0400

______ ADDRESS LIST
______ ATM
______ FRAME RELAY
______ HOP COUNT
______ KEEP ALIVE
______ PATH LINK
______ STATUS

3.  In the hello message in Figure 2.19, fill in the message type number and the TLV number.

Figure 2.19: Hello Message for Exercise 2.2

Answers

1.  See Section 3.7 of RFC 3036.

0300  Address Message

0301  Address Withdraw

0100  Hello

0200  Initialization


0201  Keep Alive

0404  Label Abort Request

0403  Label Release

0401  Label Request

0402  Label Withdraw Message

0400  Labels (Series)

0001  Notification

2.  Please refer to the tables in the following answer explanation.

0101  ADDRESS LIST

0201  ATM

0202  FRAME RELAY

0103  HOP COUNT

0400  KEEP ALIVE

0104  PATH LINK

0300  STATUS

The following tables are from the reference sheets for RFC 3036.

Message Name          Type           Section Title
Notification          0x0001         Notification Message
Hello                 0x0100         Hello Message
Initialization        0x0200         Initialization Message
KeepAlive             0x0201         KeepAlive Message
Address               0x0300         Address Message
Address Withdraw      0x0301         Address Withdraw Message
Label Mapping         0x0400         Label Mapping Message
Label Request         0x0401         Label Request Message
Label Withdraw        0x0402         Label Withdraw Message
Label Release         0x0403         Label Release Message
Label Abort Request   0x0404         Label Abort Request Message
Vendor-Private        0x3E00-0x3EFF  LDP Vendor-private Extensions
Experimental          0x3F00-0x3FFF  LDP Experimental Extensions
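The message-type table lends itself to a simple decoder. The codes below are the RFC 3036 values from the reference table above; the function name is ours.

```python
# LDP message type codes from RFC 3036 (see the reference table above).
LDP_MESSAGE_TYPES = {
    0x0001: "Notification",
    0x0100: "Hello",
    0x0200: "Initialization",
    0x0201: "KeepAlive",
    0x0300: "Address",
    0x0301: "Address Withdraw",
    0x0400: "Label Mapping",
    0x0401: "Label Request",
    0x0402: "Label Withdraw",
    0x0403: "Label Release",
    0x0404: "Label Abort Request",
}

def message_name(type_code):
    """Map an LDP message type code to its name, handling the reserved ranges."""
    if 0x3E00 <= type_code <= 0x3EFF:
        return "Vendor-Private"
    if 0x3F00 <= type_code <= 0x3FFF:
        return "Experimental"
    return LDP_MESSAGE_TYPES.get(type_code, "Unknown")

print(message_name(0x0100))  # -> Hello
```

A protocol analyzer does essentially this lookup when it labels each LDP message in a capture.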


3. TLV Summary

TLV                      Type           Section Title
FEC                      0x0100         FEC TLV
Address List             0x0101         Address List TLV
Hop Count                0x0103         Hop Count TLV
Path Vector              0x0104         Path Vector TLV
Generic Label            0x0200         Generic Label TLV
ATM Label                0x0201         ATM Label TLV
Frame Relay Label        0x0202         Frame Relay Label TLV
Status                   0x0300         Status TLV
Extended Status          0x0301         Notification Message
Returned PDU             0x0302         Notification Message
Returned Message         0x0303         Notification Message
Common Hello             0x0400         Hello Message
IPv4 Transport Address   0x0401         Hello Message
Configuration            0x0402         Hello Message
IPv6 Transport Address   0x0403         Hello Message
Common Session           0x0500         Initialization Message
ATM Session Parameters   0x0501         Initialization Message
Frame Relay Session      0x0502         Initialization Message
Label Request            0x0600         Label Mapping Message
Vendor-Private           0x3E00-0x3EFF  LDP Vendor-private Extensions

Chapter Summary and Review

In this chapter, we saw that one of several protocols can be used to dynamically program switches in order to build and implement cross-connect tables. We compared and contrasted various aspects of those protocols and examined the tradeoffs in each.


Knowledge Review 

Answer the following questions in the spaces provided.

1. What does a cross-connect table allow a router to do?
2. What are the two methods used to load cross-connect tables?
3. Within the ordered control mode of label distribution, two primary methods are used to trigger the distribution of labels. What are these two methods?
4. What is the component of an MPLS network that creates cross-connect tables?

Answers: 1. Process packets with incoming labels; 2. independent control and ordered control; 3. downstream unsolicited (DOU) and downstream on demand (DOD); 4. label switch router (LSR).

Going Further

Read how the IETF is considering discontinuing work on CR-LDP drafts:

CR-LDP vs. RSVP-TE: www.dataconnection.com/download/crldprsvp.pdf


George Mason University: www.gmu.edu/news/release/mpls.html

Network Training: www.globalknowledge.com/

MPLS links page: www.rickgallaher.com/mplslinks.htm

MPLS Resource Center: http://MPLSRC.COM

RSVP: www.juniper.net/techcenter/techpapers/200006-08.html

Chapter 3: MPLS Signaling

Introduction


In this chapter, we explore the fundamentals of MPLS signaling, the history of signaling, call setup procedures, traffic control measures, and the advantages and disadvantages of leading signaling and traffic control protocols. The chapter includes applications, examples, hands-on exercises, and resource links to augment the information presented.

Introduction to MPLS Signaling

Your commute to work every day is a long one, and it seems to take forever with all the congestion that you encounter. New lanes have recently been added to the highway, but they are reserved as express lanes. Sure, they would cut your travel time in half, but to use them you would have to carry extra passengers. You decide to try it; you decide to carry four additional passengers so that you can use the express lanes.

The four passengers do not cost much more to transport than yourself alone. They let you use the express lanes, increasing your speed markedly, and they reduce your exposure to the unpredictable, uncorrectable behavior of the routine traffic.

One day, you enter the express lanes and find that they are all mired in bumper-to-bumper congestion (see Figure 3.1). You are angry, of course, because you were guaranteed use of these lanes as express lanes, yet you are confronted with the same routine traffic you faced every day in the regular lanes. As you slowly make your way down the road, you see that construction has closed the routine lanes and diverted the traffic to your express lanes. So, what good is it to operate under this arrangement if regular traffic is simply going to be diverted onto your express lanes?

Figure 3.1: Backed-Up Express Lane

Traffic Control in MPLS Networks

In networking, MPLS is express traffic that carries four additional bytes of header (the label). For taking the effort to carry that extra data, it gets to travel the “express lanes.” But, as is too often the case with the actual freeway, the nice, smooth-running express lane that you’ve earned the right to use is subjected to the presence of rerouted routine traffic, bringing you the congestion and slowdowns that you’ve worked to avoid.

Remember that MPLS is an overlay protocol that applies MPLS traffic to a routine IP network. The self-healing properties of IP may cause congestion on your express lanes; there is no accounting for unforeseen traffic accidents and reroutes of routine traffic onto the express lanes. The Internet is self-healing and can reroute around failures, but the question that arises is this: How do users ensure that the paths and bandwidth reserved for their packets do not get overrun by rerouted traffic?
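The “four additional bytes” mentioned above are the MPLS label stack entry defined in RFC 3032: a 20-bit label, 3 experimental bits, a bottom-of-stack flag, and an 8-bit TTL. A minimal sketch of packing and unpacking one entry:

```python
import struct

# MPLS label stack entry (RFC 3032), the 4-byte "shim" between the
# link-layer and IP headers: label(20) | EXP(3) | S(1) | TTL(8).
def pack_shim(label, exp, bottom, ttl):
    word = (label << 12) | (exp << 9) | (int(bottom) << 8) | ttl
    return struct.pack("!I", word)  # network byte order, 4 bytes

def unpack_shim(data):
    (word,) = struct.unpack("!I", data)
    return word >> 12, (word >> 9) & 0x7, bool((word >> 8) & 0x1), word & 0xFF

shim = pack_shim(label=1450, exp=5, bottom=True, ttl=64)
assert len(shim) == 4                           # exactly four bytes of overhead
assert unpack_shim(shim) == (1450, 5, True, 64)
```

Those four bytes are the entire per-packet cost of riding the MPLS express lane.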

In Figure 3.2, we see a standard MPLS network with three different paths across the wide area network (WAN). Path A is engineered so that peak busy-hour traffic uses 90 percent of its bandwidth; Path B is engineered to 100 percent of peak busy-hour bandwidth; finally, Path C is engineered to 120 percent of peak busy hour. In theory, Path A will never have to contend with congestion, owing to sound network design (including traffic engineering); in other words, the road is engineered to take more traffic than it will receive during rush hour. Path C, however, will experience traffic jams during rush hour, because it is not designed to handle peak traffic conditions.

Figure 3.2: MPLS with Three Paths
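The provisioning levels of the three paths can be restated as peak-hour utilization. The percentages simply repeat the 90/100/120 figures from the text; everything else here is illustrative.

```python
# Peak busy-hour traffic as a percentage of each path's capacity
# (the 90/100/120 engineering targets of Figure 3.2).
paths = {"A": 90, "B": 100, "C": 120}

for name, pct in paths.items():
    # A path loaded beyond its capacity at peak must queue or drop packets.
    status = "congested at peak" if pct > 100 else "no congestion expected"
    print(f"Path {name}: {pct}% of capacity -> {status}")
```

Only Path C exceeds its capacity at peak, which is why its QoS (jitter and loss) is unpredictable while Path A's stays consistent.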

The QoS in Path C will have some level of unpredictability regarding both jitter and dropped packets, whereas the traffic on Path A should have consistent QoS measurements.

In Figure 3.3, we see a network failure in Path C, and the traffic is rerouted (see Figure 3.4) onto an available path (Path A). Under these conditions, Path A is subjected to a loss of QoS criteria. To attain real QoS, there must be a method for controlling both traffic on the paths and the percentage of traffic that is allowed onto every engineered path.


Figure 3.3: MPLS with a Failed Path C

Figure 3.4: MPLS with Congestion Caused by a Reroute

To help overcome the problems of rerouting congestion, the Internet Engineering Task Force (IETF) and related working groups have looked at several possible solutions. This problem had to be addressed both in protocols and in the software systems built into the routers.

In order to have full QoS, a system must be able to mark, classify, and police traffic. In previous chapters, we have seen how MPLS can classify and mark packets with labels, but the policing function has been missing. Routing and label distribution establish the LSPs but still do not police traffic and control the load factors on each link.

New software engines (see Figure 3.5), which add management modules between the routing functions and the path selector, allow for the policing and management of bandwidth. These functions, along with the addition of two protocols, allow for traffic policing.


Figure 3.5: MPLS Routing State Machines

The two protocols that give MPLS the ability to police traffic and control loads are RSVP-TE and CR-LDP.

RSVP-TE

The concept of a call setup process, wherein resources are reserved before calls are established, goes back to the signaling theory days of telephony. This concept was adapted to data networking when QoS became an issue.

In 1997 the IETF designed an early method for this function, called Resource Reservation Protocol (RSVP). The protocol was designed to request required bandwidth and traffic conditions on a defined or explicit path. If bandwidth was available under the stated conditions, the link would be established.

The link was established with three types of traffic that were similar to first-class, second-class, and standby air travel; the paths were called, respectively, guaranteed load, controlled load, and best-effort load.

RSVP with features added to accommodate MPLS traffic engineering is called RSVP-TE. The traffic engineering functions allow for the management of MPLS labels or colors.

In Figures 3.6 and 3.7, we see how a call or path is arranged between two endpoints. The client station requests a specific path, with detailed traffic conditions and treatment parameters included in the path request message. This message is received at the application server. The application server sends back a reservation to the client, reserving bandwidth on the network. After the first reservation message is received at the client, the data can start to flow in explicit paths from end to end.


Figure 3.6: RSVP-TE PathRequest

Figure 3.7: RSVP-TE Reservation

This call setup (or “signaling”) process is called soft state because the call will be torn down if it is not refreshed in accordance with the refresh timers. In Figure 3.8, we see that path-request and reservation messages continue for as long as the data is flowing.


Figure 3.8: RSVP-TE Path Setup

Some early arguments against RSVP included the problem of scalability: the more paths that were established, the more refresh messages would be created, and the network would soon become overloaded with refresh messages. Methods of addressing this problem include keeping traffic links and paths from becoming too granular and aggregating paths.
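The soft-state behavior behind this scalability concern can be sketched as a refresh timer: state survives only while refresh messages keep arriving. The 30-second timeout and the class name are illustrative, not RSVP-TE defaults.

```python
# Sketch of soft-state signaling: a reservation is torn down unless it is
# refreshed within the timeout. The timeout value is illustrative only.
REFRESH_TIMEOUT = 30.0  # seconds without a refresh before teardown

class SoftStatePath:
    def __init__(self, now):
        self.last_refresh = now

    def refresh(self, now):
        self.last_refresh = now          # a Path/Resv refresh arrived

    def alive(self, now):
        return (now - self.last_refresh) < REFRESH_TIMEOUT

path = SoftStatePath(now=0.0)
path.refresh(now=25.0)
assert path.alive(now=50.0)      # refreshed in time: state is kept
assert not path.alive(now=90.0)  # no refresh: the path times out
```

Every active LSP needs its own stream of refreshes, so the message load grows with the number of paths, which is exactly the scaling objection described above.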

The details of an RSVP-TE path request and reservation can be viewed in the sample MPLS-TE capture files on the Ethereal.com Web site. In the capture shown in Figure 3.9, we can see the traffic specification (TSPEC) for the controlled load.

Figure 3.9: RSVP-TE Details

CR-LDP

With Constraint-based Routing Label Distribution Protocol (CR-LDP), modifications were made to the LDP protocol to allow for traffic specifications. The impetus for this design was the need to use an existing protocol (LDP) and give it traffic engineering capabilities. Nortel Networks made a major effort to launch the CR-LDP protocol.

The CR-LDP protocol adds fields to the LDP protocol. They are called peak, committed, and excess-data rates—terms very similar to those used for ATM networks. The frame format is shown in Figure 3.10.


Figure 3.10: CR-LDP Frame Format

The call setup procedure for CR-LDP is a very simple two-step process, involving a request and a map (as shown in Figure 3.11). The reason for the simple setup is that CR-LDP is a hard-state protocol—meaning that, once established, the call, link, or path will not be broken down until a termination is requested.

Figure 3.11: CR-LDP Call Setup

The major advantage of a hard-state protocol is that it can and should be more scalable because less “chatter” is required in keeping the link active.

Comparing CR-LDP to RSVP-TE

The technical comparisons of the CR-LDP and RSVP-TE protocols are listed in Table 3.1. We see that CR-LDP uses the LDP protocol as its carrier, whereas RSVP-TE uses the RSVP protocol. RSVP is typically paired with IntServ QoS definitions, whereas CR-LDP uses ATM traffic engineering terms to map QoS.

Table 3.1: CR-LDP vs. RSVP-TE


Checkpoint  Answer the following true/false questions.

1. Under-provisioning does not affect QoS measurements.
2. RSVP attempts to reroute or redirect traffic in the event of congestion.
3. “Soft-state” signaling requires timed refresh messages.
4. CR-LDP allows for traffic engineering without adding fields to LDP.

Answers: 1. False; 2. true; 3. true; 4. false.


Practical Applications: Signaling Different Types of RSVP-TE Paths

The command syntax may look complex, but adding RSVP is as simple as adding LDP. The important thing to note here is that LDP and RSVP cannot run on the same interface.

In Chapter 2, through Figure 2.15, we completed the steps of defining interfaces, adding OSPF, and adding MPLS to the interfaces. We finished the configuration by adding LDP in Figure 2.16.

Instead of adding LDP now, we are going to add RSVP. We accomplish this in three simple steps:

1. Create the path.
2. Add RSVP to each interface.
3. Start RSVP.

Riverstone graciously provided the following demonstrations as an example of RSVP setup.

Extending RSVP for MPLS Networks

RSVP-TE is a standards-track protocol defined by RFC 3209, “RSVP-TE: Extensions to RSVP for LSP Tunnels.” The applicability statement for RSVP-TE is given in RFC 3210.

Signaling a Path Using RSVP-TE

RSVP-TE can be used to signal hop-by-hop and explicit paths through an MPLS network. Once the network is MPLS ready and the link-state routing protocol has been deployed, with or without traffic engineering extensions, a dynamically signaled LSP can be established simply by configuring the instantiating router. Traffic engineering can be applied to either of these signaling approaches. Creating an RSVP path through a network is a rather simple process.

Hop by Hop

The hop-by-hop method determines a path through the network based on the interior gateway protocol’s view of the network. If no constraints are applied to the LSP, the instantiating router simply sends the request for a path to the active next hop for that destination, with no explicit routing. The IGP at each router is free to select active next hops based on the link state database. In the event of path failure, such as a link failure somewhere in the network, the hop-by-hop method will eventually establish a path around the failure based on updated link state database information. Reoptimization is under development on the RS platform.

To create a simple hop-by-hop path, use the command shown in Figure 3.12. More specific commands are shown in Figures 3.13 and 3.14. In this example, we continue to build upon the network that we created in Chapter 2. After reviewing the previously covered commands in Figure 3.14, we go on to build the network one step at a time by adding MPLS and RSVP to the path (see Figure 3.15).

Figure 3.12: Simple RSVP Command Overview

Figure 3.13: RSVP Path Request

Figure 3.14: Previously Covered Commands

Figure 3.15: Show LSP ALL

The sample network in Figure 3.13 shows how an instantiating router requests a hop-by-hop, end-to-end RSVP path through the MPLS network to a destination, without any constraints or resource requirements.

The northernmost router represented the active next hop for the destination, so the instantiating router followed its forwarding information base (FIB) and sent the RSVP request to that next hop. The result: The IGP used the shortest path between the edge routers over which to signal and establish the path.

RSVP Hop-by-Hop Show Commands

Figure 3.15 shows the detail of an MPLS show label-switched-path command. This command is used to display high-level LSP information, including start and end points, state, and labels used.

A more detailed view of the LSP information can be found using the verbose option, as shown in Figure 3.16. This includes the various session attributes for the LSP and the associated path information. The path information includes path attributes, labels associated with the LSP, timers, resource constraints, and confirmation of the path the LSP has taken through the MPLS network (record-route).

Figure 3.16: Show LSP Verbose

The same display commands can be used on the transit router for this LSP. Remember, an outbound label of 3 indicates that penultimate hop pop is performed on the router preceding the last router in the LSP. When this is done, the router makes its forwarding decision based on the inbound label, sending the packet to the next hop without applying a new top-level label on the outbound side.
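The pop-versus-swap decision can be sketched as follows; label 3 (implicit null) is the real reserved value, while the other label numbers are invented for illustration:

```python
# Sketch of the penultimate-hop-pop decision described above. An outbound
# label of 3 (implicit null) tells the penultimate router to pop the top
# label instead of swapping, so no new top label goes on the wire.
IMPLICIT_NULL = 3

def outgoing_stack(out_label, lower_labels):
    """Return the label stack forwarded to the next hop.

    lower_labels is whatever sits beneath the top label (often empty).
    """
    if out_label == IMPLICIT_NULL:
        return list(lower_labels)            # pop: penultimate hop pop
    return [out_label] + list(lower_labels)  # normal swap

outgoing_stack(17, [])  # a normal swap pushes label 17
outgoing_stack(3, [])   # PHP: the packet leaves unlabeled
```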

Explicit Route Objects

The hop-by-hop method allows the IGP to select the path through the network. However, many benefits can be realized by having the instantiating router dictate the hops an LSP will traverse. The explicit route object (ERO) carries the list of routers that comprise the most suitable path through the MPLS network. This is analogous to source routing, where the instantiating router dictates, either in whole or in part, the path through the network.

The ERO object may contain two kinds of explicit routes: strict or loose hops. A strict hop indicates that the two nodes must be adjacent to one another, with no intermediate hops separating them. A loose hop indicates that the nodes do not have to be adjacent to each other and the IGP can be used to determine the best path to the loose hop. This allows the router building the ERO to apply some abstract level of configuration, indicating that the path needs to traverse a particular router without dictating how to reach that hop. By default, any hop specified as part of the ERO is strict unless otherwise configured as loose. Information contained in the ERO is stored in the path state block for each router. Currently, implementations on the RS platform support loose and strict routing in the form of IP addresses.
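The strict/loose distinction can be sketched as a small validity check; the topology and the hop lists below are invented for illustration:

```python
# Toy check of the strict/loose rule described above: a strict hop must be
# directly adjacent to the previous hop, while a loose hop only needs to be
# reachable, with the IGP free to pick the intermediate routers.
# The topology and hop lists are invented for illustration.

ADJACENCY = {  # hypothetical direct links
    "LER1": {"LSR1"},
    "LSR1": {"LER1", "LSR2", "LSR3"},
    "LSR2": {"LSR1", "LER2"},
    "LSR3": {"LSR1", "LER2"},
    "LER2": {"LSR2", "LSR3"},
}

def ero_satisfiable(start, hops):
    """hops: list of (node, kind) pairs, kind being 'strict' or 'loose'."""
    current = start
    for node, kind in hops:
        if kind == "strict" and node not in ADJACENCY[current]:
            return False  # strict: no intermediate hops may separate them
        current = node    # loose: assume the IGP can find a route
    return True

ero_satisfiable("LER1", [("LSR1", "strict"), ("LER2", "loose")])  # OK
ero_satisfiable("LER1", [("LSR2", "strict")])                     # not adjacent
```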

The Internet draft defines the fields of the ERO subobject as shown in Figure 3.17 and described in the following list.

Figure 3.17: ERO Subobject Fields

L: The disposition of the particular hop. A value of 0 indicates the subobject is strict; this is the default if the configuration omits the type for this hop. A value of 1 indicates the hop is loose.

Type: A 7-bit field indicating the type of the subobject’s contents (see Figure 3.18). The draft currently defines four reserved values. Of these, Riverstone supports IP addressing.

Figure 3.18: Four Reserved Values

Length: An 8-bit field that represents the number of bytes in the entire subobject, inclusive of all fields.

Subobject Contents: The addressing information specific to the type. A minimum of 2 bytes represents the smallest possible type field, AS Number.
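Packing the header fields just listed can be sketched in a few lines; the field widths follow the text, while the example values mirror the listings later in this chapter rather than any RFC-assigned codes:

```python
# Minimal sketch of packing the ERO subobject header described above: a
# 1-bit L flag sharing the first byte with the 7-bit type, followed by an
# 8-bit length. Field widths follow the text; the example values are
# illustrative only.
import struct

def pack_subobject_header(loose, subtype, length):
    """Pack the first two bytes of an ERO subobject."""
    if not 0 <= subtype < 128:
        raise ValueError("type is a 7-bit field")
    first = ((1 if loose else 0) << 7) | subtype  # L bit is the top bit
    return struct.pack("!BB", first, length)

pack_subobject_header(loose=False, subtype=4, length=64)  # bytes 0x04, 0x40
```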

Configuring an explicit route on the RS platform is done by creating a path with a specified number of hops, defining those hops with their disposition, strict or loose, and associating that path to an LSP as primary or secondary.

Note: If the path is created without specifying any number of hops, the interior gateway protocol determines the active next hop for the destination and sends the request to that node. It is equivalent to creating a hop-by-hop path, with no explicit route.

To create the path with an explicit route:

RS(config)# mpls create path <name> num-hops <number>

To define the hops for a created path:

RS(config)# mpls set path <name> hop <number> ip-addr <ip-address> type <strict|loose>

Associating a path to an LSP:

RS(config)# mpls set label-switched-path <name> primary|secondary <path name>

A Strict Routing Example

Here we configure a completely strict route from ingress (LER1) to egress (LER2). An overview of this network is shown in Figure 3.19. Figure 3.20 shows the commands in standalone detail. Notice how every router must be specified and that the routing is strict. Note: The items in bold are replacement items for RSVP or LDP commands.

Figure 3.19: Unique RSVP Strict Path Commands

Figure 3.20: Unique RSVP Strict Path Commands

The resulting subobjects of the ERO would look like this:

L=0;Type=4;Length=64;Contents=192.168.1.6 (instantiating router interface)

L=0;Type=4;Length=64;Contents=192.168.1.5

L=0;Type=4;Length=64;Contents=192.168.1.26

L=0;Type=4;Length=64;Contents=192.168.1.22

Explicit route data is logged in the “explicit-path” field of the LSP information.

LER1# mpls show label-switched-paths all verbose

Ingress LSP:

Label-Switched-Path: "LSP"

state: Up lsp-id: 0x9

status: Success

to: 2.2.2.2 from: 2.2.2.1

proto: <rsvp> protection: primary

setup-pri: 7 hold-pri: 0

attributes: <FROM_ADDR PRI>

Protection-Path "ERO-Path1": <Active, Primary>

state: Up lsp-id: 0x4002

status: Success

attributes: <>

inherited-attributes: <>

Path-Signaling-Parameters:

attributes: <>

inherited-attributes: <NO-CSPF>

label in: label out: 17

retry-limit: 5000 retry-int: 15 sec.

retry-count: 5000 next_retry_int: 0.000000 sec.

preference: 7 metric: 1

ott-index: 1 ref-count: 1

bps: 0 mtu: 1500

hop-limit: 255 opt-int: 600 sec.

explicit-path: "ERO-Path1" num-hops: 4

192.168.1.6 - strict

192.168.1.5 - strict

192.168.1.26 - strict

192.168.1.22 - strict

record-route:

192.168.1.5

192.168.1.26

192.168.1.22

Transit LSP:

Egress LSP:

This example demonstrates a slightly different approach, using loose hops and the loopback interfaces on the routers instead of physical interface addressing. In this example, all traffic is forced through a single router, LSR3, by coding one of the loose hops as the IP address of that router’s loopback interface. Also notice that the first hop and last hop are actually the tunnel end points. In essence, the path can be established across any links, as long as one of the transit nodes is LSR3. This provides a certain level of link failure protection but still leaves a single point of failure, should LSR3 become unusable. Loose and strict routes may be combined in a path message, allowing for the description of a path that must pass through some hops in a specific point-to-point fashion (strict) while other parts of the path are derived by the IGP (loose). In Figures 3.19 and 3.20, we saw an example of strict RSVP routing. Notice how this has changed to loose routing in Figures 3.21 and 3.22.

Figure 3.21: RSVP Loose Routing (View 1)

Figure 3.22: RSVP with Loose Routing (View 2)

interface create ip To-LSR1 address-netmask 192.168.1.2/30 port gi.2.1

interface create ip To-LSR2 address-netmask 192.168.1.6/30 port gi.2.2

interface add ip lo0 address-netmask 2.2.2.1/32

ip-router global set router-id 2.2.2.1

ospf create area backbone

ospf add interface To-LSR1 to-area backbone

ospf add interface To-LSR2 to-area backbone

ospf add stub-host 2.2.2.1 to-area backbone cost 10

ospf start

mpls add interface To-LSR1

mpls add interface To-LSR2

mpls create path To-LER2-Prime num-hops 3

mpls set path To-LER2-Prime hop 1 ip-addr 2.2.2.1 type strict

mpls set path To-LER2-Prime hop 2 ip-addr 1.1.1.3 type loose

mpls set path To-LER2-Prime hop 3 ip-addr 2.2.2.2 type loose

mpls create label-switched-path To-LER2-1 from 2.2.2.1 to 2.2.2.2 no-cspf

mpls set label-switched-path To-LER2-1 primary To-LER2-Prime

mpls start

rsvp add interface To-LSR1

rsvp add interface To-LSR2

rsvp start

ospf set traffic-engineering on

The resulting subobjects of the ERO would look like this:

L=0;Type=4;Length=64;Contents=2.2.2.1 (instantiating router interface)

L=1;Type=4;Length=64;Contents=1.1.1.1

L=1;Type=4;Length=64;Contents=2.2.2.2

Looking at the LSP information reveals the differences between the two approaches. The completely strict explicit route was a kind of “this way or no way” approach to signaling and establishing the LSP. If any node or link specified in the explicit path failed, that path statement would fail the LSP. The loose approach provides slightly more resilience at the expense of complete control. Should the path be established and a node or link that the LSP was using fail, the instantiating router would determine a new path through the network, signal, and establish it.

LER1# mpls show label-switched-paths all verbose

Ingress LSP:

Label-Switched-Path: "LSP"

state: Up lsp-id: 0x9

status: Success

to: 2.2.2.2 from: 2.2.2.1

proto: <rsvp> protection: primary

setup-pri: 7 hold-pri: 0

attributes: <FROM_ADDR PRI>

Protection-Path "ERO-Path1": <Active, Primary>

state: Up lsp-id: 0x4003

status: Success

attributes: <>

inherited-attributes: <>

Path-Signaling-Parameters:

attributes: <>

inherited-attributes: <NO-CSPF>

label in: label out: 17

retry-limit: 5000 retry-int: 15 sec.

retry-count: 5000 next_retry_int: 0.000000 sec.

preference: 7 metric: 1

ott-index: 1 ref-count: 1

bps: 0 mtu: 1500

hop-limit: 255 opt-int: 600 sec.

explicit-path: "ERO-Path1" num-hops: 3

2.2.2.1 - strict

1.1.1.3 - loose

2.2.2.2 - loose

record-route:

192.168.1.1

192.168.1.14

192.168.1.22

Transit LSP:

Egress LSP:

The actual explicit route object is built using a combination of the IGP forwarding table and the manually configured hops. For example, the first hop in the path was derived from the forwarding table based on the next best hop for the loose route of 1.1.1.3. Similarly, the next best hop from 1.1.1.3 to 2.2.2.2 was also determined from the routing table. Configuration 1 is shown in Figure 3.23.
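The expansion just described can be sketched with a toy IGP lookup; the topology, link costs, and the plain-Dijkstra helper are invented for illustration and stand in for the router’s real forwarding table:

```python
# Sketch of how an instantiating router might expand loose hops into a
# concrete route using its IGP view, as described above. The topology,
# addresses, and costs are invented for illustration.
from heapq import heappush, heappop

GRAPH = {  # hypothetical link costs between loopback addresses
    "2.2.2.1": {"1.1.1.1": 1, "1.1.1.2": 1},
    "1.1.1.1": {"2.2.2.1": 1, "1.1.1.3": 1},
    "1.1.1.2": {"2.2.2.1": 1, "1.1.1.3": 2},
    "1.1.1.3": {"1.1.1.1": 1, "1.1.1.2": 2, "2.2.2.2": 1},
    "2.2.2.2": {"1.1.1.3": 1},
}

def shortest_path(src, dst):
    """Plain Dijkstra over GRAPH; returns the node list src..dst."""
    heap, seen = [(0, src, [src])], set()
    while heap:
        cost, node, path = heappop(heap)
        if node == dst:
            return path
        if node in seen:
            continue
        seen.add(node)
        for nbr, w in GRAPH[node].items():
            heappush(heap, (cost + w, nbr, path + [nbr]))
    return None

def expand(loose_hops):
    """Stitch IGP shortest paths between consecutive loose hops."""
    full = [loose_hops[0]]
    for nxt in loose_hops[1:]:
        full += shortest_path(full[-1], nxt)[1:]
    return full

expand(["2.2.2.1", "1.1.1.3", "2.2.2.2"])  # fills in the transit hop
```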

Figure 3.23: Configuration Example 1

Assuming there was a failure, as long as the loopback interfaces specified as part of the ERO are reachable, the IGP will converge and the path will be established across an alternate set of transit routers. An example is presented in Figure 3.24, Configuration 2.

Figure 3.24: Configuration Example 2

LER1# mpls show label-switched-paths all verbose

Ingress LSP:

Label-Switched-Path: "LSP"

state: Up lsp-id: 0x9

status: Success

to: 2.2.2.2 from: 2.2.2.1

proto: <rsvp> protection: primary

setup-pri: 7 hold-pri: 0

attributes: <FROM_ADDR PRI>

Protection-Path "ERO-Path1": <Active, Primary>

state: Up lsp-id: 0x4003

status: Success

attributes: <>

inherited-attributes: <>

Path-Signalling-Parameters:

attributes: <>

inherited-attributes: <NO-CSPF>

label in: label out: 17

retry-limit: 5000 retry-int: 15 sec.

retry-count: 5000 next_retry_int: 0.000000 sec.

preference: 7 metric: 1

ott-index: 1 ref-count: 1

bps: 0 mtu: 1500

hop-limit: 255 opt-int: 600 sec.

explicit-path: "ERO-Path1" num-hops: 3

2.2.2.1 - strict

1.1.1.3 - loose

2.2.2.2 - loose

record-route:

192.168.1.5

192.168.1.26

192.168.1.22

Transit LSP:

Egress LSP:

Exercise 3.1: Decode the RSVP-TE Message

There are several ways to complete this lab. The exercise itself is written in standalone form so that you do not need any products to complete the exercises. Just skip the hands-on block.

Hands-On 

If you are the “hands-on” type and you want to see MPLS packets on a protocol analyzer, you will need two items of software: a copy of Ethereal (www.ethereal.com) and the sample (www.ethereal.com/sample) called MPLS-TE. (If you do not have a link to the Internet, you can find these captures in the appendix.)

If this is the only protocol analyzer present on your computer, you can open the file called MPLS_TE by clicking on it. If you have another protocol analyzer, you will have to open the Ethereal program and open the file from the menu:

1. From your desktop, go to Start | Programs; find and double-click Ethereal.
2. Once the Ethereal program opens, open the file called MPLS_TE.
3. Wait for the file to open. It will take a few minutes.
4. Find frames 3 and 4 (RSVP).
5. Follow the steps in the lab.

RSVP Path Request

In this portion of the lab, you will review RSVP-TE and look for the path and the label request.

1.  Look at Frame 3 in Figure 3.25.

Figure 3.25: RSVP Overview

2.  Find and highlight the strict routing path in Figure 3.26.

Figure 3.26: RSVP Detail

3.  For what type of traffic is the label requested?

4.  What is the C-Type on the request?

Answers

1. 

2.  210.0.0.2

204.0.0.1

207.0.0.1

202.0.0.1

201.0.0.1

200.0.0.1

16.2.2.2

3.  19 Label Request Object

4.  1

RSVP Reservation Request

In this portion of the lab, you will review RSVP-TE and look for both the RSVP reservation type and the assigned label.

1. Look at Frame 4 in Figure 3.27 and in detail in Figure 3.28.

In the industry today, we find that, while Cisco and Juniper favor the RSVP-TE model and Nortel favors the CR-LDP model, both signaling protocols are supported by most vendors.

The jury is still very much out as to the scalability, recovery, and interoperability between the signaling protocols; however, it appears from the sidelines that the RSVP-TE protocol may be in the lead. This is not because it is the less “chatty” or the more robust of the two; it is due more to the fact that RSVP was an established protocol, with most of its bugs removed, prior to the inception of MPLS. Both protocols remain the topics of study by major universities and vendors. In the months to come, we will see test results and market domination affect the continued emergence of these protocols.

Knowledge Review 

Answer the following questions.
1. What signaling protocol was designed to request the required bandwidth and traffic conditions on a defined or explicit path?
2. What is one of the primary concerns in using RSVP-TE?
3. What is the primary advantage that hard-state signaling protocols offer over soft-state signaling protocols?
4. In RSVP, a “best-effort load” equates to which of the following classes of travel: first-class, second-class, or standby?

Answers: 1. Resource Reservation Protocol (RSVP); 2. scalability; 3. greater potential in scalability; 4. standby.

Going Further

CR-LDP vs. RSVP-TE: www.dataconnection.com/download/crldprsvp.pdf

George Mason University: www.gmu.edu/news/release/mpls.html

MPLS links page: www.rickgallaher.com/mplslinks.htm

MPLS Resource Center: http://MPLSRC.COM

www.sce.carleton.ca/courses/94581/student_projects/LDP_RSVP.PDF

www.sce.carleton.ca/courses/94581/student_projects/LDP_IntServ.PDF

Chapter 4: MPLS Network Reliance and Recovery

Introduction

It is the dream of every provider (and every customer) to have a failure-free network, but that’s only the tip of the iceberg. That ideal doesn’t translate easily into reality, and some careful considerations of effectiveness vs. affordability must be made. Additionally, the degree to which a network is equipped to handle and recover from failures is as important as safeguarding a network from failures in the first place. In this chapter, we discuss ways of protecting your network, means of ensuring rapid recovery, and the need for reliability. The chapter includes examples, hands-on exercises, and practical applications from Riverstone to strengthen the material presented.

Introduction to MPLS Network Reliance and Recovery

Around the country, you will find highways under repair. A good many of these highways have bypass roads or detours to allow traffic to keep moving around the construction or problem areas. Traffic rerouting is a real challenge for highway departments, but they have learned that establishing detour paths before construction begins is the only way to keep traffic moving (see Figure 4.1).

Figure 4.1: Traffic Detour

The commitment to keep traffic moving has been a staple of philosophy in voice and telephone communications since their inception. In a telephony network, not only are detour paths assigned before a circuit is disconnected (make before break), but the backup or detour paths must be of at least the same quality as the links that are to be taken down for repair. These paths are said to be prequalified (tested) and preprovisioned (already in place).

Historically in IP networking, packets found their own detours around problem areas; there were no preprovisioned bypass roads. The packets were in no particular hurry to get to their destinations. However, with the convergence of voice communications onto data networks, the packets need these bypass roads to be preprovisioned so that they do not have to slow down for the equivalents of construction or road failures.

The Need for Network Protection

MPLS has been implemented primarily in the core of an IP network. Often, MPLS competes head to head with ATM networks; therefore, it would be expected to behave like an ATM switch in the case of network failure.

With a failure in a routed network, recovery could take anywhere from a few tenths of a second to several minutes. MPLS, however, must recover from a failure within milliseconds; the most common standard is 60 milliseconds. To further complicate the recovery process, an MPLS recovery must ensure that traffic can continue to flow with the same quality as it did before the failure. So, the challenge for MPLS networks is to detect a problem and switch over to a path of equal quality within 60ms.

Failure Detection and Tradeoffs

Two primary methods are used to detect network failures: heartbeat detection (or polling) and error messaging. The heartbeat method (used in fast switching) detects and recovers from errors more rapidly but uses more network resources. The error message method requires far fewer network resources but is slower. Figure 4.2 shows the tradeoffs between the heartbeat and error message methods of failure detection.

Figure 4.2: Heartbeat vs. Error Message Failure Detection

The heartbeat method (see Figure 4.3) uses a simple solution to detect failures. Each device advertises that it is alive to a network manager at a prescribed interval of time—hence the term heartbeat. If a heartbeat is missed, the path, link, or node is declared failed and a switchover is performed. The heartbeat method requires considerable overhead—the more frequent the heartbeat, the higher the overhead. For instance, in order to achieve a 50ms switchover, heartbeats would need to occur about every 10ms.

Figure 4.3: Heartbeat Method
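The tradeoff above is simple arithmetic; as a sketch (the threshold of five missed heartbeats is an assumption for illustration, not a standard value):

```python
# If failure is declared after k consecutive missed heartbeats, worst-case
# detection takes about k * interval, so the heartbeat interval must be at
# most budget / k. The default threshold of 5 is an assumed value chosen
# to match the 50ms/10ms figures quoted above.

def max_interval_ms(switchover_budget_ms, missed_threshold=5):
    """Largest heartbeat interval that still meets the recovery budget."""
    return switchover_budget_ms / missed_threshold

max_interval_ms(50)  # a 50ms budget needs heartbeats about every 10ms
```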

In the error message detection method (see Figure 4.4), when a device on the network detects an error, it sends a message to its neighbors, ordering them to redirect traffic to a path or router that is working. Most routing protocols use adaptations of this method. The advantage of the error message method is that network overhead is low. The disadvantages are that it takes time for the error-and-redirect message to reach the network components, and that the error messages might never arrive at the downstream routers at all.

Figure 4.4: Error Messages

If switchover time is not critical (as it has historically been in data networks), the error message method works fine; however, in a time-critical switchover, the heartbeat method is often the better choice for optimal failure recovery.

Reviewing Routing

Remember that in a routed network (see Figure 4.5), data is connectionless, with no real QoS. Packets are routed from network to network via routers and routing tables.

Figure 4.5: Standard Routing

Alternative Paths

If a link or router fails, an alternative path is eventually found and traffic is delivered. If packets are dropped in the process, a Layer 4 protocol (such as TCP) will retransmit the missing data.

This method works well for transmitting non-real-time data, but when it comes to sending real-time packets (such as voice and video), delays and dropped packets are not tolerable.

IGP Rapid Convergence

To address routing-convergence problems, the OSPF and IGP working groups have developed IGP Rapid Convergence, which reduces the convergence time of a routed network to approximately 1 second.

The benefits of IGP Rapid Convergence come at the cost of increased overhead and traffic on the network; moreover, it addresses only half the problem posed by MPLS. The challenge of maintaining QoS parameters across tunnels is not addressed by this solution.

Network Protection

In a network, there are several potential points of failure. Two major types of failures are link failure and node failure (see Figure 4.6). Minor failures could involve switch hardware, switch software, switch databases, and/or link degradation.

Figure 4.6: Network Failures

The telecommunications industry has historically addressed link failures with two types of fault-tolerant network designs: one-to-one redundancy and one-to-many redundancy. Another commonly used network protection tactic employs fault-tolerant hardware.

To protect an MPLS network, you could preprovision a spare path with exact QoS and traffic-processing characteristics. This path would be spatially diverse and would be continually exercised and tested for operations. However, it would not be placed online unless there was a failure on the primary protected path. This method, known as one-to-one redundancy protection (see Figure 4.7), yields the most protection and reliability, but the cost of its implementation can be extreme.

Figure 4.7: One-to-One Redundancy

A second protection scheme is one-to-many redundancy protection. Using this method, a single backup path takes over when any one of several protected paths fails. The network shown in Figure 4.8 can handle a single path failure but not two path failures.

Figure 4.8: One-to-Many Redundancy
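A toy model makes the difference concrete (path names and the greedy backup assignment are invented for illustration):

```python
# Toy comparison of the one-to-one and one-to-many schemes above: 1:1
# reserves a dedicated backup per working path, while 1:N shares one
# backup, which can absorb only a single failure at a time.
# Path names are invented for illustration.

def survivors(working, backups, failed):
    """Greedily assign free backups to failed working paths."""
    free = list(backups)
    carried = set(working) - set(failed)  # unaffected paths keep running
    for path in failed:
        if free:
            free.pop()       # a backup takes over for this failure
            carried.add(path)
    return carried

survivors(["A", "B"], ["bkA", "bkB"], failed=["A", "B"])  # 1:1 rides out both
survivors(["A", "B"], ["shared"], failed=["A", "B"])      # 1:N loses one path
```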

A third protection method is the use of fault-tolerant switches (see Figure 4.9). In this design, every switch features built-in redundant functions—from power supplies to network cards. Figure 4.9 shows redundant network cards with a backup controller. Note that the one item in common, and not redundant, is the cross-connect table. If switching data becomes corrupt, fault-tolerant hardware cannot address the problem.

Figure 4.9: Fault-Tolerant Equipment

Now that we have examined the three network protection designs (one-to-one, one-to-many, and fault-tolerant hardware) and two methods for detecting a network failure (heartbeat and error message), we need to talk about which layers and protocols are responsible for fault detection and recovery.

Fault Detection and Recovery

Given that the further data progresses up the OSI stack, the longer its recovery takes, it makes sense to attempt to detect failures at the physical level first.

MPLS could rely on the Layer 1 or Layer 2 protocols to perform error detection and correction. MPLS could either run on a protected SONET ring or use ATM and Frame Relay fault-management programs for link and path protection. In addition to the protection that MPLS networks could secure via SONET, ATM, or Frame Relay, IP has its own recovery mechanisms in interior gateway protocols such as OSPF.

With all these levels of protection already in place, why does MPLS need additional protection? Because there is no protocol that is responsible for ensuring the quality of the link, tunnel, or call placed on an MPLS link. The MPLS failure-recovery protocol must not only perform rapid switching, it must also ensure that the selected path is prequalified to handle traffic loads while maintaining QoS conditions. If traffic loads become a problem, MPLS must be able to offload lower-priority traffic to other links.

Knowing that MPLS must be responsible for sending traffic from a failed link to a link of equal quality, let’s look at two error-detection methods as they apply to MPLS.

Checkpoint  Answer the following questions.
1. What is considered to be standard recovery time in the event of a network failure?
2. What are the most common methods used to detect network failures in MPLS?
3. What are some of the benefits of using IGP Rapid Convergence as a means to address routing convergence issues?
4. Name two major possible points of failure in a network.

Answers: 1. 60 milliseconds; 2. the heartbeat method and the error-message method; 3. reduced convergence time when rerouting network traffic, at the cost of increased overhead; 4. link failure and node failure.

MPLS Error Detection

LDP and CR-LDP contain an error message type-length value (TLV) to report link and node errors. However, this method has two main disadvantages. First, it takes time to send the error message. Second, since LDP is a connection-oriented protocol, the notification message might never arrive if the link is down.

An alternative approach to error detection is to use the heartbeat method found at the foundation of the RSVP-TE protocol. RSVP has features that make it a good alternative to the error message model. RSVP is a soft-state protocol that requires refreshing—in other words, if the link is not refreshed, that link is torn down. No error messages are required, and rapid recovery (rapid reroute) is possible if there is a preprovisioned path. If RSVP-TE is already used as a signaling protocol, the additional overhead required for rapid recovery is insignificant.
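The refresh-or-teardown behavior can be sketched as follows; the timer values and the class layout are invented for illustration and do not reflect RSVP’s actual message formats:

```python
# Sketch of the soft-state behavior described above: path state persists
# only while refreshes keep arriving, so a dead link tears itself down
# without any error message being delivered. Timer values are arbitrary.

class SoftState:
    def __init__(self, lifetime):
        self.lifetime = lifetime
        self.expires_at = lifetime
        self.up = True

    def refresh(self, now):
        """A refresh message extends the state's lifetime."""
        self.expires_at = now + self.lifetime

    def tick(self, now):
        """Check the timer; expired state is torn down."""
        if now >= self.expires_at:
            self.up = False  # no refresh seen: state removed, no message needed
        return self.up

path = SoftState(lifetime=30)
path.refresh(now=0)
path.tick(now=29)  # still up
path.tick(now=31)  # refresh missed: torn down
```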

RRR

Rapid Reroute (RRR) is a process by which a link failure can be detected without the need for signaling. Because RSVP-TE offers soft-state signaling, it can handle a rapid reroute. Many vendors use RSVP-TE for rapid recovery of tunnels and calls, but in so doing, other MPLS options are restricted (for example, labels are allocated per switch, not per interface). Another restriction is that RSVP-TE must then be used as the across-the-board signaling protocol.

RSVP-TE Protection

In RSVP-TE, two methods are used to protect the network: link protection and node protection.

In link protection, a single link is protected with a preprovisioned backup link. If there is a failure in the link, the switches will open the preprovisioned path (see Figure 4.10).

Figure 4.10: RSVP-TE with Link Protection

In a node failure, an entire node or switch could fail; thus, all links attached to that node could fail. With node protection, a preprovisioned tunnel is provided around the failed node (see Figure 4.11).

Figure 4.11: RSVP-TE with Node Protection

Thrashing Links

A discussion of fault-tolerant networks would not be complete without mentioning thrashing links. Thrashing is a phenomenon that occurs when paths are quickly switched back and forth. For example, in a network with two paths (primary and backup), the primary path fails and the backup path is placed in service. The primary path self-heals and is switched back into service, only to fail again.

Thrashing is primarily caused by intermittent failures of primary paths and preprogrammed switchback timers. In order to overcome thrashing, the acting protocols and switches must use hold-down times. For example, some programs allow 1 minute for the first hold-down time and set a trigger so that, on the second switchback, operator intervention is required to perform a switchover and prevent thrashing.
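A hold-down guard of this kind can be sketched as follows; the 60-second timer and the escalate-on-second-flap rule follow the example above, while the class layout itself is invented:

```python
# Sketch of a hold-down timer guarding against the thrashing scenario
# above: after a switchover, no switchback is allowed until the hold-down
# expires, and a second flap escalates to the operator. Timings and the
# escalation rule mirror the example in the text; the code is illustrative.

class HoldDown:
    def __init__(self, hold_secs=60):
        self.hold_secs = hold_secs
        self.blocked_until = 0
        self.switchbacks = 0

    def try_switchback(self, now):
        if now < self.blocked_until:
            return "held"               # primary must prove stable first
        self.switchbacks += 1
        if self.switchbacks >= 2:
            return "operator-required"  # second flap: manual intervention
        self.blocked_until = now + self.hold_secs
        return "switched"

hd = HoldDown()
hd.try_switchback(now=0)   # first switchback allowed
hd.try_switchback(now=30)  # held: inside the 60 s hold-down
hd.try_switchback(now=90)  # second flap: operator intervention required
```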

Checkpoint  Answer the following questions.
1. Why does MPLS need additional degrees of protection beyond the protections provided by other protocols?
2. Name two types of RSVP-TE network protection.
3. Name the primary method used to avoid thrashing links.

Answers: 1. Because there is no protocol responsible for ensuring the quality of the link, tunnel, or call placed on an MPLS link; 2. link protection and node protection; 3. hold-down time: the process by which a secondary path is kept in place for a minimum amount of time prior to switching back to the primary path.

Practical Applications

Riverstone graciously provided the information for the following practical applications.

Designating Backup Paths

Creating a strict explicit path using only a single primary path provides pinpoint control over how the data associated with the label-switched path will flow. However, in the event of failure, service must be able to recover without manual intervention. Loose explicit routing allows RSVP-TE to signal a new path around a failure, at the expense of control.

Figures 4.12–4.16 show this configuration. Figure 4.12 shows the generic command set on a top level. Figure 4.13 shows the configuration example. Figures 4.14 and 4.15 build from that configuration. In Figure 4.14, we see the commands previously shown in Chapter 3. Figure 4.15 shows the four basic steps needed to configure a backup path, and Figure 4.16 shows the correct syntax.

Figure 4.12: Top-Level Configuration

Figure 4.13: Configuration Example


Figure 4.14: Previous RSVP Command

Figure 4.15: Four Steps to Configure a Backup Path

Figure 4.16: Detailed Syntax Commands for Secondary Path


One of two categories, primary or secondary, is assigned to each path when it is associated with an LSP. Only a single primary path can be associated with an LSP, and it is always preferred over any secondary path. Numerous secondary paths can be associated with an LSP, with a configurable order of preference. The obvious benefit of this approach is the ability to define disparate paths across the backbone, should the network have such a physical configuration.

To associate an existing path to an LSP as a primary or secondary:

RS(config)# mpls set label-switched-path <name> primary|secondary <path name>

Secondary paths are selected based on preference, with the higher numerical value preferred. To define the preference of secondary paths:

RS(config)# mpls set label-switched-path <name> secondary <path name> preference <1-255>

Finally, a secondary path may be defined as a hot or cold standby. A cold standby is the default behavior: the path is not established until the primary has failed, which means that once the instantiating router realizes the primary path is no longer valid, the preferred secondary must be signaled and established before it can be used. A hot standby is a pre-established backup that takes over for a failed primary as soon as the failure is recognized. To configure a hot standby:

RS(config)# mpls set label-switched-path <name> secondary <path name> preference <1-255> standby

One possible solution is to define a completely explicit primary route that is best suited for the traffic carried within the LSP. To protect the primary path, a completely disparate path capable of servicing the LSP could exist and be explicitly configured as a preferred secondary. Since there are no common transit points, it might make sense to configure this secondary path as a hot standby. Finally, a less preferred hop-by-hop backup path can be configured as a backup of last resort. Its main role is to try to establish a path through a network that has already suffered multiple failures impacting the primary and preferred secondary paths. It is pointless to pre-establish this path, because there is no way to predict which paths will remain available from ingress to egress.
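The selection logic this scheme implies — primary always wins, then the live secondary with the highest numeric preference — can be sketched in a few lines. This Python model is a hypothetical illustration of the rules described above (the path names mirror the later configuration example); it is not the RS software's actual algorithm.

```python
# Illustrative model of LSP path selection: one primary, many secondaries
# chosen by preference, where a higher numeric value is preferred.

def select_active_path(primary_up, secondaries):
    """secondaries: list of dicts with 'name', 'preference', and 'up' keys."""
    if primary_up:
        return "primary"                      # the primary is always preferred
    candidates = [s for s in secondaries if s["up"]]
    if not candidates:
        return None                           # no path available at all
    best = max(candidates, key=lambda s: s["preference"])
    return best["name"]

paths = [
    {"name": "ERO-Path2",  "preference": 100, "up": True},   # hot standby
    {"name": "Path-Back1", "preference": 10,  "up": True},   # last resort
]
print(select_active_path(False, paths))   # ERO-Path2
```

If the preferred secondary also fails, the same rule falls through to the hop-by-hop backup of last resort.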

Recall the configurations from Chapters 2 and 3. In Chapter 3, we established RSVP strict paths. The items in bold in the figures are new to the configuration.

The detailed commands are as simple as the four steps shown in Figures 4.15 and 4.16.

Commands are shown in bold in Figure 4.16. The commands in black were covered in Chapter 3.

The summary of commands is as follows, with new commands shown in bold:

interface create ip To-LSR1 address-netmask 192.168.1.2/30 port gi.2.1
interface create ip To-LSR2 address-netmask 192.168.1.6/30 port gi.2.2
interface add ip lo0 address-netmask 2.2.2.1/32
ip-router global set router-id 2.2.2.1
ospf create area backbone
ospf add interface To-LSR1 to-area backbone
ospf add interface To-LSR2 to-area backbone
ospf add stub-host 2.2.2.1 to-area backbone cost 10
ospf start
mpls add interface To-LSR1
mpls add interface To-LSR2
mpls create path ERO-Path1 num-hops 4
mpls create path ERO-Path2 num-hops 3
mpls create path Path-Back1
mpls set path ERO-Path1 hop 1 ip-addr 192.168.1.6 type strict
mpls set path ERO-Path1 hop 2 ip-addr 192.168.1.5 type strict
mpls set path ERO-Path1 hop 3 ip-addr 192.168.1.26 type strict
mpls set path ERO-Path1 hop 4 ip-addr 192.168.1.22 type strict
mpls set path ERO-Path2 hop 1 ip-addr 192.168.1.2 type strict
mpls set path ERO-Path2 hop 2 ip-addr 192.168.1.1 type strict
mpls set path ERO-Path2 hop 3 ip-addr 192.168.1.18 type strict
mpls create label-switched-path LSP from 2.2.2.1 to 2.2.2.2 no-cspf
mpls set label-switched-path LSP primary ERO-Path1
mpls set label-switched-path LSP secondary ERO-Path2 preference 100 standby
mpls set label-switched-path LSP secondary Path-Back1 preference 10
mpls start
rsvp add interface To-LSR1
rsvp add interface To-LSR2
rsvp start
ospf set traffic-engineering on

Show Paths

The show commands are similar to those covered in Chapters 2 and 3. A look at the high-level LSP information shows only the LSP-based information, not the underlying path information.

LER1# mpls show label-switched-paths all

Ingress LSP:
LSPname    To         From       State   LabelIn   LabelOut
LSP        2.2.2.2    2.2.2.1    Up      -         17

Transit LSP:
LSPname    To         From       State   LabelIn   LabelOut

Egress LSP:
LSPname    To         From       State   LabelIn   LabelOut

In order to check the path information, the verbose option must be included in the display command. When a specific path is being used to forward traffic for the LSP, the status of the protection-path field is <Active, disposition>. In steady state, the primary path fills the active role; during network trouble, one of the secondary paths enters the active state based on order of preference. Console messaging will show the transition events.

LER1# mpls show label-switched-paths all verbose

Ingress LSP:
Label-Switched-Path: "LSP"
  state: Up            lsp-id: 0xa
  status: Success
  to: 2.2.2.2          from: 2.2.2.1
  proto: <rsvp>        protection: primary
  setup-pri: 7         hold-pri: 0
  attributes: <FROM_ADDR PRI SEC>
  Protection-Path "ERO-Path1": <Active, Primary>
    state: Up          lsp-id: 0x4004
    status: Success
    attributes: <>
    inherited-attributes: <>
    Path-Signalling-Parameters:
      attributes: <>
      inherited-attributes: <NO-CSPF>
      label in:            label out: 17
      retry-limit: 5000    retry-int: 15 sec.
      retry-count: 5000    next_retry_int: 0.000000 sec.
      preference: 7        metric: 1
      ott-index: 3         ref-count: 1
      bps: 0               mtu: 1500
      hop-limit: 255       opt-int: 600 sec.
      explicit-path: "ERO-Path1" num-hops: 4
        192.168.1.6 - strict
        192.168.1.5 - strict
        192.168.1.26 - strict
        192.168.1.22 - strict
      record-route:
        192.168.1.5
        192.168.1.26
        192.168.1.22
  Protection-Path "Path-Back1": <Secondary>
    state: Null        lsp-id: 0x4015
    status: Success
    attributes: <PREF>
    inherited-attributes: <>
    Path-Signalling-Parameters:
      attributes: <>
      inherited-attributes: <NO-CSPF>
      label in:            label out:
      retry-limit: 5000    retry-int: 15 sec.
      retry-count: 5000    next_retry_int: 0.000000 sec.
      preference: 10       metric: 1
      ott-index: 0         ref-count: 0
      bps: 0               mtu: 1500
      hop-limit: 255       opt-int: 600 sec.
      explicit-path: "Path-Back1" num-hops: 0
  Protection-Path "ERO-Path2": <Secondary>
    state: Up          lsp-id: 0x4012
    status: Success
    attributes: <PREF>
    inherited-attributes: <>
    Path-Signalling-Parameters:
      attributes: <STANDBY>
      inherited-attributes: <NO-CSPF>
      label in:            label out: 17
      retry-limit: 5000    retry-int: 15 sec.
      retry-count: 5000    next_retry_int: 0.000000 sec.
      preference: 100      metric: 1
      ott-index: 1         ref-count: 1
      bps: 0               mtu: 1500
      hop-limit: 255       opt-int: 600 sec.
      explicit-path: "ERO-Path2" num-hops: 3
        192.168.1.2 - strict
        192.168.1.1 - strict
        192.168.1.18 - strict
      record-route:
        192.168.1.1
        192.168.1.18

Transit LSP:

Egress LSP:

When a primary fails and one of the secondary paths assumes the active role, a message is written to the console indicating that the primary has failed and which secondary path is taking over.

%MPLS-I-LSPPATHSWITCH, LSP "LSP" switching to Secondary Path "ERO-Path2".

When the primary path is re-established, it automatically assumes the active role and the associated message is written to the console.

%MPLS-I-LSPPATHSWITCH, LSP "LSP" switching to Primary Path "ERO-Path1".

The default switchback from secondary to primary can be overridden. If the override is in place to prevent automatic switchback, only a failure along the active secondary path or manual intervention will cause a switch back to the primary. Manual intervention requires an operator to use the comment command to remove the active secondary path statement from the configuration; this immediately fails the path back to the primary. The line can then be commented back in immediately afterward.

To override the automatic switchback function:

RS(config)# mpls set label-switched-path <name> no-switchback

The use of the comment command:

RS(config)# comment in|out <linenum>

Fast Reroute and Detour LSP

The Internet draft draft-gan-fast-reroute-00.txt defines a process that allows intermediate nodes along a main LSP to pre-establish detours around possible failure points. It introduces four new RSVP objects:

Fast reroute: The trigger carried in the RSVP session information to indicate that the main LSP requires a detour LSP to be pre-established across the components. This object includes all the information transit routers need to run the CSPF (Constrained Shortest Path First) process and select detours that meet the criteria of the main LSP: setup and hold priorities, bandwidth, and link affinity (include and exclude). The constraint-based information is not required to be the same on the detour path as on the main LSP; if constraints are not specifically configured when fast reroute is configured, they are inherited from the main LSP.

Detour: Includes the IP address of the router requesting the detour LSP and the destination that should be bypassed in the event of failure. The goal is to create the detour around the link and the immediate next hop.

Merge node: The node where the detour and the main path join. The merge node is responsible for mapping both the main LSP inbound label and the detour LSP inbound label to the same outbound action. On the RS platform, this means that multiple related hw-ilm-tbl entries will share a common hw-ott-index, which determines the outbound action.

Branch node: The node where a detour LSP is instantiated to protect the main path.
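The merge node's job — pointing two inbound labels at one outbound action — can be sketched as a pair of lookup tables. The table names below are borrowed from the hw-ilm-tbl / hw-ott-index description in the text, but the layout and all label values are hypothetical, for illustration only.

```python
# Sketch of a merge node's incoming-label map. Both the main LSP's and the
# detour LSP's inbound labels reference the same outbound-action index, so
# traffic merges regardless of which path delivered it.

outbound_actions = {3: {"out_label": 17, "out_port": "gi.2.1"}}  # keyed by ott-index

ilm_table = {
    17: {"ott_index": 3},   # inbound label from the main LSP (hypothetical)
    42: {"ott_index": 3},   # inbound label from the detour LSP (hypothetical)
}

def forward(in_label):
    """Resolve an inbound label to its (out_label, out_port) action."""
    action = outbound_actions[ilm_table[in_label]["ott_index"]]
    return (action["out_label"], action["out_port"])

print(forward(17) == forward(42))   # True: both labels merge to one action
```

Sharing one ott-index is what makes the merge transparent downstream: the next hop cannot tell whether a packet arrived via the main LSP or the detour.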

The simple diagram in Figure 4.17 represents a main LSP protected by a detour LSP. If the main LSP traverses a longer, more complex path, detour LSPs will be established around all possible failure points that have an available alternate. When a failure occurs, as in Figure 4.18, the traffic takes the predefined backup path.


Figure 4.17: Path without a Failure

Figure 4.18: Path After Failure

An older Internet draft (draft-swallow-rsvp-bypass-label-01.txt, November 2000) defines a different approach to the fast-reroute concept. In that approach, the detour uses label stacking to create a "bypass" LSP: the main LSP label is encapsulated within the bypass LSP label, and the penultimate hop on the bypass LSP removes the top-level (bypass tunnel) label, delivering the main LSP label back to the merge node.
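The push-then-pop mechanics of the bypass approach can be shown with a simple label stack. The label values here are made up for illustration; only the stack operations reflect the draft's behavior.

```python
# Sketch of the label-stacking bypass: the branch node pushes a bypass-tunnel
# label on top of the main LSP label, and the penultimate hop of the bypass
# pops it, so the merge node receives the packet with the original label.

MAIN_LABEL, BYPASS_LABEL = 17, 99     # hypothetical label values

packet_stack = [MAIN_LABEL]           # as carried on the main LSP

packet_stack.append(BYPASS_LABEL)     # branch node: enter the bypass tunnel
packet_stack.pop()                    # penultimate bypass hop: pop top label

print(packet_stack)                   # [17] — main LSP label restored
```

Because the inner label is untouched, the merge node needs no special state for the bypass; it simply forwards on the main LSP label as usual.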

There is a possibility these drafts may merge into a single specification.


Cisco’s Tunnel Builder Vendor Solution

For all their admirable qualities, self-healing networks pose three distinct problems: they are difficult to configure, they are difficult to manage, and it is difficult to reserve the required bandwidth. The MPLS working group and the IETF addressed these problems and developed several Management Information Bases (MIBs) and test procedures to confront them.

It took the innovation and foresight of Cisco to integrate these elements into a central management system product: Tunnel Builder.

Figure 4.19 shows that most Cisco routers are managed through the command-line interface (CLI); in a network with thousands of interfaces, this method can prove challenging and cumbersome (see Figure 4.20).

Figure 4.19: Typical Router Configuration

Figure 4.20: CLI Becomes Complex on Larger Networks

Through the use of Telnet and Simple Network Management Protocol (SNMP), Cisco built a centralized management station that runs on a Sun platform. The Sun workstation interfaces directly with the devices to be managed.

Engineers can access network data directly at the Sun platform or remotely through an HTTP interface (see Figures 4.21–4.25).


Figure 4.21: The Cisco Tunnel Builder Solution

The software running to support this system is called Tunnel Builder. It provides a simple graphical user interface (GUI), which is easily understood and easily controllable. Figure 4.22 shows the Tunnel Builder block diagram.

Figure 4.22: Tunnel Builder Block Diagram

Tunnel Builder has a very user-friendly computer/human interface (CHI), as shown in Figures 4.23–4.25. The CHI shows the paths and failovers.


Figure 4.23: Tunnel Builder Demo Setup, Part 1

Figure 4.24: Tunnel Builder Demo Setup, Part 2

Figure 4.25: Tunnel Builder Demo Setup, Part 3


Tunnel Builder promises to be a tool that addresses many of the concerns about MPLS self-healing, configuration, maintenance, and traffic engineering. For more information on Tunnel Builder, visit the Cisco site at www.cisco.com/warp/public/732/Tech/mpls/tb/pres.shtml.

Chapter 5: MPLS Traffic Engineering

Introduction

Having explored much in the way of MPLS fundamentals, we can now take a look at one of the greatest challenges confronting network analysts today: traffic engineering. Effective traffic engineering employs a variety of skills, tactics, and methodologies; we explore some of them in this chapter. Volumes could be devoted to the science of managing network traffic, but here we skim the surface to gain an understanding of how it applies specifically to MPLS: the measurements, procedures, and protocols involved. Practical applications and hands-on exercises supplement the chapter material.


Introduction to MPLS Traffic Engineering

There is a road in Seattle, Washington, that I drove years ago, called Interstate 5. From the suburb of Lynnwood, I could get on the highway and drive into the city, getting off at any exit. If I wanted to go from Lynnwood into the heart of Seattle, I could get onto the express lane. This express lane is like an MPLS tunnel (see Figure 5.1). If my driving characteristics matched the requirements of the express lane, I could use it.

Figure 5.1: Express Lane

Taking this illustration further, let’s say that I enter the freeway and want to drive into the heart of Seattle. I might ask myself, “Which is faster: the express lane or the regular highway? Is there an accident on the express lane? Is the standard freeway faster?”

It would be nice to have a traffic report to use in making my decision, but traffic reports over the radio are not given in real time; by the time that I find out about a slowdown, I will be stuck in it. I could make the mistake of entering the express lane just as an accident happens 5 miles ahead, and I’d be trapped for hours.

It would be great if I had a police escort. The police would drive in front of me; in the event of an accident or a slowdown, they would take me on a detour of similar quality to ensure a timely arrival at my destination.

On the Internet, we have thousands of data roads that are just like Interstate 5. With MPLS, we have a road that is dedicated to traffic with certain characteristics—much like the express lane. To ensure that the express lane is available and free of congestion, we can use protocols like CR-LDP and RSVP-TE. Currently, the most popular of these two protocols appears to be RSVP-TE because it acts like a police escort to ensure that any incoming traffic can be rerouted around the problem area.

When looking at traffic patterns around the country, we see that freeways often experience congestion and delays, whereas other roads are open and allow traffic to flow freely. The traffic is simply in the wrong area. Wouldn’t it be nice if the highway engineers and the city planners could find ways to route heavy traffic to a road that could handle the traffic load and adjust the road capacity as needed to accommodate traffic volume? This is the goal in traffic engineering with MPLS.


Aspects of Traffic Engineering

In data and voice networks, traffic engineering functions in order to direct traffic to the available resources. If achieving a smooth-flowing network by moving traffic around were a simple process, our networks would never experience slowdowns or rush hours.

On the Internet (as with highways), four steps must be undertaken to achieve traffic engineering: measuring, characterizing, modeling, and moving traffic to its desired location (see Figure 5.2).

Figure 5.2: Four Aspects of Traffic Engineering

Measuring traffic is a process of collecting network metrics, such as the number of packets, the size of packets, packets traveling during the peak busy hour, traffic trends, the applications most used, and performance data (i.e., downloading and processing speeds).

Characterizing traffic is a process that breaks raw data into different categories so that it can be statistically modeled. Here, the data that is gathered in the measurement stage is sorted and categorized.

Modeling traffic is a process of using all the traffic characteristics and the statistically analyzed traffic to derive repeatable formulas and algorithms from the data. When traffic has been mathematically modeled, different scenarios can be run against the traffic patterns—for instance, “What happens if voice/streaming traffic grows 2 percent per month for four months?” Once traffic is correctly modeled, simulation software can be used to look at traffic under differing conditions.
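The "what-if" scenario mentioned above — streaming traffic growing 2 percent per month — is a simple compound-growth calculation. The starting figure of 100 Mb/s below is an assumed value for illustration, not data from the text.

```python
# "What-if" scenario from the modeling step: project traffic under a fixed
# monthly growth rate, compounded over a number of months.

def project(traffic_bps, monthly_growth, months):
    for _ in range(months):
        traffic_bps *= 1 + monthly_growth
    return traffic_bps

# 100 Mb/s of streaming traffic growing 2 percent per month for four months:
print(round(project(100_000_000, 0.02, 4)))   # 108243216 (about 108.2 Mb/s)
```

A real modeling tool runs many such scenarios against statistically derived traffic characteristics rather than a single growth rate.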

Putting traffic where you want it is an essential component of traffic engineering. To measure, characterize, and model traffic for the entire Internet is an immense task that would require resources far in excess of those at our disposal. Before MPLS was implemented, engineers had to understand the characteristics and the traffic models of the entire Internet in order to perform traffic engineering.

Articles and white papers tend to focus on only one aspect of MPLS traffic engineering. For example, you may read an article about traffic engineering that addresses only signaling protocols or one that just talks about modeling; however, in order to perform true traffic engineering, all four aspects must be thoroughly considered.

With the advent of MPLS, we no longer have to worry about the traffic on all the highways in the world. We don’t even have to worry about the traffic on Interstate 5. We need only be concerned about the traffic in our express lane—our MPLS tunnel. If we create several tunnels, we need to engineer the traffic for each tunnel.


Provisioning and Subscribing

Before looking at the simplified math processes for engineering traffic in an MPLS tunnel, we need a brief discussion of bandwidth provisioning and subscribing.

First, let’s look at the definitions of pertinent terms. Over-provisioning is the engineering process in which there are greater bandwidth resources than there is network demand. Under-provisioning is the engineering process in which there is greater demand than there are available resources. Provisioning is a term typically used in datacom language.

In telecom language, the term subscribe is used instead of provision. Over-subscribing is having more demand than bandwidth; under-subscribing is having more bandwidth than demand. It is important to note that provisioning terms and subscription terms refer to opposite circumstances.

A pipe/path/circuit has a defined bandwidth: a Cat-5 cable, for example, can in theory carry 100 Mb/s, while an OC-12 can carry 622 Mb/s. These figures count every bit crossing the pipe, overhead and payload alike.

To determine data throughput at any given stage of transmission, you can measure the data traveling through a pipe with relative accuracy using network measurement tools. Alternatively, you can calculate the necessary bandwidth by totaling the payload bits-per-second (bps) rate and adding the overhead bps rate; this second method is more difficult and less accurate than actually measuring the pipe.
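The second (calculated) method looks like the sketch below. The packet size and per-packet header overhead are assumed values chosen for illustration; real traffic mixes many packet sizes, which is one reason direct measurement is more accurate.

```python
# Estimate total link load by summing the payload rate and the overhead rate,
# as described in the text. All inputs here are illustrative assumptions.

def required_bps(payload_bps, payload_bytes_per_pkt, overhead_bytes_per_pkt):
    pkts_per_sec = payload_bps / (payload_bytes_per_pkt * 8)
    overhead_bps = pkts_per_sec * overhead_bytes_per_pkt * 8
    return payload_bps + overhead_bps

# 50 Mb/s of payload in 1,460-byte packets, each with 40 bytes of headers:
print(round(required_bps(50_000_000, 1460, 40)))   # 51369863 (about 51.4 Mb/s)
```

Note how small packets inflate the overhead term: halving the packet size doubles the packets per second and therefore doubles the overhead bandwidth.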

If the OC-12, which is designed to handle 622 Mb/s, is fully provisioned and the traffic placed on the circuit is less than 622 Mb/s, it is said to be over-provisioned. Over-provisioning a circuit gives true QoS a better chance of becoming a reality; however, the cost per unit of traffic carried is significantly higher.

If the traffic placed on the OC-12 is greater than 622 Mb/s, it is said to be under-provisioned. Commercial airlines, for example, under-provision as a matter of course because they calculate that 10 to 15 percent of their customers will not show up for a flight. By under-provisioning, the airlines are assured of full flights; they run into problems, however, when all the booked passengers show up. The same is true for network engineering: if a path is under-provisioned, there is a probability that too much traffic will arrive at once. The advantage of under-provisioning is a significant cost savings; the disadvantages are loss of QoS and reliability.

In Figure 5.3, you can see that you can over- or under-provision a circuit in percentages related to the designed bandwidth.


Figure 5.3: Over-Provisioning vs. Under-Provisioning

Figure 5.4 illustrates that, as we over-provision, QoS increases, but so does cost. If you under-provision, QoS and reliability decrease, and so does the cost.

Figure 5.4: Comparison of Over-Provisioning and Under-Provisioning


Calculating How Much Bandwidth You Need

For the sake of discussion in these examples, let’s assume that you know the characteristics of your network. This is a process of gathering data that is unique to your situation and has been measured by your team.

Example 1: Two Tunnels with Load-Balanced OC-12 Designed for Peak Busy Hour

Let’s say that we want to engineer traffic for an OC-12 pipe, which carries 622 Mb/s (see Figure 5.5). You want rapid recovery, so you use two pipes and load-balance them at 45 percent of capacity each. In this case, if one OC-12 pipe fails (see Figure 5.6), your rapid-recovery protocol can move its traffic to the other pipe, and the combined load (90 percent) still fits within the surviving circuit's capacity.

Figure 5.5: Sample Network Diagram: Example 1


Figure 5.6: Sample Network Failure

Our traffic trends for peak busy hour show that we have the calculations shown in Figure 5.7.

Figure 5.7: Traffic Trends

We can work these numbers just like we would in a checkbook. After we do the math, if we still have money (bits) remaining, we are okay. If our checkbook comes out in the red, we must go back and budget our spending.

Table 5.1 helps to simplify the bandwidth budgeting process as well as demonstrate some of the calculations involved in traffic engineering.

Table 5.1: Traffic Engineering Calculations for Example 1

Traffic Totals/Demands          Quantity       Totals and Subtotals   Notes
Number of voice calls           200
  b/s/call                      100,000
Total voice streams in b/s      20,000,000     20,000,000
Number of video calls           3
  b/s/call                      500,000
Total video streams in b/s      1,500,000      1,500,000
Committed information rate      250,000,000    250,000,000
Other traffic                   0              0
Total traffic demand                           271,500,000            BW needed

Usable Bandwidth
Circuit BW OC-12                622,000,000
Percentage used                 45%                                   Over-provisioned
Total BW for over-provisioned   279,900,000    279,900,000            BW on hand
                                               271,500,000            BW needed
Remaining BW                                   8,400,000              Good BW remaining
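The "checkbook" arithmetic behind Table 5.1 can be verified in a few lines; this snippet simply reproduces the table's figures.

```python
# The bandwidth checkbook of Table 5.1, line for line.

voice   = 200 * 100_000          # 20,000,000 b/s of voice
video   = 3 * 500_000            #  1,500,000 b/s of video
cir     = 250_000_000            # committed information rate
demand  = voice + video + cir    # 271,500,000 b/s needed

oc12    = 622_000_000            # OC-12 line rate
on_hand = oc12 * 0.45            # 279,900,000 b/s at 45% loading

print(demand, int(on_hand), int(on_hand - demand))
# 271500000 279900000 8400000 — still "in the black"
```

If the balance went negative, the budget would have to be reworked: fewer calls, a smaller committed rate, or a bigger pipe.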

Now that we understand the basic concept, let’s play with the figures a bit to achieve the outcomes that we need.

Example 2: Calls with Silence Suppression

First, let’s say that we are going to use silence suppression on the voice calls. With silence suppression, no bandwidth is used when nothing is being transmitted. You can see the effects of silence suppression in Figure 5.8, which shows a simple 10-count spoken over 10 seconds.

Figure 5.8: Voice with Silence Suppression

The lows in the graph indicate the periods in which no data is being sent. Silence suppression helps when the traffic has the characteristics of phone calls (see Figure 5.8). However, if the calls are streaming voice, such as a radio show or piped-in music (as shown in Figure 5.9), the baseline is higher and more overall bandwidth is used.


Figure 5.9: Vocal Jazz Music (The Andrews Sisters Singing “Boogie-Woogie Bugle Boy”)

Using silence suppression, we can reduce the number of bits required for voice calls down to 100K. Notice in Table 5.2 that we have more remaining bandwidth with which to work.

Table 5.2: Traffic Engineering Calculations for Example 2

Traffic Totals/Demands          Quantity       Totals and Subtotals   Notes
Number of voice calls           100
  b/s/call                      100,000
Total voice streams in b/s      10,000,000     10,000,000
Number of video calls           3
  b/s/call                      500,000
Total video streams in b/s      1,500,000      1,500,000
Committed information rate      250,000,000    250,000,000
Other traffic                   0              0
Total traffic demand                           261,500,000            BW needed

Usable Bandwidth
Circuit BW OC-12                622,000,000
Percentage used                 45%                                   Over-provisioned
Total BW for over-provisioned   279,900,000    279,900,000            BW on hand
                                               261,500,000            BW needed
Remaining BW                                   18,400,000             Good BW remaining

Example 3: Over-Subscribed by 110 Percent

Many carriers choose to over-subscribe because they cannot afford the cost of designing a highway system for rush-hour traffic; instead, they design the network for "normal" traffic. Over-subscribing a network is similar to airlines overbooking flights: there is a statistical point at which the possible loss of customers costs less than running planes at half capacity.

Let’s use the same example as before, with no switchable path and an OC-12 pipe that can tolerate some congestion during rush hour. We choose not to design the tunnel for peak busy-hour traffic; instead, we subscribe it at 110 percent of the available bandwidth.

On paper, this looks great because we can still handle several hundred more calls, and it is an accountant's dream. Trouble, however, lies in wait. What happens if all the traffic arrives at the same time? And how can we handle a switchover to another link? If this link is subscribed at 110 percent and the spare link is subscribed at 110 percent, one link will carry a 220 percent workload during a single link failure and is more than apt to fail itself (see Table 5.3).

Table 5.3: Traffic Engineering Calculations for Example 3

Traffic Totals/Demands          Quantity       Totals and Subtotals   Notes
Number of voice calls           100
  b/s/call                      100,000
Total voice streams in b/s      10,000,000     10,000,000
Number of video calls           3
  b/s/call                      500,000
Total video streams in b/s      1,500,000      1,500,000
Committed information rate      250,000,000    250,000,000
Other traffic                   0              0
Total traffic demand                           261,500,000            BW needed

Usable Bandwidth
Circuit BW OC-12                622,000,000
Percentage used                 110%                                  Over-subscribed
Total BW subscribed             684,200,000    684,200,000            BW on hand
                                               261,500,000            BW needed
Remaining BW                                   422,700,000            Good BW remaining
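The failover arithmetic that makes 110 percent subscription dangerous is worth spelling out; the function below is a simple illustration of the scenario described above, not a capacity-planning formula.

```python
# Why subscribing each link to 110% breaks failover: when one of two links
# fails, the survivor inherits the full subscribed load of both.

def survivor_load(per_link_subscription, links=2, failed=1):
    """Fraction of one link's capacity demanded after `failed` links go down."""
    return per_link_subscription * links / (links - failed)

print(survivor_load(1.10))    # 220% of the surviving link's capacity: it fails too
print(survivor_load(0.45))    # 90%: the 45% design of Example 1 survives a failure
```

The rule of thumb this implies: for N links to survive one failure, each link's subscribed load must stay below (N - 1)/N of its capacity.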

Checkpoint  Answer the following true/false questions.

1. Measuring traffic is the process of collecting network metrics.
2. Characterizing traffic is a process of categorizing the data gathered in the measurement stage of traffic engineering.
3. Modeling traffic is the process of making network traffic go where you want it to go.
4. Silence suppression is a method of network engineering wherein no bandwidth is used if there is no transmission.
5. Moving traffic in a fashion that best suits your needs is an essential component of traffic engineering.


Answers: 1. True; 2. True; 3. False; 4. True; 5. True.

Practical Applications: OPNET[1]

There are two parts to MPLS traffic engineering. The first involves analysis, network discovery, and math; the second is putting traffic where you want it to be.

Network analysis can be a rather advanced subject. Some software designers have developed tools to make this job easier.

OPNET has designed a line of software packages that help with network design and traffic planning. Cisco's Tunnel Builder offers not only network discovery but also a method to guarantee bandwidth for high-priority circuits (e.g., voice).

This chapter has presented traffic engineering in an extremely simplified fashion for ease of learning. Mastering the art and science of traffic engineering—especially traffic planning and modeling—takes years of study, sophisticated algorithms, and specialized tools.

Throughout my travels, while teaching MPLS and interviewing network designers, I have found that two key concerns of any network professional are traffic engineering and having the necessary tools to do the job right. Over the years, OPNET has earned a dominant presence in the network planning and design marketplace by developing products that combine ease of use with accurate algorithms. Tools such as OPNET SP Guru, with its integrated support for both Cisco and Juniper, make OPNET a must-have for network designers.

As mentioned earlier, traffic engineering has four main aspects: measuring, characterizing, modeling, and directing the traffic to a desired location. OPNET MPLS solutions address three of these steps: measuring, characterizing, and modeling.
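As a toy illustration of the first two of those steps, the following Python sketch "measures" a list of flow records and "characterizes" them into two classes. The flow names, rates, and 64K threshold are invented for illustration only; real tools such as OPNET work from live network data.

```python
# Toy sketch of the "measure" and "characterize" steps of traffic
# engineering. Flow records and the 64K class threshold are invented
# for illustration; production tools gather these from live networks.

def characterize(flows, threshold_kbps=64):
    """Split measured flows into real-time and bulk classes by rate."""
    classes = {"real-time": [], "bulk": []}
    for name, kbps in flows:
        key = "real-time" if kbps <= threshold_kbps else "bulk"
        classes[key].append(name)
    return classes

measured = [("voip-1", 16), ("voip-2", 16), ("backup", 900), ("web", 150)]
print(characterize(measured))
```

The characterized classes would then feed the modeling step, where "what-if" scenarios are run against them.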

OPNET as a modeling tool is a twofold entity: dynamic and interactive. OPNET dynamically gathers data from operational networks. This data is compiled and processed to create a network model. From this operational model, planners can work with OPNET to perform “what-if” analysis, traffic optimization, peak busy-hour simulations, and other scenarios.

OPNET MPLS Capabilities and Features

OPNET offers an MPLS model as part of its Specialized Model Library. Based on Internet standards and developed in collaboration with industry experts,[1] the OPNET MPLS model offers the most comprehensive and accurate performance predictions of networks that incorporate MPLS technology and traffic engineering policies.


OPNET’s MPLS model has broad appeal to those responsible for the functions outlined in Figure 5.10.

Figure 5.10: OPNET MPLS Uses

Combined with OPNET’s core products, the OPNET MPLS model benefits MPLS networks by providing the ability to perform the services outlined in Figure 5.11.

Figure 5.11: OPNET Services

Perform network component failure/recovery analysis (www.opnet.com/products/library/fail_recov.html): Quantify the performance impact of failure of specific links and devices.

Automate MPLS traffic engineering (TE) (www.opnet.com/products/spguru/plan.html): OPNET automatically defines LSP explicit routes that minimize maximum link utilizations under normal conditions and secondary routes that will survive link and node failures. OPNET identifies link capacity that is unused during any failure, highlighting opportunities to reduce network capacity without impacting service levels.

Perform traffic engineering (TE) analysis (www.opnet.com/services/training/network_analysis_design.html): Make network operations more efficient and reliable while optimizing performance and resource utilization.

Virtual deployment of QoS - Differentiated Services (DiffServ): Gain insight into how Quality of Service (QoS) can be implemented to meet a Service Level Agreement (SLA).

Design and validate new architectures (www.opnet.com/services/training/network_analysis_design.html): For the R&D community, an open control plane model architecture offers an opportunity to validate new approaches and test interoperability.

Key features include those shown in Figure 5.12.


Figure 5.12: Key Features of OPNET MPLS Model

In an operational model, we first gather data from a network. Data is processed and performance software generates a top-view performance map, as shown in Figure 5.13. This figure shows a nonoptimized network using standard routing protocols.

Figure 5.13: Nonoptimized Network

Through a process of network performance tuning using MPLS tunnels, we see in Figure 5.14 that OPNET generates a new network map and models showing several paths for traffic: a signaled LDP, a routed LDP, and an OSPF path.


Figure 5.14: Optimized Network

Powerful networks require powerful tools for design, modeling, and testing. OPNET provides these programs in its suite of planning tools. For more information, contact OPNET Technologies, Inc., at www.OPNET.com or call (240) 497-3000.

[1]OPNET’s MPLS Model Development Consortium includes MPLS experts from Cisco, Ericsson, Hyperchip, Marconi, NEC, NTT, QoS Networks, and Qwest.

Exercise 5.1: Traffic Engineering

The customer has four classifications of traffic: real-time voice and video, VoIP, time-critical data traffic, and nontime-critical data traffic. The customer currently has three T1 lines that run at 1.544 Mbps:

The first T-1 has the best SLA, with 9,000 hours MTBF and an MTTR of 2 minutes. The POC is .0001.

The second T-1 carries the second grade of traffic, with 6,000 hours MTBF and an MTTR of 5 minutes. The POC is .001.

The third T-1 carries the third grade of traffic, with 4,000 hours MTBF and an MTTR of 15 minutes. The POC is .01.

Note: Use the related traffic engineering RFCs, Erlang tables, and necessary reference documents found on the Web.
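A common way to compare SLAs like these is steady-state availability, A = MTBF / (MTBF + MTTR). The quick Python sketch below (not from the book; the formula is the standard reliability definition) computes it for the three lines described above.

```python
# Steady-state availability from MTBF and MTTR: A = MTBF / (MTBF + MTTR).
# The MTBF/MTTR figures are the three T-1 SLAs given in the exercise.

def availability(mtbf_hours, mttr_minutes):
    """Fraction of time the line is expected to be up."""
    mttr_hours = mttr_minutes / 60
    return mtbf_hours / (mtbf_hours + mttr_hours)

lines = [("T-1 #1", 9000, 2), ("T-1 #2", 6000, 5), ("T-1 #3", 4000, 15)]
for name, mtbf, mttr in lines:
    print(f"{name}: availability = {availability(mtbf, mttr):.6f}")
```

Running it shows why the first T-1 is reserved for the most demanding traffic: it has both the longest MTBF and the shortest MTTR.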

Part 1: Voice

The customer has 200 voice circuits with a mean call duration of 10 minutes and peak traffic of 35 simultaneous calls. They require a POC of .001, with an MTBF of 7000 hours and an MTTR of less than 5 minutes.

1.  How many voice channels are required?

Answers

1.  BHT = Average call duration (s) × Calls per hour / 3,600


10 minutes = 600 seconds; 600 × 35 / 3,600 = 5.8 Erlangs

Use the Erlang B calculator at www.erlang.com/calculator/erlb/ to determine that, for a POC of .01, 9 lines are needed.

Because of the MTBF requirement, the first T-1 must be used, and roughly half of the T1 would be needed for standard PCM coding.

Nine (9) voice channels are required.
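The BHT formula and the Erlang B lookup can also be checked programmatically. The sketch below implements the standard Erlang B recursion; it is an illustrative stand-in for the online calculator, and the channel count it reports depends on the blocking target (POC) you feed it.

```python
# Erlang B blocking probability via the standard recursion:
#   B(a, 0) = 1;  B(a, n) = a*B(a, n-1) / (n + a*B(a, n-1))
# erlang_b(a, n) is the probability a call is blocked with offered
# load `a` Erlangs on `n` channels.

def erlang_b(a, n):
    b = 1.0
    for k in range(1, n + 1):
        b = a * b / (k + a * b)
    return b

def channels_needed(a, target_blocking):
    """Smallest channel count whose blocking meets the target POC."""
    n = 1
    while erlang_b(a, n) > target_blocking:
        n += 1
    return n

bht = 600 * 35 / 3600          # 10-minute calls, 35 in the busy hour
print(f"BHT = {bht:.1f} Erlangs")
print(f"Channels for POC .01: {channels_needed(bht, 0.01)}")
```

Comparing the recursion's output against a published Erlang B table is a good way to double-check any worked answer.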

Part 2: Video

The video used for videoconferencing uses one voice channel for each video channel. For one videoconference, two channels are required. The company expects not to exceed the need for 5 simultaneous videoconferences with an average duration of 1.5 hours.

The company requires a POC of .01, with an MTBF of 700 hours and an MTTR of less than 10 minutes.

1.  How many video channels are required?

2.  How much bandwidth is required?

Answers

1.  Each videoconference = 1 channel for voice and 1 for video, so 2 channels are required per conference.

Up to 5 conferences, so the calculation is 5 × 2 voice channels; that is, (5 × 2) × 64K, or 640K.

Only five (5) video channels are required; however, in this case we combine voice with video for a total of two (2) channels per videoconference for a total of ten (10) channels.

2.  10 × 64K = 640K

Part 3: Streaming Data

Streaming data is used for real-time stock market reports, news feeds, and some limited VoIP. The data streams consist of a data stream of 5Kbps for the stock market for 8.5 hours a day (business hours). The news feed is 15Kbps, and the average VoIP calls are 16K, with no more than 10 VoIP calls connected simultaneously. The average call duration of the VoIP calls is 2 hours.

The company requires a POC of .01, with an MTBF of 6000 hours and an MTTR of less than 5 minutes.

1.  How much bandwidth is required?

Answers


1.  5K data stream + 15K news + 160K VoIP = 180K

Part 4: Time-Critical Data

The company uses time-critical data for critical business information. They connect using SAP, Oracle, and PeopleSoft software in their daily business requirements.

The sessions are relatively short, with an average duration of 5 minutes. The average amount of data sent per session is 250K. This is user data and does not include overhead.

There are 2,000 user terminals, with the average call duration being 5 minutes. The average number of calls per terminal per operational hour is five. (Assume an eight-hour day.)

The company requires a POC of .001, with an MTBF of 7000 hours and an MTTR of less than 5 minutes.

1.  What is the required bandwidth?

Answers

1.  In order to solve this problem, several factors must be calculated. Let’s say that the overhead is 20 percent; the 250K per session then becomes 300K sent over 5 minutes, or an average of 1K per second per session.

There are 2,000 terminals at five (5) sessions per hour, or 10,000 sessions per hour.

Average call duration (s) × Calls per hour / 3,600 = 300 × 10,000 / 3,600 ≈ 833 simultaneous sessions at 1K each, so the required bandwidth is approximately 833K.

Part 5: Nontime Critical Data

The company uses e-mail and Web browsers to conduct daily business. It has 14,000 employees, each with access to the Internet and e-mail accounts. This data is not necessarily time sensitive; however, it is critical to operations.

The average employee checks his or her e-mail four times a day for 10 minutes per time.

The average employee surfs the Web five times a day for 5 minutes at a time, loading less than 150K of data per hour per active connection.

The company requires a POC of .05, with an MTBF of 2000 hours and an MTTR of less than 20 minutes.

1.  How many kilobytes of bandwidth are required to support the nontime-sensitive data?

Answers

1.  Assume that e-mails load rapidly and that most of the 10 minutes is spent by end users reading their e-mail. Since it is not stated, let’s assume that each e-mail session transfers 300K in one minute. Each employee checks e-mail 4 times a day, or .5 times per workday hour.

14,000 × .5 / 3,600 ≈ 1.9


Rounding up, let’s allocate 2 channels of 300K for e-mail.

Surfing-the-web time does not directly translate to network use. Web pages are loaded within seconds while the users may spend several minutes reading the page.

Since HTTP use may create less of a load than e-mail, let’s conservatively estimate 3 channels of 300K for HTTP.


We have calculated best guess estimates; a better method would be to measure the bandwidth actually used for HTTP and e-mail.

Using this best-guess estimate, we should reserve 5 channels of 300K for e-mail and HTTP combined.

Part 6: Review Questions

Answer the following questions:

1.  What is the maximum bandwidth required to support all the customer’s requirements?

2.  How would you classify the traffic?

3.  Explain your provisioning and over-provisioning strategies for each classification of traffic.

4.  Describe how you would allocate the traffic to the various channels. Show your calculations.

Answers

1.  Voice (9 voice channels, rounded up to 10 channels): 640K

Videoconference traffic: 640K

Data streaming (5K data stream + 15K news feed + 160K VoIP): 180K

Real-time data: 833K

Non-real-time data (5 channels at 300K): 1,500K

2.  Each T1 = 1.544 Mbps, or 1,544K.

1st T-1: Voice, videoconferencing, and data streaming fit well into the first T1.

2nd T-1: Real-time data of 833K goes on the second T1.

3rd T-1: The non-real-time data fits into the third T1.

3.  Services are allocated to the T1s based upon traffic analysis, as well as the MTBF and MTTR requirements.

4.  Shown in the previous answers.
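As a sanity check on the allocation above, a few lines of Python can confirm that each class fits its T-1. This is illustrative only; the per-class figures are the ones computed in answer 1, with non-real-time taken as 5 × 300K = 1,500K.

```python
# Check that each service class fits within its assigned T-1.
# A T-1 carries 1.544 Mbps, i.e. 1,544K; figures below are from the
# worked answers (non-real-time = 5 channels x 300K = 1,500K).

T1_CAPACITY_K = 1544

allocation = {
    "T-1 #1": [("voice", 640), ("video", 640), ("streaming", 180)],
    "T-1 #2": [("real-time data", 833)],
    "T-1 #3": [("non-real-time data", 1500)],
}

for line, services in allocation.items():
    total = sum(kbps for _, kbps in services)
    assert total <= T1_CAPACITY_K, f"{line} over-provisioned"
    print(f"{line}: {total}K of {T1_CAPACITY_K}K used")
```

Note how little headroom the first and third lines have; this is exactly the over-provisioning tradeoff the chapter summary discusses.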


Chapter Review and Summary

Traffic engineering for MPLS consists of four elements: measurement, characterization, modeling, and putting traffic where you want it to be. In performing traffic placement, MPLS can use either of the traffic-engineering protocols named in discussions about advanced signaling (CR-LDP or RSVP-TE). Of the two protocols, RSVP-TE appears to be more dominant, but it costs more in bandwidth; it is like paying for a police escort whenever and wherever you travel.

The rest of traffic engineering is far from simple. You must measure, characterize, and model the traffic that you want. Once you have the information that you need, you can then perform mathematical calculations to determine how much traffic can be placed on your tunnel.

The mathematical processes involved in engineering traffic are much like those involved in balancing a checkbook. You should never allow the balance of your available resources to go into the “red,” or negative, area.

The tradeoff decisions are difficult to make: Can you over-provision (over-book) your tunnel and just hope that rush-hour traffic never comes your way? In the event of a failure, where is the traffic going to go?

Knowledge Review 

Answer the following questions.

1. Name the four elements of traffic engineering.
2. Name two models of bandwidth provisioning.
3. What is the primary advantage of over-provisioning a circuit?
4. What are the primary disadvantages of under-provisioning a circuit?
5. Preserving bandwidth when there are no active transmissions on a circuit is called what?
6. Overbooking a circuit and assuming that “rush-hour” traffic will not happen is called what?

Answers: 1. Measuring, characterizing, modeling, and moving traffic; 2. over-provisioning and under-provisioning; 3. excess bandwidth increases chances of achieving true QoS; 4. lack of bandwidth negatively affects both QoS and reliability; 5. silence suppression; 6. over-provisioning by 110 percent.

Going Further

Traffic modeling: www.comsoc.org/ci/public/preview/roberts.html

Draft math RFC: www.ietf.org/internet-drafts/draft-kompella-tewg-bw-acct-00.txt

Lucent: http://google.yahoo.com/bin/query?p=modeling+internet+traffic&hc=0&hs=0

Traffic Engineering Work Group: www.ietf.org/html.charters/tewg-charter.html

Inside the Internet statistics: www.nlanr.net/NA/tutorial.html

Excellent measurement site: www.caida.org

Modeling and simulation software: http://NetCracker.com

QoS measurement systems: http://Shomiti.com


Chapter 6: Introduction to MPλS and GMPLS

Introduction

In this chapter, we’re going beyond the basic features of MPLS to the future of networking. To have one automatic network control structure is the dream of every carrier. The ability to make this dream come true has appeared in the form of a new protocol set that comprises the framework of Generalized Multiprotocol Label Switching (GMPLS). Here, we discuss the composition and capabilities of GMPLS, and we support the chapter content with examples, applications, hands-on exercises, and resource links.


MPλS and GMPLS

Do you remember the TV ads several years ago for a famous kitchen knife? It was not an ordinary knife—no, sir. This knife could slice, dice, and julienne. It could saw through a tin can and afterward still cut a tomato into paper-thin slices.

Like that famous knife, GMPLS is not ordinary MPLS. GMPLS discovers its neighbors, distributes link information, provides topology management, and provides path management, link protection, and recovery. But that’s not all! GMPLS packets fly through the network at nearly the speed of light.

In performing these functions, GMPLS can help us achieve the pinnacle of networking (see Figure 6.1). GMPLS allows for centralized control, automatic provisioning, load balancing, provisioned bandwidth service, bandwidth on demand, and the presence of an optical virtual private network (OVPN).


Figure 6.1: GMPLS Advantages

Let’s look at what led up to the creation of this super MPLS protocol: GMPLS.

In the beginning, there was one network—the telecom network. Much later, datacom and the Internet came along. The telecommunications world was divided into two different and distant parts: the datacom world and the telecom world. Datacom was primarily concerned with nonreal-time performance; the telecom/voice communications network was concerned with real-time performance.

Where Networking Is Today

For years now, the datacom and the telecom networks have existed in different worlds. Having different objectives and customer bases, each discipline has formed its own language, procedures, and standards. Placing data on a telecom network was a challenging task: datacom traffic had to be encapsulated in several layers before it could ride a voice network.

In Figure 6.2, we see data traffic that has been stacked on top of an ATM layer. It has to be stacked again on the SONET network to allow for sending. Additional stacking takes place in order to ensure compatibility between this traffic and an optical DWDM network.


Figure 6.2: Data, ATM, SONET, and DWDM

Notice that each layer has its own management and control. This method of passing data onto a telecom network is both inefficient and costly. Interfacing between layers requires manual provisioning; different types of service providers manage each layer separately. Reducing the number of interface layers promises to both reduce overall operational cost and improve packet efficiency. GMPLS concepts promise to fulfill the aspiration of having one interface and one centralized automatic control.

As the world of telecom marches toward its goal of an all-optical network, we find that the required paths of data packets are varied; they must pass across several different types of networks before being carried by an optical network. These network types, which have been defined in several draft RFCs, include packet switch networks, Layer 2 switch networks, Lambda switch networks, and fiber switch networks (see Figure 6.3).

Figure 6.3: Network Types


Where Networking Is Going

In Figure 6.4, we see the promise of GMPLS. Figure 6.4a represents our current position in the datacom-to-optical network interface. Data from routers goes to ATM switches. The ATM switches connect to SONET switches, and the SONET switches connect to DWDM networks. As the network migrates, layers of this stack begin to disappear—first with the elimination of ATM via MPLS, then with the thinning of SONET (“Thin SONET”) under GMPLS, and finally with packet over DWDM with switching (d in Figure 6.4).

Figure 6.4: The Promise of GMPLS


The Birth of GMPLS

MPLS researchers proved that a label could map to a color in a spectrum and that MPLS packets could be linked directly to an optical network. They called this process MPλS, or MPLambdaS (see Figure 6.5). As research continued, it was found that, in order to have a truly dynamic network, we required a method for complete control of a network within the optical core. Thus, the concept of intelligent optical networking was born.

Figure 6.5: MPλS

Since MPLS offered network switching, and since provisioning could be accomplished automatically in MPLS, this feature could be carried on to the telecom networks, and switches could be provisioned using an MPLS switch as a core. However, since MPLS was specific to IP networks, the protocols would have to be modified in order to talk to the telecom network equipment. The generalizing of the MPLS protocol led to the birth of GMPLS. The protocol suite formerly known as MPLambdaS had become the grandfather, so to speak, of GMPLS.
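The idea of “generalizing” the label can be pictured as a label that names not just a packet-header value but any switchable resource. The Python sketch below is purely conceptual: the class, field names, and values are invented for illustration and do not match the actual GMPLS label encodings defined in the RFCs.

```python
# Conceptual sketch of a "generalized" label: in GMPLS a label may
# identify a packet label, a TDM timeslot, a lambda (wavelength), or
# a fiber/port. Names and values here are illustrative only.

from dataclasses import dataclass

@dataclass
class GeneralizedLabel:
    switching_kind: str   # "packet", "tdm", "lambda", or "fiber"
    value: int            # label number, timeslot, wavelength id, or port

# One label-switched path crossing three kinds of switching capability:
lsp_hops = [
    GeneralizedLabel("packet", 1025),   # MPLS label at the edge router
    GeneralizedLabel("lambda", 7),      # wavelength inside the optical core
    GeneralizedLabel("fiber", 3),       # physical port on a fiber switch
]

for hop in lsp_hops:
    print(f"{hop.switching_kind}: {hop.value}")
```

The point of the sketch is simply that one signaling machinery can carry all four kinds of label, which is what lets GMPLS provision packet and optical gear alike.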

In Figure 6.6, we see a GMPLS network with IP protocol running from end to end, MPLS protocol running from edge router to edge router, and GMPLS running in the middle of the network. Accomplishing the task of controlling the core networks is no simple feat. It requires the development of several different interfaces and protocols—in fact, GMPLS is not just one protocol; it’s a collection of several different standards written by different standards-making bodies in order to accomplish one goal.

Figure 6.6: GMPLS, MPLS, and IP


Adding a bit more detail to the drawing, we find that the ATM interface is called User Network Interface, or UNI; the SONET interface is called Optical User Network Interface, or O-UNI, and the DWDM interface can be called Link Management Protocol, or LMP, as shown in Figure 6.7.

Figure 6.7: Network with Interfaces Added

The GMPLS Control Plane

To control components outside the standard data packet, a separate control plane was developed for GMPLS. This control plane is the true magic of GMPLS. It allows for the total control of network devices.

The GMPLS control plane provides for six top-level functions:

Discovery of neighborhood resources

Dissemination of link status

Topology link-state management

Path management and control

Link management

Link protection

Neighbor Discovery

The link manager, all switches, and all multiplexers must know about every component of the network: all routers and all active components. GMPLS uses a new protocol, LMP, to discover these devices and to negotiate functions (see Figure 6.8).


Figure 6.8: Neighbor Discovery

Dissemination of Link Status

It does no good just to know what hardware is out there if the link is down or having problems. To disseminate this information, a routing protocol must be used. For GMPLS, both the OSPF and the IS-IS protocols are being modified to support this function (see Figure 6.9).

Figure 6.9: Link Status Distribution

Topology State Management

Link-state routing protocols, such as OSPF and IS-IS, can be used to control and manage the link-state topology (see Figure 6.10).


Figure 6.10: Topology Information

Path Management

In previous chapters, we determined that MPLS can use RSVP to establish a link from end to end. However, if MPLS data traverses telecom networks, other protocols such as UNI, PNNI, or SS7 must be implemented. Path management can be a real challenge because several standards organizations are involved in the process. Currently, the IETF is working on modifications to RSVP and Label Distribution Protocol (LDP) to extend those protocols to allow for GMPLS path management and control (see Figure 6.11).

Figure 6.11: Path and Link Management Control

Link Management

In MPLS, LDP is used to establish, tear down, and aggregate links. In GMPLS, the ability to establish and aggregate optical channels is required. LMP extends the MPLS functions into an optical plane where link building improves scalability (see Figure 6.12).


Figure 6.12: Link Management

Protection and Recovery

Intelligent optical networking allows inflexible optical networks to interact with each other. With GMPLS, instead of having one ring with a backup ring for protection, the network creates a true mesh that allows for several different paths (see Figure 6.13). Optical networking can go from a one-to-one protection method to a one-to-many protection method.

Figure 6.13: SONET Matrix

Checkpoint

Answer the following true/false questions.

1. Link Management Protocol (LMP) extends MPLS functions into an optical plane.
2. GMPLS could use OSPF and IS-IS for link-state topology purposes.
3. RSVP and LDP are being modified by the IETF for GMPLS path management.
4. GMPLS was developed independently of all other MPLS protocols.


Chapter Summary and Review

Is That All There Is?

Although the advent of the control plane is a major advance in networking, the concept and power behind GMPLS are by no means “all there is.” Several protocols are under review, and more new protocols are to be written. The Optical User Network Interface (O-UNI) must be developed and tested further, as must LMP. The challenge for the future will be to get all the protocols and interfaces developed and tested.

The Future

GMPLS will further extend the reach of MPLS via the control plane, allowing it to reach into other networks and provide for centralized control and management of these networks. It will bring greater flexibility to somewhat rigid optical networks and provide carriers with centralized management and control. Provisioning of network resources, which (as of this writing) is still done manually, will soon be automated through GMPLS.

Who Are the Players?

The list of participants reads like a “Who’s Who” in telecom and datacom networking combined. A short list can be obtained from the referenced Internet drafts; however, this list is only a partial one because it includes neither the contributors in the ITU nor those from other associations and working groups.

For your convenience, here is a short list of some of the major players in GMPLS:

Accelight Networks Inc.: www.accelight.com

Alcatel: www.alcatel.com


AT&T: www.att.com

Axiowave: www.axiowave.com

Calient Networks Inc.: www.calient.net/

Ciena Corp.: www.ciena.com

Cisco Systems Inc.: www.cisco.com


Edgeflow: www.metanoia.com

Juniper: www.juniper.net/

Metanoia: www.metanoia.com

Movaz Networks Inc.: www.movaz.com

Nayna: www.nayna.com

NetPlane Systems Inc.: www.netplane.com/

Nortel Networks Corp.: www.nortelnetworks.com

Polaris Networks: www.polarisnetworks.com

QOptics Inc.: www.q-optics.com

Sycamore Networks Inc.: www.sycamorenet.com

Tellium Inc.: www.tellium.com

Turin: www.turinnetworks.com

Zaffire: www.zaffire.com

Standards

In order to accomplish the goal of GMPLS, several standards organizations must get together. The Sub-IP group (www.ietf.org/html.charters/wg-dir.html) of the IETF has formed several working groups that collectively (and diligently) have written 37 draft GMPLS standards (visit http://search.ietf.org/search/cgi-bin/BrokerQuery.pl.cgi?broker=internet-drafts&query=gmpls&caseflag=on&wordflag=off&errorflag=0&maxlineflag=50&maxresultflag=1000&descflag=on&sort=by-NML&verbose=on&maxobjflag=75). The working groups are known as Common Control And Management Plane Working Group (CCAMP); Internet Traffic Engineering, IP over Optical (TEWG); General Switch Management Protocol (GSMP); IP over Resilient Packet Ring (IPORPR); and Multi-Protocol Label Switching (MPLS).

The International Telecommunications Union (ITU; www.itu.int/ITU-T/studygroups/com15/aap/table-sg15aap.html) is addressing several standards and recommendations, including G.705, G.707, G.709, G.7713/Y.1704, G.7714/Y.1705, G.7712/Y.1703, G.783, G.8030, G.8050, G.871, G.872, G.8070, G.8080, G.959.1.

These are only a few of the documents that will support GMPLS. In addition to these documents, several manufacturers are producing their own proposals and recommendations.


Does that mean that GMPLS will never get off the ground? Not at all. With the endorsements of Optical Domain Service Interconnect Coalition (ODSI; www.odsi-coalition.com/) and the Optical Internetworking Forum (OIF; www.oiforum.com/), it is off to a great start.

The benefits are great. These advances in networking mean savings for carriers. With GMPLS, the two separate paths of datacom and telecom have converged.

Knowledge Review 

Answer the following questions.

1. What are the six top-level functions of GMPLS?
2. What does reducing the number of interface layers do in a network?
3. What does MPλS allow in an optical network?
4. Which protocols are being modified to support link-status dissemination in a GMPLS network?
5. What protocol extends the function of MPLS into an optical plane?

Answers: 1. Discovery of neighborhood resources, dissemination of link status, topology link-state management, path management and control, link management, and link protection; 2. reduces overall operational cost and improves packet efficiency; 3. allows for the designation of a color in a spectrum to an MPLS packet that can then be linked directly to an optical network; 4. OSPF and IS-IS; 5. Link Management Protocol.

Going Further

MPLSRC.COM: www.mplsrc.com/articles.shtml (see GMPLS)

IETF GMPLS architecture: http://search.ietf.org/internet-drafts/draft-ietf-ccamp-gmpls-architecture-01.txt

IETF GMPLS framework: http://search.ietf.org/internet-drafts/draft-many-ccamp-gmpls-framework-00.txt

Optical Signaling Systems by Scott Clavenna: www.lightreading.com/document.asp?site=lightreading&doc_id=7098&page_number=1

Vinay Ravuri: www.gmpls.org/


Chapter 7: Virtual Private Networks and MPLS

Introduction

In this chapter, we will present an overview of Virtual Private Networks (VPNs), explore MPLS VPN concepts, discuss MPLS L2-L3 VPNs and their configurations, and broaden your working knowledge by providing a practical case study and a VPN cost justification case study.


Introduction to Virtual Private Networks

You start a used bookstore called Rag-Tag Books. To control inventory and sales, you establish an internal network with internal IP addresses. As time goes by, your little bookstore expands, first past its existing walls, and then to a remote store within the same town. To connect the computers between the stores, you secure private dedicated data lines. You still have a private network.

As Rag-Tag Books becomes more and more popular, you find that you are opening branches all over town. Your store-to-store network becomes very complex, with dedicated private lines between all of your buildings in what is known as a “mesh network” (see Figure 7.1).

Figure 7.1: Mesh VPN Network


You need the privacy that dedicated lines give you, but the expense of private lines becomes a problem. You decide to look into a Virtual Private Network (VPN). A VPN is a network of computers that uses public carriers to interconnect the individual computers (see Figure 7.2).

Figure 7.2: VPN Using a Service Provider

Virtual Private Networks

In days gone by, companies would establish networks between their sites. They would own, manage, and control every aspect of these private networks. The ownership of their networks ensured a high level of security. As firms moved toward becoming more cost-conscious, they started to outsource network ownership and operations; however, the reduced operational costs came with increased security risk. Firms found that their data was exposed to being diverted, copied, read, and even played back.

To overcome these risks, the virtual private network (VPN) was born. A VPN is a network with the characteristics of a private network that offers protection from data modification, data interception or disclosure, and denial of service while operating on a public network (see Figure 7.3).

Figure 7.3: VPN Requirements

In order to build a secure VPN, firms with strict security requirements need to acquire special software and hardware that builds secure paths or tunnels from end to end. The software and hardware should allow for end-to-end data encryption; however, end-to-end encryption and security bring their own challenges and management issues (see Figure 7.4).

Figure 7.4: End-to-End VPN Using a Secure Tunnel

In a VPN, data is tunneled through a network. A tunnel is nothing more than an additional encapsulation process. The purpose of the tunnel is to forward the encapsulated traffic without needing to read the contents of the packets. In Figure 7.5, the basic VPN tunnel drawing illustrates the following steps:

Figure 7.5: Types of VPN Tunnels

1. The system on the left side (ingress) encrypts the data (when required and able to do so) and encapsulates it into a tunnel.

2. The tunnel carries the data within an encapsulated packet. In the drawing, the data is marked as “????”; it might be encrypted, depending on the type of tunnel selected and the security needs of the organization.

3. At the far right side (egress), the system removes the data from the tunnel and removes the encryption (if the packet was encrypted), delivering the data back to the end user.

Regardless of which tunnel protocol is used, we can simplify the understanding of VPNs by knowing that VPNs establish some type of tunnel from end to end. The characteristics of the tunnels that are used will determine operations and security features of the VPN.
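The essential mechanics can be sketched in a few lines of Python. The 4-byte header and the tunnel ID here are invented for illustration only; real tunnels use GRE, IP-IP, IPSec, L2TP, or MPLS headers instead.

```python
# Toy illustration of tunneling as "additional encapsulation":
# the ingress prepends a header, the network forwards the frame
# without reading the payload, and the egress strips the header.

import struct

def encapsulate(tunnel_id, payload):
    """Ingress: prepend an illustrative 4-byte tunnel header."""
    return struct.pack(">I", tunnel_id) + payload

def decapsulate(frame):
    """Egress: strip the header and recover the original payload."""
    (tunnel_id,) = struct.unpack(">I", frame[:4])
    return tunnel_id, frame[4:]

frame = encapsulate(42, b"user data")
tid, data = decapsulate(frame)
assert (tid, data) == (42, b"user data")
```

Every protocol in the list that follows is, at heart, a more capable version of this prepend-forward-strip cycle, with authentication or encryption layered on where the protocol supports it.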

133

Page 134: MPLS Training Guide Book

There are several different tunneling options available, with and without encryption. Some examples of VPN tunnel protocols are Generic Routing Encapsulation (GRE), IP-in-IP (IP-IP), IP Security (IPSec), Layer-2 Tunneling Protocol (L2TP), and MPLS.

Generic Routing Encapsulation and IP-IP

These tunnel protocols are used strictly for encapsulation. They are not encryption tunnels, so they offer very limited security. The GRE tunnel provides limited authentication, while the IP-IP tunnel provides no authentication.

The GRE header is 16 words, consisting of flags, routing, acknowledgement, and source sequence numbers. Figure 7.6a shows the GRE header, and Figure 7.6b shows an IP datagram encapsulated in a GRE tunnel. For a detailed explanation of the fields contained within the GRE header, see RFCs 1701 and 2890.

Figure 7.6: GRE Header and GRE Packet

The IP-IP tunnel encapsulates IP packets in an additional IP header. It performs no authentication or encryption (see Figure 7.7).
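As a sketch of how little IP-IP adds, the following Python builds a minimal outer IPv4 header (protocol number 4, meaning IP-in-IP) around an inner packet. The addresses are illustrative and the checksum is left at zero for brevity:

```python
import struct, socket

def ipip_encapsulate(inner_packet, src, dst):
    """Prepend a minimal outer IPv4 header (protocol 4 = IP-in-IP)."""
    total_len = 20 + len(inner_packet)
    outer = struct.pack(
        "!BBHHHBBH4s4s",
        0x45,                    # version 4, IHL 5 (20-byte header)
        0,                       # TOS
        total_len,
        0, 0,                    # identification, flags/fragment offset
        64,                      # TTL
        4,                       # protocol 4 = encapsulated IP
        0,                       # checksum (left at zero in this sketch)
        socket.inet_aton(src),
        socket.inet_aton(dst),
    )
    return outer + inner_packet

inner = b"\x45" + bytes(39)      # placeholder for a complete inner IP packet
pkt = ipip_encapsulate(inner, "192.0.2.1", "198.51.100.1")
assert len(pkt) == 20 + len(inner)
assert pkt[9] == 4               # outer protocol field says "IP in IP"
assert pkt[20:] == inner         # inner packet carried unchanged: no encryption
```

The final assertion is the security story in one line: the inner packet crosses the network byte-for-byte unchanged.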

Figure 7.7: IP-IP Header


IP Security (IPSec)

The IPSec protocol was developed as a security standard for Internet traffic. It can operate in two modes: tunnel (see Figure 7.8a) and transport (see Figure 7.8b). In transport mode, only the IP payload is protected and the original IP header remains visible; in tunnel mode, the entire original packet, header included, is encapsulated (and, with ESP, encrypted) inside a new IP packet.

Figure 7.8: IPSec Frame

IPSec tunnel-mode tunnels have lower overhead and higher performance than running IPSec inside tunnels created some other way; however, they are usable only for IP traffic.

IPSec offers very powerful encryption algorithms to protect IP data. In tunnel mode, even the original endpoint addresses are hidden inside the encrypted payload. For a detailed explanation of IPSec, refer to RFC 2401.
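The practical difference between the two modes can be sketched in Python (a toy model: XOR stands in for real ESP encryption, and the headers are simplified strings):

```python
def esp_encrypt(data, key):
    return bytes(b ^ key for b in data)       # stand-in for real ESP crypto

def transport_mode(ip_header, payload, key):
    # Original IP header stays in the clear; only the payload is protected.
    return ip_header + b"[ESP]" + esp_encrypt(payload, key)

def tunnel_mode(ip_header, payload, key, new_header):
    # The entire original packet (header included) is protected; a new
    # outer header carries the packet between the tunnel endpoints.
    return new_header + b"[ESP]" + esp_encrypt(ip_header + payload, key)

pkt_t = transport_mode(b"src=10.0.0.1", b"secret", 0x7F)
pkt_u = tunnel_mode(b"src=10.0.0.1", b"secret", 0x7F, b"src=gw1")
assert b"src=10.0.0.1" in pkt_t       # transport: endpoints visible on the wire
assert b"src=10.0.0.1" not in pkt_u   # tunnel: endpoints hidden
```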

Layer-2 Tunnel Protocol, Version 3 (L2TPv3)

The Layer-2 tunnel protocol (L2TP) has several versions; the version currently in use is version 3 (L2TPv3).

This protocol tunnels Layer-2 information across Layer-3 networks. L2TPv3 does not have built-in security, but can work with other VPN protocols, including IPSec.

As shown in Figure 7.9, the L2TPv3 header consists of a session ID and a cookie. Details on L2TP can be found in RFC 2661; L2TPv3 itself was later specified in RFC 3931.
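As an illustration of Figure 7.9, the L2TPv3-over-IP data header can be sketched as a 32-bit session ID followed by an optional cookie of 0, 4, or 8 bytes (a sketch of the header layout only, not a working implementation of the protocol):

```python
import struct

def l2tpv3_header(session_id, cookie=b""):
    """Build an L2TPv3-over-IP data header: 32-bit session ID + optional cookie."""
    if len(cookie) not in (0, 4, 8):        # cookie is 0, 4, or 8 bytes
        raise ValueError("cookie must be 0, 4, or 8 bytes")
    return struct.pack("!I", session_id) + cookie

hdr = l2tpv3_header(0x12345678, cookie=b"\xde\xad\xbe\xef")
assert len(hdr) == 8
assert hdr[:4] == b"\x12\x34\x56\x78"
```

The cookie is the header's only protection: a random value that makes blind packet injection into a session harder, which is why L2TPv3 is usually paired with IPSec when real security is needed.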


Figure 7.9: L2TPv3

MPLS Tunnels

MPLS is a tunneling and switching technology. The switched tunnels that MPLS offers provide for multiplexing or aggregation, QoS assignment, and traffic engineering. MPLS tunnels can be used standalone or in conjunction with the previously covered VPN tunnel protocols; MPLS over GRE tunnels, in particular, has seen wide acceptance.

MPLS offers a variety of VPN services that will be covered in greater detail further into the book. MPLS offers both end-to-end and edge-to-edge tunnels. Figure 7.10 illustrates MPLS with an end-to-end tunnel.

Figure 7.10: MPLS Tunnel and Frame

Now that we have briefly discussed several prominent tunnel methods, let's look at the features of each in relation to the others. Table 7.1 gives an overview of the tunnel protocols. Reviewing the table, we see that if full security is needed, then IPSec might be the tunnel of choice. However, if multiplexing is needed, then L2TPv3 might be an alternative choice.

Table 7.1: Comparison of Tunnel Protocols

Feature         GRE   IP-IP   IPSec     L2TPv3   MPLS
Encryption      N     N       Y (ESP)   N        N
Authentication  Y     N       Y         N        N
Multiplexing    Y     N       N         Y        Y
QoS             N     N       N         Y        Y

VPN Models

The Overlay Model

In Figure 7.11, we see an example of what is called the customer-equipment-to-customer-equipment model (CE to CE); it is also referred to as the overlay model. IP VPN traffic is overlaid onto end-to-end tunnels. Frame Relay (FR) and ATM services are two examples of the overlay model. The IP protocol is tunneled from CE to CE (or overlaid) on top of Layer-2 carriers, where these carriers maintain virtual backbones for the VPNs. In Figure 7.11, we see how customer sites 1, 2, 3, and 4 (Blue) are connected via tunnels. The data is encapsulated so that the IP data is not exposed across the networks.

Figure 7.11: VPN Overlay Model

In Figure 7.12, we see that adding an additional customer (Bold) introduces a level of complexity. Configuration engineers and network managers must keep the traffic of the Blue customer separate from the traffic of the Bold customer and vice versa.


Figure 7.12: Hub-and-Spoke Configuration with 4 Sites (Original Configuration)

The overlay model can offer the ultimate in security, but it is not without its challenges:

A company has two choices when using this option: to manage and maintain its own tunnels, or to allow its service provider to manage the tunnels for it. In either case there is a cost for maintaining the tunnels and encryption keys.

As the number of sites within the network grows, the complexity of hardware and software increases, which in turn increases the cost of maintenance and configuration.

Hardware and capital expenditures are also an issue. For a customer with n sites in a full mesh, each site must maintain tunnels to the other n-1 sites. When adds, moves, and changes are made to the configuration, each site must be reconfigured.
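The scaling difference between the two designs can be quantified: a hub-and-spoke overlay needs only one new tunnel when a site is added, while a full mesh of n sites needs n(n-1)/2 tunnels and a new site touches every existing site. A quick sketch:

```python
def hub_and_spoke_tunnels(n):
    return n - 1                    # each spoke has one tunnel to the hub

def full_mesh_tunnels(n):
    return n * (n - 1) // 2         # one tunnel per pair of sites

# Adding Site 5 to the 4-site examples in Figures 7.13 and 7.14:
assert hub_and_spoke_tunnels(5) - hub_and_spoke_tunnels(4) == 1  # hub + new site
assert full_mesh_tunnels(5) - full_mesh_tunnels(4) == 4          # every site touched
```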

Let's look at two examples of how Site 5 can be added. For a hub-and-spoke design, only Site 1 and Site 5 change. Using the example in Figures 7.12–7.14, we are going to add Site 5 to the configuration. Currently the table is as follows:

Site 1 talks to Sites 2, 3, 4 (Blue only)

Site 2 talks to Site 1 (Blue only)

Site 3 talks to Site 1 (Blue only)

Site 4 talks to Site 1 (Blue only)


Figure 7.13: Adding Site 5 with a Hub and Spoke Design

Figure 7.14: Adding Site 5 with a Fully Meshed Network

The following illustrates the modifications that are required for adding Site 5:

Add Site 5 (Blue)

Site 1 talks to Sites 2, 3, 4, 5 (Blue only)

Site 5 talks to Site 1 (Blue only)

Notice that with a full-mesh configuration the complexity grows.

The following is the configuration before Site 5 is added.

Site 1 talks to Sites 2, 3, 4 (Blue only)

Site 2 talks to Sites 1, 3, 4 (Blue only)

Site 3 talks to Sites 1, 2, 4 (Blue only)

Site 4 talks to Sites 1, 2, 3 (Blue only)


The configuration at every site must be modified as follows in order to communicate with the new Site 5.

Add Site 5 (Blue)

Site 1 talks to Sites 2, 3, 4, 5 (Blue only)

Site 2 talks to Sites 1, 3, 4, 5 (Blue only)

Site 3 talks to Sites 1, 2, 4, 5 (Blue only)

Site 4 talks to Sites 1, 2, 3, 5 (Blue only)

Site 5 talks to Sites 1, 2, 3, 4 (Blue only)

VPN Peer Model (MPLS)

Because of the cost of ownership and complexity of implementation, many firms choose to trust their carrier with maintaining data integrity. These networks are known as "trusted VPNs"; networks with encryption are known as "secure VPNs." In trusted VPNs, the sites are linked to the provider edge (PE) equipment via dedicated links or leased lines, and tunnels are built from PE to PE (see Figure 7.15). The peer model is also called router adjacency.


Figure 7.15: Peer Model

In the datacom marketplace, IP traffic is the primary network transport. Carriers wishing to provide IP VPNs offered Frame Relay (FR) and ATM. These solutions are good, but often not scalable or cost-effective enough to be viable for many businesses.

MPLS VPN Topologies

As MPLS developed, it became apparent that MPLS VPNs could provide a flexible VPN solution to service providers and ISPs alike. In order to meet these needs, the Layer-3 MPLS VPN was developed.

MPLS VPN topology and design has developed rapidly. Over the past few years, MPLS VPNs have grown from Layer-3 VPNs into a variety of options, including any-to-any protocols such as Cisco's AToM (Any Transport over MPLS). The proliferation of VPNs at both Layer-2 and Layer-3 has led to much confusion. Figure 7.16 represents a top-layer drawing of the VPN tree showing the relationship of Layer-2 and Layer-3 VPNs. The drawing shows current relationships at the time of printing; however, it should continue to serve as a top-level view of MPLS VPNs.

Figure 7.16: The Layers of VPNs (The VPN Tree)

As you can see from the drawing, there are many VPN solutions. We will work our way through this chart to discover how MPLS developed historically, define its concepts, and identify the benefits and challenges of each of the various MPLS VPN solutions.

Layer 3 MPLS VPNs

Before we talk about the details of MPLS Layer-3 VPNs, we need to introduce a model of VPN called the Peer Model. In the Peer Model, the PE and CE are peers, i.e., they exchange IP routing information in a peer-to-peer relationship. The VPN tunnels are established in the core of the MPLS network.

In Figure 7.15, we saw that the CE communicates with the PE using standard routing, which could be static routes or a dynamic IGP. Within the core of the network, tunnels are established between PEs. At the egress end, the PE and CE again communicate as peers. For the time being, we will not cover the particular method of tunnel building inside the core of the network, since this method can vary depending upon the customer's needs and which vendor's system is selected.

Referring again to the MPLS VPN tree (Figure 7.16), we see the MPLS Layer-3 VPNs on the left of the drawing with two modes under it: RFC 2547 and Virtual Routing.

RFC 2547

RFC 2547 has a simple data flow. Figures 7.17 and 7.18 show the flow of data across a Layer-3 RFC 2547 VPN. Native IP data is sent to PE1. PE1 adds the VPN label, which maps the traffic to the interface and customer. PE1 then assigns an LSP and an MPLS label for that path.
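The label imposition just described can be sketched as a toy model (label values are illustrative): the inner VPN label identifies the customer, the outer label identifies the LSP, and core routers swap only the outer label.

```python
def impose_labels(ip_packet, vpn_label, lsp_label):
    """Model the RFC 2547 label stack: outer LSP label, inner VPN label, payload."""
    return [lsp_label, vpn_label, ip_packet]   # bottom of stack is the packet

def core_swap(stack, new_lsp_label):
    stack = stack.copy()
    stack[0] = new_lsp_label                   # P routers swap only the outer label
    return stack

pkt = impose_labels(b"native-ip", vpn_label=201, lsp_label=17)
pkt = core_swap(pkt, 42)
assert pkt == [42, 201, b"native-ip"]          # VPN label untouched in the core
```

This is why the core stays simple: transit routers never look past the outer label, and the VPN label only matters to the egress PE.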


Figure 7.17: RFC 2547 Data Flow, Steps 1 and 2

Figure 7.18: RFC 2547 Data Flow, Steps 3 and 4

A virtual fully meshed network is established across the core using extensions to BGP. This simplifies the core routing exchange, and service providers are able to use a protocol with which they are familiar.

In Figures 7.19 and 7.20, we see that the routing scheme works as follows: The CE is connected to a PE and exchanges routing tables using any one of a number of interior routing protocols, including RIP, OSPF, IBGP, and EIGRP. The routing tables are sent to the far-end PEs via BGP. At the far end, the PE routers send the forwarding tables to the CE routers.


Figure 7.19: RFC 2547 Routing Exchange, Steps 1 and 2

Figure 7.20: RFC 2547 Routing Exchange, Steps 3 and 4

A challenge arises when sites use private internal IP addresses (as permitted by RFC 1918). In order to keep these sites separate, the PEs add a route distinguisher to the IP address, as illustrated in Figure 7.21.
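The idea can be sketched in a few lines: two customers may both use the same RFC 1918 prefix, and prefixing a route distinguisher (RD) makes the resulting VPN routes distinct (the RD values here are illustrative):

```python
def vpn_ipv4(rd, prefix):
    """Prepend a route distinguisher to make an otherwise ambiguous prefix unique."""
    return f"{rd}:{prefix}"

blue = vpn_ipv4("65000:1", "10.1.1.0/24")   # customer Blue
bold = vpn_ipv4("65000:2", "10.1.1.0/24")   # customer Bold, same private prefix
assert blue != bold                          # same IP prefix, distinct VPN routes
```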


Figure 7.21: Independent IP Address

Figure 7.22: Independent IP Address with Route Distinguisher

RFC 2547 has gained significant community acceptance. However, there are two shortfalls to this procedure (see Table 7.2):

1. The customer's routing tables are exposed to the service provider.

2. Since all connections are advertised using one instance of BGP across the core, a misconfiguration could result in exposure of competitors' information and data.

Table 7.2: RFC 2547 Summary

Virtual Routing

An alternative to RFC 2547 is the Virtual Router (VR) architecture (Figure 7.23). In this architecture, each PE maintains a separate virtual router, with its own forwarding table, for each VPN. Fully meshed tunnels are advertised across the core between these virtual routers.

Figure 7.23: Virtual Routing Network Drawing

Table 7.3 illustrates how the VR approach establishes tunnels between each site. The core of the MPLS network does not combine data from several sites. Since the data is kept separate, this design has the added benefit of additional security, in that a misconfiguration will not impact the security of the data. The downside of this design could prove to be scalability and the need for complex configuration.

Table 7.3: Virtual Routing Summary


In summary, Virtual Routing has some of the advantages of RFC 2547 and it provides security against accidental misrouting in the core network. The tradeoff is that the configuration is more complex.

Layer-2 VPNS

Businesses in the marketplace found that Layer-3 VPNs met only part of the end users’ requirements. Back in the early days of MPLS implementation, early adopters of the technology discovered that there was a market demand for Layer-2 VPNs as well.

Layer-3 VPNs worked well for a number of customers; however, there was a significant percentage of the marketplace using legacy systems and networks for whom a Layer-2 VPN solution would be better suited. As these needs were identified, different architectures were suggested for MPLS Layer-2 VPNs, including Virtual Private Wire Service (VPWS) and Virtual Private LAN Services (VPLS).

Referring to the VPN Tree (Figure 7.24), we can see that there are several types of Layer-2 VPNs. We will first address VPWS, VPLS, and IPLS followed by the supportive technology under each category.

Figure 7.24: VPN Tree

Virtual Private Wire Service (VPWS)

VPWS is a strong market alternative to FR and ATM services. In VPWS, the service provider provisions a pseudo-wire across the network. This overlay model provides circuit emulation from customer to customer. It provides services similar to ATM and FR; however, significant cost savings can be realized using MPLS VPWS over these alternatives. A functional top-level view of VPWS is provided in Figure 7.25.

Figure 7.25: VPWS Network Design

For MPLS carriers wishing to capture the FR and ATM market place, VPWS offers rapid service conversion. Customers will be able to maintain their FR or ATM connection with the same equipment. The difference is that traffic will now be carried encapsulated in an MPLS header and run over an MPLS network.

In the summary of VPWS (see Table 7.4), we find that VPWS provides substantial Layer-2 traffic compatibility. It interfaces well with FR, ATM, HDLC, and PPP. There remain some questions regarding scalability.

Table 7.4: VPWS Summary

Virtual Private LAN Service (VPLS)

In this version of MPLS VPN solutions, we find great use for metro and extended-campus applications. Ethernet LANs interface to the CE, and the MPLS core acts like a Layer-2 bridge. In a VPLS, the MPLS core can connect sites on a point-to-multipoint basis. The PE at the ingress side simply examines Layer-2 addresses and forwards frames to the PE on the egress side based upon Layer-2 switching (bridging) tables, as seen in Figure 7.26.
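The bridge-like forwarding can be sketched as a toy MAC-learning table: unknown destinations are flooded to all remote sites, known ones are forwarded point-to-point (site and MAC names are illustrative):

```python
class VplsBridge:
    """Toy VPLS forwarder: learn source MACs, flood unknown destinations."""
    def __init__(self, sites):
        self.sites = set(sites)
        self.mac_table = {}                      # MAC -> site

    def forward(self, src_mac, dst_mac, ingress_site):
        self.mac_table[src_mac] = ingress_site   # learn where src lives
        if dst_mac in self.mac_table:
            return {self.mac_table[dst_mac]}     # known: send to one site
        return self.sites - {ingress_site}       # unknown: flood to the rest

b = VplsBridge({"site1", "site2", "site3"})
assert b.forward("aa", "bb", "site1") == {"site2", "site3"}   # flood
b.forward("bb", "aa", "site2")                                 # bridge learns bb
assert b.forward("aa", "bb", "site1") == {"site2"}             # now point-to-point
```

The flood-until-learned behavior is also the scalability concern the summary table raises: MAC tables and flooded traffic grow with the size of the extended LAN.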


Figure 7.26: VPLS Network

In Table 7.5, we see that VPLS acts like a big bridge. In this type of network design, we must consider the scalability of extended LAN services and the disadvantages of flat networks. However, VPLS offers advantages to switched campus environments.

Table 7.5: VPLS Summary

IP LAN Services (IPLS)

What if the customer site does not have a Layer-2 switch, but instead has a router? The simple VPLS solution will not work in that case, creating a place for an additional VPN service called IP LAN Service (IPLS). In this network design, the CE devices are routers, but instead of using the IP address at the ingress of the network, forwarding decisions are based upon the MAC (Layer-2) address (Figure 7.27).


Figure 7.27: IPLS Network

Table 7.6: IPLS Summary

IPLS operates in a very similar manner to VPLS, but it is reserved for IP traffic only. The customer's edge devices are routers, but the PEs evaluate Layer-2 addresses, making IPLS a Layer-2 VPN technology. Several different groups have predicted the marketplace for this service; these predictions vary from acquiring a significant market share to meeting the needs of less than 1% of the marketplace. Only time will tell.

Martini

When we talk about the different offerings for MPLS VPNs, the discussion of Martini versus Kompella always comes up, yet how can we compare apples to oranges? The Martini draft protocols are really two sets of protocols: a transport set and an encapsulation set. Together these form the Martini suite, which is designed primarily for point-to-point operation, offering a viable alternative to non-meshed FR and ATM networks.

A block diagram of Martini is shown in Figure 7.28. The incoming traffic is mapped to an assigned VC word and a control word, as shown in Figures 7.29 and 7.30. The unique feature of Martini is not the VC word but the control word, which allows MPLS to carry the L2 control bits as they would be carried in FR or ATM.


Figure 7.28: Martini Block Diagram

Figure 7.29: Martini Header

Figure 7.30: Martini Header (Detailed)


Figure 7.30 also shows the data flow across an MPLS Martini network. Notice in the Martini header details of Figures 7.28 and 7.29 that there is a portion reserved for signaling protocols. This allows the MPLS network to carry standard ATM and FR signals across the core. Martini supports and preserves FR markings and tags, including FECN, BECN, and DE. It also allows for QoS marking.

As shown in Figure 7.31, LDP establishes paths that are akin to multi-lane highways, or tunnels. The VPNs travel down lanes, or virtual channels (VCs), within the highways.

Figure 7.31: Martini Tunnels

This methodology requires an MPLS label for the LDP tunnel, another label for the VC, and an optional control word.
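Each of those labels is a standard 32-bit MPLS stack entry (20-bit label, 3 experimental bits, a bottom-of-stack bit, and an 8-bit TTL, per RFC 3032). Here is a sketch of building the Martini stack, with illustrative label values and an all-zero control word:

```python
import struct

def label_entry(label, exp=0, bos=0, ttl=64):
    """Encode one 32-bit MPLS label stack entry (layout per RFC 3032)."""
    word = (label << 12) | (exp << 9) | (bos << 8) | ttl
    return struct.pack("!I", word)

# Martini stack: outer tunnel (LDP) label, then the VC label (bottom of
# stack), then the optional control word carrying L2 bits, then the payload.
tunnel = label_entry(1000)
vc = label_entry(2000, bos=1)
control_word = struct.pack("!I", 0)      # flags/length/sequence all zero here
frame = tunnel + vc + control_word + b"l2-payload"

assert struct.unpack("!I", vc)[0] >> 12 == 2000      # VC label recovered
assert (struct.unpack("!I", vc)[0] >> 8) & 1 == 1    # bottom-of-stack bit set
assert vc[3] == 64                                   # TTL byte
```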

For a Layer-2 protocol, tunnel-mode routing is not required. Martini L2 VPNs are scalable and do not involve complex routing issues; however, they do involve relatively complex configurations, especially for fully meshed networks. Martini is best suited for point-to-point operations, as shown in the Martini summary chart (see Table 7.7).

Table 7.7: Martini Summary

Kompella

Kompella’s Draft RFCs support the following Layer-2 protocols: ATM, HDLC, Ethernet, Ethernet VLANs, Frame Relay, and PPP.


The core network must first establish routing and switching, and a full mesh of connections must be made with extended BGP, as shown in Figure 7.32. The PEs communicate with other PEs using BGP to exchange routing and switching tables.

In Kompella's protocol suite, the flow of data across the network looks like most MPLS tunneling protocols. In this case (Figures 7.33 and 7.34), the flow starts at PE-1, where a packet is received on a sub-interface and is mapped to both an LSP and the egress PE (PE-2). PE-2 learns the routes of CE-2 via BGP.

Figure 7.32: Kompella Routing and Forwarding Exchange

Figure 7.33: Kompella Data Flow, Steps 1 and 2


Figure 7.34: Kompella Data Flow, Steps 3 and 4

Kompella's battle cry is "keep it simple." Among all the offerings, Kompella's is the simplest to establish. In Table 7.8, we see that this protocol offers good features with less complexity than other protocols.

Table 7.8: Kompella Summary

AToM

The IETF defined an architecture for Layer-2 VPNs, and Cisco developed a product to meet those requirements in Any Transport over MPLS (AToM). When a "datacommer" first hears of AToM, the acronym may be confusing because the OSI model has a Transport layer at Layer 4. However, AToM has nothing to do with OSI Layer 4. It is designed to carry any Layer-2 traffic across the network and connect with any other Layer-2 traffic.

AToM uses a pseudo-wire concept to link Layer-2 protocols across the MPLS core. It can link TDM circuits, Frame Relay, ATM and Ethernet. Currently, AToM supports the following protocols:

ATM AAL5 over MPLS (AAL5oMPLS)

ATM Cell Relay over MPLS (CRoMPLS)

Ethernet/VLAN over MPLS (EoMPLS)


PPP/HDLC over MPLS (PPP/HDLCoMPLS)

In the near future, with the release of Phase Two, AToM will support any-to-any connections that will allow dissimilar protocols to interwork; for example, Frame Relay will be able to talk to ATM. This service will be provided by using emulation.

Figure 7.35 provides a block diagram of an AToM network. The data is carried in packets called protocol data units (PDUs). These PDUs are encapsulated at the ingress PE and forwarded to the egress PE. The pseudo-wire carries all the data and supervisory information needed to provide Layer-2 services.

Figure 7.35: AToM Network

Point-to-point emulation is the focus of the pseudo-wire standards and the key to AToM services. With a basic understanding of the Martini standards, we can see how AToM works.

In the summary chart provided in Table 7.9, we can see that the major advantage of AToM is that it is a point-to-point, any-to-any protocol that will be able to accommodate service providers' needs in years to come. The major disadvantage is that it is vendor proprietary.

Table 7.9: AToM Summary


Comparing VPN Types

We have now reviewed the setup characteristics and data flow of Layer-3 and Layer-2 VPNs. Having examined the leading Layer-3 and Layer-2 VPN protocols, let's look at their respective advantages and disadvantages in the real world.

Each MPLS VPN solution was designed to satisfy a perceived market need. At first, many thought that Layer-3 VPNs would satisfy marketplace demands; but, as time passed, the world of Layer-2 transports became an untapped market, and thus several approaches to Layer-2 VPNs were developed. We should resist comparing one standard against the other, because what matters is whether any given standard can satisfy the end customer's needs, whether that customer is a telephone carrier, an ISP, or an enterprise network. Does the customer have a legacy system with SNA polling, or do they need a new approach to networking from the start? So when reviewing the chart below, we should keep in mind that the best MPLS VPN solution is the one that satisfies the customer's requirements.


Practical Application

Now that we have seen the composition of the major types of MPLS VPNs, let’s have a look at some practical details of implementation. Applications vary from vendor to vendor, but there are generally three to four steps involved in establishing a Layer-3 VPN.

The steps to establishing an L3 VPN are as follows:

1. Establish the core MPLS network.
2. Establish the MPLS tunnel.
3. Create and map policies.
4. Test and monitor.

Example 1: Layer-3 Encapsulation in MPLS

The following application example was provided by gracious permission of Riverstone Networks.

Packet Processing for L3 Encapsulation

RSVP-TE initiates the required tunnels through the core network to interconnect the required points of presence. These tunnels are created using any of the methods supported by RSVP-TE: explicitly routed with resource reservations, or hop-by-hop. Once the tunnels link the edge routers, policy on the ingress router is used to classify and map the inbound traffic to a specific tunnel. All Layer-2 information is stripped, and the native IP packet is encapsulated in the MPLS label for the specified tunnel and forwarded accordingly (see Figure 7.36).

Figure 7.36: Example Network (1)

Creating the L3-FEC with Policy

The policy statements on the ingress MPLS router use information in the protocol header to classify inbound packets into an FEC. Policy is not limited to the ability to classify purely on source or destination IP address. Classification can look deep into the header information to include Source Socket, Destination Socket, Protocol, and Type of Service, as well as the base IP information. How the profile is created will depend on the end goal. For example, classifying based on prefix would map all packets from or to a prefix to a specific LSP. If more granularity is required, the Layer-4 socket information could be used to differentiate application types.

The following is a list of the classification criteria:

RS(config)# mpls create policy <name> <classification>

dst-ipaddr-mask - Destination IP address and mask

dst-port - Destination TCP/UDP port number

proto - Protocol

src-ipaddr-mask - Source IP address and mask


src-port - Source TCP/UDP port number

tos - Type of Service

tos-mask - The mask used for the TOS byte; default is 30
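In rough form, the classification-and-mapping step can be sketched in Python (the field names, matching rules, and LSP names here are illustrative, not the vendor's implementation):

```python
from ipaddress import ip_address, ip_network

def make_policy(lsp, dst_net=None, proto=None, dst_port=None):
    """A policy matches header fields (None = wildcard) and names an LSP."""
    def matches(pkt):
        if dst_net and ip_address(pkt["dst"]) not in ip_network(dst_net):
            return False
        if proto and pkt["proto"] != proto:
            return False
        if dst_port and pkt["dst_port"] != dst_port:
            return False
        return True
    return (matches, lsp)

policies = [
    make_policy("LSP12", dst_net="172.16.2.0/24"),                # by prefix
    make_policy("LSP13", dst_net="172.16.3.0/24", dst_port=443),  # finer grain
]

def classify(pkt):
    for matches, lsp in policies:
        if matches(pkt):
            return lsp        # first matching policy wins in this sketch
    return None               # no policy: fall back to normal routing

assert classify({"dst": "172.16.2.9", "proto": "tcp", "dst_port": 80}) == "LSP12"
assert classify({"dst": "172.16.3.9", "proto": "tcp", "dst_port": 443}) == "LSP13"
assert classify({"dst": "192.0.2.1", "proto": "udp", "dst_port": 53}) is None
```

The second policy shows the granularity point made above: adding Layer-4 port matching lets different application types within the same prefix ride different LSPs.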

Once the policy is created, it must be associated with one of the existing RSVP-TE LSPs.

RS(config)# mpls set label-switched-path <name> policy

It is important to realize that this is not a virtual-router solution. All edge routers participate in the native IP network to which they are connected; this means the route tables must have destination-based information for packets that must be forwarded from the MPLS core to the native IP network in which they reside. The core network does not have to be aware of any IP information that is not part of the MPLS core network; it is shielded from having to know those routes. Again, the beauty of using a dynamic signaling protocol is that the configuration of the core network remains unchanged as new LSPs and policies are applied at the edge (see Figure 7.37).

Figure 7.37: Network Diagram (2)

Steps to Delivering IP over MPLS

IP over MPLS can be summarized in three simple steps:

1. Build the MPLS core network.
   - An IGP is required to distribute reachability information, with traffic engineering (optional, but recommended).
   - MPLS and RSVP are required on all core-facing interfaces, edge and transit.
2. Edge routers initiate the tunnels using RSVP-TE.
3. Define the L3-FEC, via policy, on the edges of the network and map it to the appropriate LSP.

Once again, it is important to remember that MPLS networks are unidirectional and need complementary label-switched paths running in each direction to facilitate bi-directional communications.

IP over MPLS Example

Using this network, the key configuration components will be described using the previous steps (see Figure 7.38).


Figure 7.38: MPLS Block Diagram (3)

All core transit label-switch routers have the IP knowledge for core reachability, using either OSPF-TE or IS-IS-TE. The native IP routing information outside the MPLS core is not present in the core routing tables. MPLS must also be enabled on all core interfaces. A sample transit router configuration is presented below. All transit routers follow this same basic configuration, with the obvious deviations, like IP addressing on interfaces.

interface create ip Core1-LER1 address-netmask 192.168.1.1/30 port gi.3.1
interface create ip Core1-Core2 address-netmask 192.168.1.9/30 port gi.3.2
interface create ip Core1-Core3 address-netmask 192.168.1.13/30 port gi.4.2
interface add ip lo0 address-netmask 1.1.1.1/32
ip-router global set router-id 1.1.1.1
ospf create area backbone
ospf add interface Core1-LER1 to-area backbone
ospf add interface Core1-Core2 to-area backbone
ospf add interface Core1-Core3 to-area backbone
ospf add stub-host 1.1.1.1 to-area backbone cost 10
ospf start
mpls add interface all
mpls start
rsvp add interface all
rsvp start
ospf set traffic-engineering on

Once the core MPLS network has been established, the edge routers need to be configured to participate in both the core MPLS network and the native IP networks. Core-facing interfaces require MPLS and RSVP-TE to be enabled. The interfaces facing the native IP network participate in the IP-only network; no MPLS or RSVP-TE support is required there. A sample configuration for one of the three edge routers is presented here:

interface create ip To-Core address-netmask 192.168.1.2/30 port gi.3.2

interface create ip To-NativeIP address-netmask 172.16.1.1/24 port gi.4.2

interface add ip lo0 address-netmask 3.3.3.3/32


ip-router global set router-id 3.3.3.3

ospf create area backbone

ospf add interface To-Core to-area backbone

ospf add stub-host 3.3.3.3 to-area backbone cost 10

ospf start

mpls add interface To-Core

mpls start

rsvp add interface To-Core

rsvp start

ospf set traffic-engineering on

The options for signaling the RSVP-TE tunnel from the edge network may include the use of traffic engineering, backup paths, or simply hop-by-hop routing. The simple example below initiates the end-to-end RSVP-TE tunnel without traffic engineering, using loopback addresses as the tunnel start and end points.

After the tunnels have been configured, the policy is defined and associated to an LSP.

Here, three sites are interconnected by LSPs, and policy maps traffic to the appropriate LSP using a match on the destination IP address. The policies are written in such a way as to allow any non-local traffic to be mapped to the LSP that has a connection to that remote site. None of the routes from any of the native IP networks are found in any routers other than the edge router to which they belong.

The IP addresses located at each site are important to note in order to understand how this policy has been written (see Table 7.10).

Table 7.10: IP Addresses at Each Site

Site Number   IP Address Within Site
1             172.16.1.0/24
2             172.16.2.0/24
3             172.16.3.0/24

LER1:

mpls create label-switched-path LSP12 adaptive from 3.3.3.1 to 3.3.3.2
mpls create label-switched-path LSP13 adaptive from 3.3.3.1 to 3.3.3.3
mpls create policy Sub1Site2 dst-ipaddr-mask 172.16.2.0/24
mpls create policy Sub1Site3 dst-ipaddr-mask 172.16.3.0/24
mpls set label-switched-path LSP12 policy Sub1Site2
mpls set label-switched-path LSP13 policy Sub1Site3

LER2:

mpls create label-switched-path LSP21 adaptive from 3.3.3.2 to 3.3.3.1
mpls create label-switched-path LSP23 adaptive from 3.3.3.2 to 3.3.3.3
mpls create policy Sub1Site1 dst-ipaddr-mask 172.16.1.0/24
mpls create policy Sub1Site3 dst-ipaddr-mask 172.16.3.0/24
mpls set label-switched-path LSP21 policy Sub1Site1
mpls set label-switched-path LSP23 policy Sub1Site3

LER3:

mpls create label-switched-path LSP31 adaptive from 3.3.3.3 to 3.3.3.1
mpls create label-switched-path LSP32 adaptive from 3.3.3.3 to 3.3.3.2
mpls create policy Sub1Site1 dst-ipaddr-mask 172.16.1.0/24
mpls create policy Sub1Site2 dst-ipaddr-mask 172.16.2.0/24
mpls set label-switched-path LSP31 policy Sub1Site1
mpls set label-switched-path LSP32 policy Sub1Site2

If an inbound packet from the native IP network does not match a policy, normal routing rules apply: destination-based longest-prefix match. However, in the case above, since the remote prefixes are local to their own sites and are not propagated to the rest of the network, such packets are discarded.
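That fallback can be sketched as a longest-prefix-match lookup over whatever routes the edge router actually holds (a toy model with a hypothetical local table):

```python
from ipaddress import ip_address, ip_network

def lpm_lookup(routes, dst):
    """Return the next hop of the longest matching prefix, or None (discard)."""
    addr = ip_address(dst)
    best = None
    for prefix, next_hop in routes.items():
        net = ip_network(prefix)
        if addr in net and (best is None or net.prefixlen > best[0]):
            best = (net.prefixlen, next_hop)
    return best[1] if best else None

# The edge router holds only its local site's prefix (hypothetical table):
local_routes = {"172.16.1.0/24": "To-NativeIP"}
assert lpm_lookup(local_routes, "172.16.1.7") == "To-NativeIP"   # routed locally
assert lpm_lookup(local_routes, "198.51.100.1") is None          # no route: discard
```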

Related Show Commands

Some useful show commands are presented in this section.

A detailed look at the policies shows information about each policy, including the classification criteria and the label-switched path that is used when there is a match:

RS# mpls show policy <options>

LER# mpls show policy verbose

Name: Sub1Site2
Type: L3
Source address: anywhere
Source Port: any
Destination address: 172.16.2.0/24
Destination Port: any
TOS: any
TOS Mask:
Protocol: IP
Used by: LSP12

Practical Application Summary

This is only one vendor's setup for an MPLS VPN, but we can see that it mostly follows the same top-level steps:

1. Establish the core MPLS network.
2. Establish the MPLS tunnel.
3. Create and map policies.
4. Test and monitor.

Now, let's have a look at two case studies where we will have further opportunity to examine applications of MPLS VPNs.


Exercise 7.1: Case Study 1

In this case study, your team has been asked to explore the MPLS VPN options for a point of sales service.

The network structure is shown in Figure 7.39. Note that the VSAT terminals use a 64-Kbps (slotted ALOHA) upstream link; they receive 256 Kbps of data streamed down from headquarters. This network is a hub-and-spoke design. The network uses an SNA polled protocol, equipped with protocol spoofing to reduce polling over the satellite link.

Figure 7.39: Case Study 1

The satellite lease will be up for renewal in 60 days, with an increase in service fees of 60 percent. The customer is looking at an MPLS VPN as an alternative, cost-saving service.

1.  Is MPLS VPN service appropriate, and if so, why?

2.  Which MPLS VPN service is recommended?

3.  How do you justify your recommendation?

Answers

1.  MPLS VPN is best suited for fully meshed networks; a cost analysis would determine whether MPLS could produce a savings.

2.  None

3.  No recommendations


Practical Application: Implementing a VPN

The approach taken to implement a VPN varies between vendors. Whereas Dr. Yakov Rekhter of Juniper advocates the use of BGP to carry routing and switching tables across MPLS networks, others advocate the use of other protocols.

Years ago, when CR-LDP and RSVP were competing to become the signaling protocol, the incumbent protocol (RSVP) secured more support because it had a proven track record. It will be interesting to see whether the marketplace again accepts an incumbent, proven protocol over new VPN protocols.

Using RFC 2547 as the basis of carrying switching and routing tables across an MPLS network, the core network is greatly simplified. In addition, a proven single protocol is used in service provider networks.

In this section, several examples of VPN configuration sets are shown.

These drawings and detailed slides were generously provided by permission of Dr. Rekhter, a distinguished engineer of Juniper.

Figure 7.40 outlines the simple steps needed to configure RFC2547 on a Juniper router’s top level.

Figure 7.40: Top-Level RFC2547 VPN Configuration

In Figure 7.41, we see the detailed configuration commands.

Figure 7.41: Configuration Details for RFC 2547


Once the router has been configured to run the RFC 2547 protocol, the choice arises as to what type of VPN will be running. In Figure 7.42, we see the top-level configuration for a Layer-2 VPN, followed by detailed configurations in Figure 7.43.

Figure 7.42: Layer-2 VPN Top Level

Figure 7.43: Detailed VPN Configuration

Note that the code between Figure 7.43 and Figure 7.44 does not change. We find that the configuration summary is about the same for VPLS.

Figure 7.44: Top-Level VPLS Configuration


In this example of VPN configuration, we find that configurations remain simple once RFC 2547 has been configured, no matter what Layer 2 VPN is running.


Exercise 7.2: Case Study 2

Rag-Tag Books has asked you to assist them in converting from their full-mesh ATM network (shown in Figure 7.45) to an MPLS VPN. Each store has four connections. They are running a variety of ATM traffic, with unspecified bit rate (UBR) traffic comprising most of the load. However, QoS is required for the 10 percent of their traffic that runs the SAP database.

Figure 7.45: Case Study 2

1.  Is MPLS VPN service appropriate, and if so, why?

2.  Which MPLS VPN service is recommended?

3.  How do you justify your recommendation?

Answers

1.  Fully meshed networks could produce a cost justification for MPLS service.

2.  In order to recommend a VPN service, you would need to know whether the interfaces are Layer 2 or Layer 3. Additional information is needed.

3.  Further study is needed to make a VPN recommendation. No recommendations were made because the interfaces and requirements were not clearly defined.


Chapter Summary and Review

In this chapter, we covered VPN formats in an MPLS system, compared several leading L2 VPN proposals, and learned how data and routing are processed in L3 VPNs. In addition, we conducted two case studies to give customers an MPLS VPN alternative to higher-cost networks.

Knowledge Review 

Part One: Compare and contrast the following characteristics of three Layer-2 VPNs (L2TPv3, Martini, and Kompella).

1. Headers
2. Forwarding tables
3. Interface capabilities ________

Part Two: Answer the following questions.

1. How are routing tables passed between CEs?
2. How are non-globally unique addresses processed and separated?

________

Part Three: Answer the following multiple-choice questions by selecting the option that best answers the question.

1. VPN tunnels can be built from which of the following?

A. PE-to-PE ________

B. PE-to-CE ________

C. CE-to-CE ________

D. Both A and C ________

E. Both B and C ________

2. Which of the following is not used in PE-PE VPNs to connect to customer sites?

A. Ethernet ________

B. Frame Relay ________

C. ATM ________

D. Secondary Tunnel ________

E. Virtual Channel ________

3.

Going Further


There is much written on MPLS. The following documents are excellent resources, rated in terms of reading value for MPLS VPNs.

IP/MPLS-Based VPNs: www.foundrynet.com/solutions/appNotes/PDFs/L3vsL2.pdf ** (Foundry)

Layer 2 and Layer 3 VPN over IP/MPLS: www.apricot2002.net/download/Conference/09/02.pdf ** (Apricot /Cisco)

Layer Three Encapsulation in MPLS: www.riverstonenet.com/support/mpls/layer_three_encapsulation_in_mpls.htm *** (Riverstone)

Light at the End of the L2TPv3 Tunnel: www.nwfusion.com/cgi-bin/mailto/x.cgi ** (nwfusion.com)

MPLS Technology Brief (Mexico 9/2002): www.riverstonenet.com/pdf/mexico_mpls_2002.pdf *** (Riverstone)

RFC 2547bis BGP/MPLS VPN Fundamentals: www.juniper.net/techcenter/techpapers/200012.html ** (Juniper)

Virtual Leased Lines – Martini: www.riverstonenet.com/support/mpls/Virtual Leased Lines - Martini.htm *** (Riverstone)

Virtual Private Networks: www.ensc.sfu.ca/~ljilja/cnl/presentations/william/vpn_slides/sld001.htm **** (Juniper)

Ratings:

**** Absolutely essential reading

*** Excellent

** Very good

Special thanks to vendors Cisco, Juniper, and Riverstone. Riverstone generously provided the practical applications section from its white papers.

Chapter 8: Quality of Service Meets MPLS


Introduction

Quality of Service (QoS) is an issue that has long plagued the telecommunications industry. Each advance in networking must be coupled with an advance in a provider’s standards for ensuring consistency and accuracy of data transmission. With a technology as new and as innovative as MPLS, Quality of Service assumes particular importance. In this chapter, we will discuss the elements of QoS, the QoS myths and pitfalls to avoid, and the various options for tailoring a network to attain complete, comprehensive, and consistent Quality of Service.

Introduction to QoS


Let’s take a trip down memory lane. Imagine walking into a restaurant in your hometown and being greeted by the owner, who shakes your hand. He knows both your name and your favorite meal. You know that you will experience true quality service, for the owner consistently checks speed of delivery, taste and presentation of the food, and staff performance to ensure that you have a worthwhile dining experience.

In the telephone industry, Quality of Service (QoS) has always been an issue. Technicians consistently monitor telephone lines to ensure that every word can be heard with a high degree of accuracy. The “pin-drop” company is so sure of its quality that it advertises that you can hear the dropping of a pin across one of its connections.

In data communications, QoS has been an issue since the possibility of running Voice over IP (VoIP) first aired. In order to achieve voice “toll” quality, call-voice datagrams had to arrive in a timely manner, dropping hardly any packets on the path. VoIP is now a reality that has been implemented around the world with a good success rate. Contrary to predictions, however, a sharp drop in QoS awareness has taken place as VoIP has increased in market share.

Two years ago, QoS was a real concern. There was a dedicated web site called the QoS Forum; it is no longer in service. The web keywords “QoS” and “Quality of Service” score very few hits on Google. More networks are being designed with massive excesses in bandwidth as a means of controlling Quality of Service issues.

I suggest not only that problems of Quality of Service have not disappeared, but also that they assume a new dimension with each emerging technology. Over-provisioning and over-engineering do not solve existing QoS problems. The current industry myth is that it is more cost effective to “buy” 200% more bandwidth than a network requires than it is to worry about QoS. However, contrary to popular belief, bandwidth alone cannot solve QoS problems. In this chapter, we will discuss the three primary elements of QoS, explain why bandwidth alone cannot solve QoS issues, examine how a combination of measures can effectively address QoS challenges, and explore genuine QoS solutions.

What Is QoS?


Quality of Service (QoS) is not a new concept. In almost any hotel or restaurant, you will find a Quality of Service survey. Some survey instruments use a rating scale from 1-5, in which 1 equates to very poor service and 5 equates to excellence. This subjective data (soft data) is used to improve Quality of Service and customer satisfaction.

A similar scale is used in the telephone industry. A high-quality voice call receives a rating of 5, whereas a very low-quality call receives a rating of 1. The official name for this scaling method is the Mean Opinion Score (MOS—see www.ciscoworldmagazine.com/monthly/2000/06/breit_0006.shtml and Figure 8.1).

Figure 8.1: MOS Scale

Telephone companies have used this scale for years as a determinant for toll-grade calls. If the MOS rating for a call exceeds 4.0, then that call is said to be of toll grade. The datacom industry has never used such a rating, because data communication was never intended to operate in real time. In data communications, three measurements are used to determine the Quality of Service. These measurements are dropped packets, jitter, and latency.

When sending data, the basic transport vehicle of IP is not reliable. Packets can be lost, dropped, or never delivered for several reasons – especially when the network gets busy. Typically, there is a direct correlation between network utilization and the percentage of dropped packets – as network utilization increases, so does the percentage of dropped packets (see Figure 8.2).

Figure 8.2: Dropped Packets vs. Network Utilization

The human ear is a relatively forgiving instrument that can tolerate some percentage of packet drops without noticing a drop in performance or MOS score. However, when the percentage of dropped packets increases, syllables or whole words are lost from sentences. When the number of dropped packets starts to exceed 1.5%, there is a perceptible performance change. The percentage of dropped packets is a major contributor to a poor MOS reading.

Measuring QoS in Data Networks


In addition to packet drops, two other QoS measurements must be considered: jitter and latency. Combined, these elements are experienced by the end user as transmission delay.

Latency is the amount of time that it takes a signal to move from point A to point B in a System Under Test (SUT) with no load conditions. Figure 8.3 shows an SUT in which the signals applied to input A are sent through the SUT to the output B. The leading edges of A and B are compared, and the delta measurement becomes the latency measurement. Since latency is consistent, test measurements should be repeatable.

Figure 8.3: Latency Measurements

When a system experiences heavy loads, the data must be buffered and queued as a result. When an SUT is subjected to a heavy load, the signal out of port B will vary in its amount of delay. This variation is inconsistent and unpredictable. Using an oscilloscope, the packet at probe B appears to jump back and forth across the oscilloscope screen. This rapid back and forth movement is called jitter, and its presence is an unpredictable element in packet delivery (see Figure 8.4).

Figure 8.4: Jitter Measurements
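The two measurements just described can be sketched from matched send/receive timestamps. This is an illustrative calculation only, not the book’s test procedure; the jitter estimator uses the smoothed form common in RTP tooling (J += (|D| − J)/16):

```python
def latency_and_jitter(send_times, recv_times):
    """Estimate latency and jitter from matched timestamp pairs (seconds)."""
    delays = [r - s for s, r in zip(send_times, recv_times)]
    latency = min(delays)  # no-load transit time ~ the smallest delay observed
    jitter = 0.0
    for prev, cur in zip(delays, delays[1:]):
        # Smoothed estimate of the delay variation between consecutive packets
        jitter += (abs(cur - prev) - jitter) / 16.0
    return latency, jitter

# Four packets sent one second apart, each arriving 100 ms later: no jitter
latency, jitter = latency_and_jitter([0.0, 1.0, 2.0, 3.0], [0.1, 1.1, 2.1, 3.1])
```

Under load, the individual delays spread out; the latency figure stays near the minimum while the jitter figure grows, matching the oscilloscope picture described above.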

Mediation for Dropped Packets, Latency, and Jitter

System latency under light-load conditions can be controlled only by good end-to-end equipment selection. System latency is the cumulative (end-to-end) value of all equipment latency measurements, plus the latency of the links.


Controlling bandwidth utilization can, in turn, control both jitter and dropped packet percentage rate. Since latency is a measurement of delay that is caused by the movement of electrons across a system, latency cannot be controlled in real time. Low latency must be designed into a network from the start.

As network utilization increases, so too do the problems of jitter and dropped packets. In an effort to give better performance, many systems are over-designed with more bandwidth than is needed in order to help ensure that these problems do not occur.

The chart in Figure 8.2 illustrated an increase in dropped packets as the system under test approached 80% utilization (thereby making the MOS score unacceptable), but a system running under various load conditions, as shown in Figure 8.5, shows acceptable MOS and dropped-packet percentages.

Figure 8.5: Low Utilization with Low Errors

Testing Networks

Over the past four years, we have been testing networks for QoS. There are several methods for testing networks or systems. A good laboratory method would be to use a product like SmartBits to generate traffic and to measure the results.

A simple, low-cost experiment that most engineers can perform uses NetMeeting and a protocol analyzer to measure voice quality, jitter, and dropped packets. Figure 8.6a shows a typical setup.


Figure 8.6a: Network Under Test for Quality of Voice Calls

We have tested networks and network components and we have found that network devices operate like most electronic devices. As long as they are operated within the linear range of the device, performance is good.

When the devices are forced to operate in a non-linear range, performance problems are noticed. This can be seen in Figures 8.6b and 8.6c. In Figure 8.6b, we see that as utilization goes up, the number of dropped packets increases; in Figure 8.6c, we see that the MOS score decreases as utilization increases.

Figure 8.6b: Percentage of Dropped Packets vs. Percentage of Load


Figure 8.6c: MOS Score Is Inversely Proportional to Load

What surprises most customers when we perform this test is how differently voice behaves from data. Voice performance has failed to pass the test criteria with as little as 15% continuous broadcast traffic.

These results differ across networks and network elements. This is why it is advisable for all organizations considering running VoIP to test their own networks.

By looking at these performance charts, we would think that a simple solution is to keep utilization below the saturation point of the network so that devices and the network perform well. This strategy is known as the “throw bandwidth at the problem” solution.

Bandwidth Does Not Solve the Problem

Over-designing a network and throwing bandwidth at QoS problems is only a temporary fix – not a solution. There are several reasons why bandwidth alone will not achieve true Quality of Service:

- The “if you build it, they will come” phenomenon. The faster the network is, the more user traffic it will have. More user traffic means more bandwidth demand, and so on.
- Using your data network for VoIP calls.
- A link failure that forces your traffic onto a “loser’s path.”

If you equate bandwidth to a four-lane highway that is equipped to handle a load of only 20 cars per minute, then what happens when the traffic demand suddenly exceeds 20 cars a minute? What happens when construction causes other traffic to be routed on your highway? What happens when there is an accident? Bandwidth alone does nothing to address the three elements that are needed to achieve real QoS – marking, classifying, and policing packets.

For the last several years, attempts to achieve end-to-end QoS have been made, with marking protocols (802.1Q/p, DiffServ, and MPLS), reservation protocols (such as IntServ and RSVP), and policing devices (such as policy switches).

Checkpoint: Answer the following true/false questions.

1. Latency is a measurement that is performed under traffic load conditions.
2. Load conditions do not affect jitter.
3. Packet drops under 10% are considered good.

Answers: 1. False; 2. False; 3. False.


Packet Marking

Packet-marking procedures can be broken down into marking, classifying, and policing procedures. Marking protocols, such as 802.1Q/p (Figure 8.7), DiffServ (Figure 8.8) and MPLS (Figure 8.9), are able to sort and mark packets according to pre-established rules.

Figure 8.7: QoS Markings for 802.1Q/p

Figure 8.8: QoS Marked on the Network Layer DiffServ


Figure 8.9: QoS Marked on the MPLS Shim Header

You can think of these protocols as existing on three different levels: 802.1Q/p is a LAN protocol, DiffServ is a MAN protocol, and MPLS is a WAN protocol. 802.1Q/p marks a packet with priority bits in its Layer 2 header; however, this layer does not migrate across a router.

In Figure 8.7, we see that the 802.1Q protocol inserts a new tag into the traditional Ethernet header. Three bits of the 802.1Q header are used to mark packet priority. It is interesting to note that this marking is not routable, so other forms of marking must be used in order to route priority-marked packets.

To get the markings across the router, a packet must be marked at Layer 3 or above. DiffServ marks a packet for migration across routers. Figure 8.8 shows DiffServ code points, which are used to carry packet QoS markings in routed environments; however, DiffServ code points are not always used. MPLS works at Layer 2.5, so a method must be used to mark packets so that MPLS switches can understand the QoS requirements. MPLS may use its three EXP bits to transport QoS markings across an MPLS network, as seen in Figure 8.9.
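As a small illustration of the Layer-3 marking, the DiffServ code point occupies the top six bits of the old IPv4 ToS byte, and on most systems an application can request it through the `IP_TOS` socket option. This is a sketch, with the Expedited Forwarding value shown; operating-system support for the option varies:

```python
import socket

EF = 46  # Expedited Forwarding, the DSCP commonly used for voice

def tos_from_dscp(dscp):
    """DiffServ redefines the top six bits of the IPv4 ToS byte."""
    return (dscp & 0x3F) << 2

def mark_socket(sock, dscp):
    # Layer-3 marking only: 802.1p and MPLS EXP are applied by switches/LSRs
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, tos_from_dscp(dscp))
```

Here `tos_from_dscp(EF)` yields 0xB8, the ToS byte a protocol analyzer would show on EF-marked voice packets.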

We see in Figure 8.10 that packets are marked at several levels: Layer 2, Layer 2-shim, and Layer 3.

Figure 8.10: Packets Marked in Three Places

Figure 8.11 illustrates how marked packets can give a network designer end-to-end coverage for QoS. Despite markings, these protocols cannot assure instantaneous bandwidth. Think of bumper-to-bumper, rush-hour traffic when an ambulance, with lights flashing and siren blaring, attempts to move through traffic. Despite its markings, the ambulance cannot move.

Figure 8.11: End-to-End QoS Marking


Reservation/Policing Protocols

A bandwidth reservation protocol called RSVP (Resource ReSerVation Protocol) is used to establish a reservation for bandwidth from end to end. The IntServ architecture establishes three classes of reservations that are similar to first class, coach, and standby in the airline industry. In IntServ, these classes of service are called guaranteed, controlled load, and best effort, respectively.

The primary advantage of RSVP is that it checks for bandwidth before a call is established, and then carves out bandwidth from end to end. Think of RSVP as the Secret Service, clearing highways for the president. RSVP goes in front of the traffic and carves out a space for the call.

End-to-end QoS can be achieved through a combination of marking and reservation protocols, as shown in Figure 8.12.

Figure 8.12: End-to-End QoS with RSVP

Checkpoint: Answer the following questions.

1. List three packet-marking protocols.
2. True or false: Packet marking assures QoS/CoS treatment.

Answers: 1. 802.1Q/p, DiffServ, MPLS; 2. False.


Policy-Based Packet Policing

Traffic policing could sometimes be called Layer 4 switching. A device that reads the port numbers and manages traffic based on port numbers can be inserted into the network just before the router. Some people also call this method Common Open Policy System (COPS). In Figure 8.13 we see a network with a policing box in place.

Figure 8.13: Applications Policing

COPS units can manage traffic in several ways; one is to change the sliding window size in TCP sessions. Other methods would include refusing connections for less important traffic.

After traffic is policed, it can be further managed using queuing management. Queue management is also used to control traffic behavior within devices. There are several methods to manage queues. One method that randomly drops packets when queues become full is called Random Early Detection (RED). Other methods allow engineers to establish a priority system to determine how packets can be dropped. These methods include Fair Queuing (FQ), Weighted Fair Queuing (WFQ), and Priority Queuing.

One method used to achieve QoS is not a protocol at all, but a piece of hardware. A policy-based switch is placed on the edge of a network, between the router and firewall, to monitor, mark, classify, and police traffic. One vendor calls its box a PacketShaper, and another calls its product NetEnforcer. A policy-based switch monitors traffic by looking at packet content at Layers 2-4, marking the packets according to pre-established policies.

How Can You Monitor and Police These Problems?

Here is a brief checklist to help you handle these problems:

- Test your network to determine your saturation point under test conditions. This measurement varies greatly. Some networks saturate at 8%, while others saturate at 80%. You will find that the data-failure point differs from the voice-failure criteria.

- Continuously monitor your network utilization. Know your peak busy day, peak busy hour, and peak bandwidth usage (i.e., which of your network’s applications are bandwidth hogs). Although there are several systems available commercially to track this data, I have found the best overall system to be the Finisar Surveyor 4.1.

- Police your network. In order to gain full QoS functions, you have to be able to police bandwidth in order to prevent applications from monopolizing your network. Vendors make multi-layer switches that incorporate Layer-4 policy switching (e.g., the Nortel Business Policy Switch 2000). I personally prefer stand-alone components to perform policing and traffic accounting. The argument is as old as the stereo-system arguments that pitted integrated systems against component-based systems. I like to have component-based policing and accounting, because it gives network analysts control over their data.


There are several policy-based policing “switches” available on the market. I have had good luck with Allot’s NetEnforcer (www.allot.com/html/products_netenforcer.shtm) for policy-management and accounting purposes.

More advanced policy switches (www.allot.com/html/products_netenforcer.shtm) allow a network manager to even segregate one protocol (port) into several elements. For example, take HTTP running on port 80. In that port, several applications can run, from web conferences to downloading MP3s. Some of these applications have a higher priority than others.

A policy-based device allows for policing bandwidth per application, and it also provides accounting services. You can determine the most-used applications and track when and how they have been used.


QoS in MPLS Networks

Now that we have learned about how to achieve QoS from end to end, we need to take a look at how MPLS can assist us in achieving end-to-end QoS.

QOS and COS

First, let’s explore the difference between QoS and CoS (Class of Service).

CoS is a term that is used in ATM networks and is defined by ATM standards. CoS allows for traffic to be placed into different queues.

QoS defines ways to achieve traffic behavior that is objectively measurable. QoS guarantees end-to-end performance.

Many people think of QoS in CoS terms, as it relates to Frame Relay or ATM.

Groupings:

- Unreliable, don’t-care applications
- Unreliable, time-sensitive applications (VoIP)
- Reliable, non-time-sensitive applications
- Reliable, time-sensitive applications

These groupings (Figure 8.15) could be broken down into ATM service types, such as CBR, VBR, and UBR. Some of these are latency-sensitive applications (such as SNA networks) or synchronized databases.

Long before the days of MPLS, ATM and Frame Relay provided Quality of Service, and carriers were committed to delivering levels of service as defined and policed by the FCC. There is much concern as to whether MPLS can accommodate these QoS requirements, and as regards the ability of MPLS to satisfy a given Service Level Agreement (SLA). The fines are stiff for SLA violations, so public carriers are cautious about adopting the new technology (and QoS measures for MPLS). As you can see from Figure 8.14, there are methods to map CoS and QoS parameters.

Figure 8.14: QoS – CoS


Figure 8.15: CoS – QoS Mapping

Myriad levels and mechanisms exist for achieving QoS. One can think of QoS groupings as being analogous to flight bookings with an airline – first class, coach, and standby. In IP, we call these grades of service Guaranteed, Controlled Load, and Best Effort.

In addition, QoS can be defined with far greater granularity than this would suggest, but there are issues of manageability and marketing. Just how many levels of service do the clients demand, and what are the operational costs of providing these services?

Having cleared that up, let’s look at how to map traffic to MPLS QoS, examining problems and points at which a network needs to be managed.

Mapping L-LSP vs. E-LSP

So far we have shown that markings from the LAN or WAN can be mapped directly to the MPLS header using the EXP bits. This method has become known as the E-LSP method (EXP-Inferred-PSC LSP). With only three bits in the EXP field, eight (8) classes can be mapped.

The other method of mapping CoS/QoS is to map a label to an FEC with QoS parameters. For example, labels 100-200 would be First Class on LSP-A. This method is called the L-LSP method (Label-Only-Inferred-PSC LSP). The L-LSP method is more flexible than the E-LSP method, but to date it has not been implemented. See Figure 8.16.
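The difference between the two methods comes down to where the class lookup happens. In this sketch the EXP-to-class values are invented for illustration, and the label range reuses the "labels 100-200 are First Class on LSP-A" example from the text:

```python
# E-LSP: class inferred from the 3 EXP bits, so at most 8 classes per LSP
EXP_TO_CLASS = {0: "best-effort", 5: "voice", 7: "network-control"}

# L-LSP: class inferred from the label itself; one LSP per FEC/class pair,
# so the number of classes is not capped at 8
LABEL_TO_CLASS = {(100, 200): "first class on LSP-A"}

def classify_e_lsp(exp_bits):
    return EXP_TO_CLASS.get(exp_bits & 0b111, "best-effort")

def classify_l_lsp(label):
    for (low, high), service_class in LABEL_TO_CLASS.items():
        if low <= label <= high:
            return service_class
    return "best-effort"
```

With E-LSP, every LSR reads the same three bits; with L-LSP, the label-range table itself carries the class, which is what makes the method more flexible but heavier to provision.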


Figure 8.16: E-LSP /L-LSP

IP Traffic Trends

Service providers typically subscribe CIR rates at a 1-1 ratio and VBR at a 3-1 ratio. They generate a great deal of revenue by subscribing their IP traffic at a 50-1 ratio.

This over-subscription at the edge of the network generates profits, but causes unpredictable behavior in the network. We find that, even if the core of the network has ample bandwidth, QoS problems surface during peak busy hours because the edge routers are overworked; not having sufficient instant bandwidth, they experience queuing delays or even lost packets.
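The arithmetic behind that revenue is simple. Assuming a hypothetical 155-Mbps edge link sold at the ratios quoted above:

```python
def sellable_bandwidth(physical_mbps, oversubscription_ratio):
    """Bandwidth a provider can sell against one physical link."""
    return physical_mbps * oversubscription_ratio

link = 155  # hypothetical 155-Mbps edge link
# CIR at 1:1 sells 155 Mbps, VBR at 3:1 sells 465 Mbps,
# and IP at 50:1 sells 7750 Mbps against the same physical link --
# none of which helps when many subscribers burst at once.
```

The sold figure is a statistical bet, not capacity; the queuing delays and lost packets described above appear exactly when the bet is lost during the peak busy hour.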

In Figure 8.17, we see so many packets attempting to enter the router at one time that only a few can squeeze through.

Figure 8.17: Too Many Packets Trying to Enter Router

This traffic must be managed, and queue controls must be in place in order to avoid irreparable loss of service. Several queuing methods could be used in this situation. The simplest method is Random Early Detection (RED). The RED method looks at a queue and determines when traffic should be discarded.


In Figure 8.18, we see the basic rules of RED. There is only so much memory in a queue, and when it becomes saturated, non-optimal (bad) things begin to happen to the packets. In RED, upper and lower limits (thresholds) are set.

Figure 8.18: Basic RED Rules

In this case, 40% is set for the lower limit and 90% is set for the upper limit.

The rules are simple: all traffic that is below the lower limit (threshold) will be preserved, and all traffic that extends beyond the upper limit will be discarded.

Traffic between the upper and lower limits has a probability of being discarded, and the probability of discard increases as the number of packets increases.
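Using the 40%/90% thresholds above, the RED discard decision can be sketched as follows (a minimal illustration; real implementations track a weighted average of queue depth rather than its instantaneous value):

```python
import random

MIN_TH, MAX_TH = 0.40, 0.90  # lower and upper queue-fill thresholds

def red_drop_probability(queue_fill):
    """Classic RED: keep everything below the lower limit, drop everything
    above the upper limit, and ramp linearly in between."""
    if queue_fill < MIN_TH:
        return 0.0
    if queue_fill >= MAX_TH:
        return 1.0
    return (queue_fill - MIN_TH) / (MAX_TH - MIN_TH)

def admit_packet(queue_fill):
    # Each arriving packet survives the coin flip or is discarded
    return random.random() >= red_drop_probability(queue_fill)
```

At 65% fill the drop probability is 0.5; at 30% nothing is dropped; at 95% everything is.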

In Figure 8.19, you can see the response curve for RED performance.

Figure 8.19: Simple RED Queuing Response Curve

Figure 8.20 shows the queue before RED is turned on. With RED turned on, we get traffic shaping, and the queue looks like the image in Figure 8.21.


Figure 8.20: Traffic to Be Queued

Figure 8.21: Dropped Packet Percentages after RED Shaping

The main problem with RED is that it discards packets regardless of importance or any QoS standards. Figure 8.22 illustrates the problem with RED.


Figure 8.24: Priority Bits in IP Header

In Figure 8.24, we see that packets whose priority bits are set high get discarded much later and have a lower probability of discard. In the event of reaching or exceeding the maximum threshold, all packets get discarded, regardless of markings.

Several vendors have chosen to use the ToS/Priority field for WRED (Weighted Random Early Detection – RED with a weighted algorithm). In this case, if the priority field is marked 000, the packets are highly eligible for discard; when they are marked 111, they are least likely to be discarded.
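WRED can be sketched as RED with per-precedence thresholds. The profile values here are hypothetical, chosen so that 000-marked traffic becomes drop-eligible well before 111-marked traffic, as the text describes:

```python
# Hypothetical (min, max) queue-fill thresholds per IP precedence marking
WRED_PROFILES = {0b000: (0.20, 0.60),   # discard-eligible early
                 0b111: (0.60, 0.90)}   # protected until the queue is nearly full

def wred_drop_probability(queue_fill, precedence):
    min_th, max_th = WRED_PROFILES.get(precedence, (0.40, 0.90))
    if queue_fill < min_th:
        return 0.0
    if queue_fill >= max_th:
        return 1.0  # past the maximum, everything drops regardless of marking
    return (queue_fill - min_th) / (max_th - min_th)
```

At 50% queue fill, 000-marked packets already face a 75% drop probability while 111-marked packets face none, which is the weighting the priority field buys.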

In using RED or WRED, edge devices are able to manage bursty traffic even if it exceeds the bandwidth.

Other than queue management, how can QoS be handled? The bottom line is that QoS in an MPLS network can be treated like QoS in an IP network. We can use over-provisioning, DiffServ, RSVP/IntServ, and queue management. The issue that confronts many implementers is how IP packets will relate to MPLS QoS. The problem lies in the fact that customers may or may not be able to manage QoS in their networks, i.e., they may or may not have marked packets.

Let’s look at the simplest implementation of QoS in an MPLS network – that is, packets that are not marked for QoS when they are delivered to the demark.

QoS in MPLS Without Markings

Customers may not be able to mark their packets for special treatment, but they may need to separate traffic that is bound for one destination by application type. For example, a production application may require CIR (Committed Information Rate) treatment, VoIP may require VBR treatment, and HTTPS requires a higher priority than e-mail.

If the customer has not marked any packets, then the ingress LER can be set to map traffic according to port number. The traffic can then be mapped to an LSR. For example, LSRs can sustain traffic engineering for CIR, VBR (Variable Bit Rate), and UBR (Unspecified Bit Rate), and can be provisioned for 1-1, 3-1, and 20-1, respectively. This simple mapping of traffic to LSRs could be called L-LSR (Label-Based LSR) QoS.
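An ingress LER classifying unmarked traffic purely on destination port might follow a policy like this sketch (the port numbers and LSP names are hypothetical):

```python
# Hypothetical policy: well-known application ports mapped to engineered LSPs
PORT_POLICY = {1433: "LSP-CIR",   # production database -> CIR-engineered path
               5060: "LSP-VBR",   # voice signaling     -> VBR-engineered path
               443:  "LSP-VBR",   # HTTPS above e-mail
               25:   "LSP-UBR"}   # e-mail -> best effort

def select_lsp(dst_port):
    """Traffic the policy does not recognize rides best effort."""
    return PORT_POLICY.get(dst_port, "LSP-UBR")
```

The lookup itself is trivial; the engineering work is in provisioning the LSPs behind the names at the 1-1, 3-1, and 20-1 ratios described above.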

Mapping MPLS to an FEC by port number or application is a very simplistic method of achieving some level of QoS (Figure 8.25) , but it does not solve the problems of unpredictability. Recalling that QoS entails marking, classifying, and policing traffic, we must ensure that we have instantaneous bandwidth. Mapping unmarked packets to an LSP does give some level of protection, but it does not fully address all the issues.


Figure 8.25: QoS without Marked Packets

MPLS with Pre-Marked Packets

We learned when discussing theory that an Enterprise network could mark packets for QoS. The marking protocols are: 802.1Q/p markings, precedence/ToS bit markings, and DiffServ markings.

Figure 8.26 shows the traditional marking of precedence bits. These bits can be marked by the clients and used not only for WRED, but also for packet treatment within the network. The core running MPLS at Layer 2.5 does not see these precedence markings. In order to ensure that packets are afforded proper QoS treatment in the core, these bits must be mapped to the MPLS header.

Figure 8.26: Precedence Bits Marked

One of the functions of an LER is to take these precedence bits and map them directly to the experimental (Exp) bits in the MPLS header. The Exp bits in the MPLS header can be read and interpreted by the core routers. A bit pattern of 000 could mean "treat as best effort", whereas a bit pattern of 111 could mean "treat as highest priority", "do not discard", and so on.
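The bit-level operation the LER performs can be sketched as follows (a minimal model, not vendor code): the three precedence bits are the high-order bits of the IPv4 ToS byte, and they are copied into the 3-bit Exp field of the 32-bit MPLS shim:

```python
def precedence_from_tos(tos: int) -> int:
    """Extract the 3 precedence bits (high-order bits of the ToS byte)."""
    return (tos >> 5) & 0b111

def set_exp(shim: int, exp: int) -> int:
    """Write a 3-bit Exp value into a 32-bit MPLS shim header.
    Layout: label (bits 31-12), Exp (bits 11-9), S (bit 8), TTL (bits 7-0)."""
    return (shim & ~(0b111 << 9)) | ((exp & 0b111) << 9)

# A ToS byte of 0xE0 carries precedence 7; copy it into the Exp field.
shim = set_exp(0, precedence_from_tos(0xE0))
```

This is exactly the "direct copy" behavior described above; the table-based vendor mappings discussed later simply replace the straight copy with a lookup.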

Vendors have each implemented bit mapping differently. Some vendors map precedence bits to Exp bits by default, some vendors don’t map these bits at all, and still others allow for a ToS mask, wherein any combination of bits can be mapped.

Figure 8.27: ToS and DiffServ bits relationship

Use of DiffServ Markings

Other customers may deliver packets that are marked with DiffServ instead of with precedence bits. In Figures 8.28 and 8.29, we see the relationship between the IPv4 ToS field and the DiffServ field.

Figure 8.28: Precedence Bit Mapping

Figure 8.29: ToS Bits Copied to Exp Bits

In Figures 8.27 and 8.28, we see that DiffServ is really two classifications of traffic: one classification is that of class; the second classification is that of drop precedence. If you stand in airport lines as much as I do, you can easily see this in action.

At the airport, there are lines to the counters – regular customers, frequent flyers, and first-class passengers. The queues vary in size, but a typical scenario is one in which the regular customer line is very long and the first-class line is short. The same is true with routers; the volume of high-priority traffic will be much lower than that of routine traffic.

So, we have established the classes of traffic, but what about drop precedence? You are standing in the routine line, but you get a tap on the shoulder, and you are asked to step out of line and use the rapid-ticketing line instead. Or, you are in the first-class line, but there is only one agent available to issue tickets – you see that the routine line is moving faster, so you drop out of the first-class line and go to the routine line instead.

What do you do when the line in which you’re standing line is too long, and you have a flight to catch? Packets experience the same issue. Routers can choose to keep a packet in line, drop it, or mark it as discard-eligible. This is the second part of the DiffServ field. The combination of class and drop precedence is expressed in a special notation, such as AF (assured forwarding): e.g., AF XY. X=class and Y= drop precedence, so the notation of AF 11 would mean, “class one, drop precedence 1”. In this DiffServ game of marking and processing, numbers are valued as they are in a golf game – which is to say that the better score is the lower score. For example, AF11 is better than AF 21.

What do you do when the line in which you’re standing is too long, and you have a flight to catch? Packets experience the same issue. Routers can choose to keep a packet in line, drop it, or mark it as discard-eligible. This is the second part of the DiffServ field. The combination of class and drop precedence is expressed in a special notation, such as AF (assured forwarding): e.g., AF XY, where X = class and Y = drop precedence, so the notation AF11 would mean "class one, drop precedence one". In this DiffServ game of marking and processing, numbers are valued as they are in a golf game – which is to say that the lower score is the better score. For example, AF11 is better than AF21.
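The AF bit pattern can be computed mechanically: the DSCP is class (3 bits), drop precedence (2 bits), and a trailing 0, followed by the two CU bits. A small sketch (function names are our own):

```python
def af_dscp(cls: int, dp: int) -> int:
    """Return the 6-bit DSCP for AF<cls><dp>: class(3) | drop-prec(2) | 0."""
    return (cls << 3) | (dp << 1)

def tos_byte(cls: int, dp: int) -> int:
    """Full 8-bit field: DSCP followed by the two CU bits (zero)."""
    return af_dscp(cls, dp) << 2

# AF11 -> DSCP 001010; full byte 001 01 0 00, matching the pattern above.
assert format(tos_byte(1, 1), "08b") == "00101000"
```

Note how the "golf scoring" falls out of the arithmetic: AF11 yields DSCP 10 while AF21 yields DSCP 18, so lower AF notation means a numerically lower code point.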

Figure 8.30: Detailed DiffServ Code Point Format

Figure 8.31: Details of Bit Pattern for AF 11

We can map only 3 of the 6 available DSCP bits to the Exp field; many vendors have chosen to map the high-order three bits (the class bits) to the Exp field.

DiffServ assures that the tunnel has the required traffic-policing characteristics; it marks, classifies, and polices. It cannot, however, guarantee that bandwidth will be available when you need it. Whether it runs over IP or MPLS, DiffServ does not check for bandwidth before a call is placed.

In order to achieve instantaneous bandwidth from end to end, we need to add the RSVP protocol. RSVP checks for bandwidth before a call is placed, and it continues to request bandwidth for ongoing messages or flows.

RSVP is a per-flow QoS process, whereas DiffServ is a per-tunnel process. RSVP gives a great level of QoS control, but overhead increases with its implementation. You may choose to mix and match RSVP and DiffServ, or to use only one method. All of these decisions will hinge on the needs of your customer base.

In Figures 8.32–8.34, we see that achieving end-to-end QoS requires deploying several QoS methodologies: over-provisioning, queue management, DiffServ, and RSVP/IntServ.

Figure 8.32: What Is Needed for End-to-End QoS?

Figure 8.33: MPLS End-to-End QoS Process

Figure 8.34: QoS per MPLS Elements

Practical Applications: Mapping ToS Bits to Exp or FECs in Riverstone Routers

Setting The Exp Bits[1]

The Exp bits are set by creating an ingress policy on the ingress LSR. This ingress policy sets the Exp bits in relation to values associated with the frames and packets traversing the LSP. For example, if a VLAN trunk port is tunneled through the LSP, the EXP bits can be set by directly copying the values contained within the three 802.1p priority bits of the 802.1Q headers. Once packets/frames have reached the egress LSR, an egress policy can be created on the egress LSR that maps the Exp bits back into the bit values of the packets or frames.

Figure 8.35 shows an example of bits being copied on ingress into the Exp bits, and then copied back to the packet on egress.

Figure 8.35: Copy Bits Directly to and from Packets Traversing the LSP

Alternately, tables can be configured (using the mpls create command) that map the ingress packet/frame bits to the Exp bits, while other tables can be created that map the Exp bits back to packet/frame bits on the egress. The table method has the advantage of allowing flexibility in how packet/frame bits are mapped to the Exp bits.

For example, in Figure 8.36, an ingress table has been created (using the mpls create tosprec-to-exp-tbl command) that maps incoming ToS precedence bits to the Exp bits. In this example, the ToS precedence bits are read from incoming packets then compared against the table to determine how to set the Exp bits. Within the table, any ToS precedence value can be mapped to any Exp bit value - for instance, in Figure 8.36, ToS precedence bit value 6 (110 binary) is mapped to Exp bit value 2 (010 binary).

Figure 8.36: Setting the Exp Bits Using a Mapping Table

The following is the command that creates the ingress table mapping in Figure 8.36:

rs(config)# mpls create tosprec-to-exp-tbl <name> tosprec0 0 tosprec1 1

tosprec2 6 tosprec3 3 tosprec4 7 tosprec5 2 tosprec6 5 tosprec7 6
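The effect of this table can be modeled outside the router. The dictionary below simply mirrors the command above; it is a simulation for study purposes, not Riverstone code:

```python
# Mirror of the tosprec-to-exp table created by the command above.
TOSPREC_TO_EXP = {0: 0, 1: 1, 2: 6, 3: 3, 4: 7, 5: 2, 6: 5, 7: 6}

def exp_for_tosprec(tosprec: int) -> int:
    """Look up the Exp value the ingress LSR would write for a given
    ToS precedence value (0-7)."""
    return TOSPREC_TO_EXP[tosprec]
```

Because any precedence value can map to any Exp value, the table is free to reorder, merge, or invert priorities as the operator sees fit.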

A second table can be created on the egress LSR that maps the Exp bits back to bit fields within packets or frames.

Note  The DSCP bits cannot be directly copied into the Exp bits of the MPLS label. The mapping of DSCP bits to Exp bits must always be done using a table.

The following lists the possible tables that can be used to map packet/frame bits to the Exp bits.

On ingress:

802.1P bits to Exp bits (mpls create 1p-to-exp-tbl <name>)

DSCP bits to Exp bits (mpls create dscp-to-exp-tbl <name>)

Internal priority bits to Exp bits (mpls create intprio-to-exp-tbl <name>)

ToS precedence bits to Exp bits (mpls create tosprec-to-exp-tbl <name>)

On egress:

Exp bits to 802.1P bits (mpls create exp-to-1p-tbl <name>)

Exp bits to DSCP bits (mpls create exp-to-dscp-tbl <name>)

Exp bits to ToS precedence bits (mpls create exp-to-tosprec-tbl <name>)

Note  There is no facility for mapping Exp bits to internal priority bits on egress. However, Exp bits that were set on ingress using the internal priority bits can be mapped to other packet/frame bits on egress (DSCP, 802.1p, and so on).

Creating and using tables is covered in more detail within the following sections.

Creating Ingress and Egress Policies

This section illustrates the basic steps for creating both an ingress and egress policy for mapping packet and frame bits to and from the Exp bits. Configuration examples for both Layer-2 and Layer-3 traffic are presented at the end of this section.

Layer-2 Ingress and Egress Policies

The following steps outline the process for creating a Layer-2 ingress policy on the ingress LSR.

Note  If using table matching to map frame bits to the Exp bits, use the mpls create command to specify the type of table, the table’s name, and its contents.

Use the ldp set l2-fec command to apply the ingress policy to Layer-2 traffic.

The following steps outline the process for creating a Layer-2 egress policy on the egress LSR.

Note  If using table matching to map Exp bits to frame bits, use the mpls create command to specify the type of table, the table’s name, and its contents.

Use the mpls set egress-l2-diffserv-policy command to apply the egress policy to Layer-2 traffic. Note that the egress policy is applied globally to all Layer-2 traffic traversing the LSP.

Layer-3 Ingress and Egress Policies

The following steps outline the process for creating a Layer-3 ingress policy on the ingress LSR.

Note  If using table matching to map packet bits to the Exp bits, use the mpls create command to specify the type of table, the table’s name, and its contents.

Use the mpls set ingress-diffserv-policy command to apply the ingress policy to Layer-3 traffic.

The following steps outline the process for creating a Layer-3 egress policy on the egress LSR.

Note  If using table matching to map Exp bits to packet bits, use the mpls create command to specify the type of table, the table’s name, and its contents.

Use the mpls set egress-l3-diffserv-policy command to apply the egress policy to Layer-3 traffic. Note that the egress policy is applied globally to all Layer-3 traffic traversing the LSP.

[1] The preceding text is provided by permission of Riverstone.

Chapter Summary and Review

In this chapter, we have discussed the importance of measuring, marking, and policing packets in a data network, and we have identified the three measurements historically used for measuring QoS – latency, jitter, and dropped packets. In addition, you have seen that QoS can also be measured with perceived quality measurements, such as MOS.

Packets can be marked at several levels: in the Local Area Network using 802.1Q/p, in an Intranet using DiffServ, and over the WAN using MPLS. Marking packets alone does not provide for QoS. In addition to marking, you must check for instantaneous bandwidth and also reserve bandwidth. Even with these protocols running, a policing function is needed.

To police networks, a policy-based device is needed. Some systems have built-in policy systems, such as a multi-layer system of switches or a stand-alone, policy-based switch. In all cases, in order to achieve true Quality of Service, packets must be managed (i.e., packets must be marked, classified, and policed throughout the network).

The pendulum swings both ways. Those in the service-and-hospitality industry have found that, as money gets tighter and people travel less, Quality of Service is once again an issue.

In the telecommunications world, that pendulum can swing from inexpensive bandwidth to expensive bandwidth. As the number of carriers decreases under financial hardship, you will find that bandwidth once again becomes expensive, and that the management of bandwidth and true network QoS will need to be achieved.

Between the combination of protocols and policy management, true QoS is a realistic goal that can be achieved in data networks.

Answer the following questions.

1.  List two methods for mapping ToS to MPLS services.

2.  Explain the use of RED and its effect on traffic shaping.

3.  Explain the challenges of mapping DiffServ to EXP bits.

Answers

1.  There are two methods: the L-LSP and the E-LSP methods.

2.  Random Early Detection (RED) is a threshold monitoring system. Traffic between the lower and upper thresholds is eligible to be randomly discarded. All traffic over the upper threshold is discarded, and traffic below the lower threshold is not discarded. RED takes traffic as shown in Figure 8.20 and shapes it to look like Figure 8.21. RED is an effective method to keep routers out of the non-linear mode of operation.
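The RED behavior described in this answer can be sketched as a drop-probability function. The thresholds and maximum probability below are illustrative parameters, not values from any particular router:

```python
def red_drop_probability(avg_qlen: float, min_th: float, max_th: float,
                         max_p: float = 0.1) -> float:
    """Classic RED: no drops below min_th, a linear ramp up to max_p at
    max_th, and forced drop (probability 1.0) above max_th."""
    if avg_qlen < min_th:
        return 0.0
    if avg_qlen >= max_th:
        return 1.0
    return max_p * (avg_qlen - min_th) / (max_th - min_th)
```

The linear ramp between the thresholds is what smooths bursty traffic into the shaped profile the answer describes.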

3.  DiffServ has six usable bits: three bits are used for class, two are used for drop precedence, and one is reserved. MPLS uses 3 bits for the EXP field. It is difficult to map 6 bits into a three-bit field.

Going Further

www.cis.ohio-state.edu/~jain/refs/ipqs_ref.htm

qos.ittc.ukans.edu/

www.ces.net/project/qosip/

www.allot.com/html/products_netenforcer_sp.shtm

Appendix A: Answer Key for Chapter Exercises

Chapter 1 The Fundamentals of MPLS Networks and Data Flow

Exercise 1.1: LER and Granularity

In an MPLS network, the LERs serve as quality of service (QoS) decision points. One method to establish these policies is to use the port numbers in Layer 4 of a packet. The tradeoffs in establishing these policies come from how much granularity is needed versus how manageable the configurations and tables are.

In the first example, we have created an MPLS LER table with three criteria: rules on IP address only; rules on IP address and protocol number; and rules on IP address, protocol number, and port number.

Additionally, we have established routing paths A–Z, and we call them forward equivalence classes, or FECs. The FEC A paths are the highest-quality paths, and the FEC Z paths are the lowest-quality paths.

The policies use the port numbers to place traffic on particular paths. Port numbers are:

20/21 FTP, 25 E-Mail, 80 HTTP, 443 HTTPS, 520 Routing

1.  Examine the tables and determine which table has the most entries.

2.  In Table 1.1, using the IP protocol and port number sections, how would HTTPS be handled in relation to HTTP?

3.  Describe a circumstance in which HTTPS should be handled differently from HTTP.

4.  What FEC classification is given to routing?

5.  How could giving the previous classification to routing become a problem?

Answers

1.  The table with the most entries is the table that sorts by IP address, protocol number, and port number.

2.  HTTPS uses FEC A, whereas HTTP uses FEC B. Since HTTPS could produce revenue and is secure, it has a higher priority.

3.  HTTPS is given a higher priority because it offers the opportunity for revenue.

4.  Routing is classified as FEC Z (which is the lowest FEC rating).

5.  Routing and label distribution should be given the highest priority in the network; otherwise, packets could be misrouted.

Exercise 1.2: MPLS Data Flow

We find in an MPLS network that data moves from switch to switch using link-specific labels. Switches perform functions based on their switching or cross-connect tables.

These tables contain information such as port in, label in, port out, label out, next router, and instructions. The instructions are simple: “push” (insert a label), “swap” (change labels), and “pop” (remove label).

In this exercise, we trace a sample packet through an MPLS network in which five routers, R1–R5, connect networks X and Z. Tables 1.4–1.8 are used to discover the LSPs. Table 1.4 is used for Router 1, Table 1.5 is used for Router 2, Table 1.6 is used for Router 3, Table 1.7 is used for Router 4, and Table 1.8 is used for Router 5. Each table is different and represents the MPLS router’s internal switching table.

In Figure 1.12, we have an example of how data would move in this situation.

In Table 1.4, the packet (being HTTP port 80) enters as native IP/80 where a label (20) is pushed and the packet is sent out of port D. Notice that as the packet traverses the network, it exits Router 1 at port D and enters Router 3 at port B.

In Table 1.6, the label (20) is swapped for label 600, and the packet exits the router at port D, where it is hardwired to port B of R5.

In Table 1.8 (R5), the packet label 600 is popped to deliver a native packet to network Z.

Note that Figure 1.11 reflects the correct labels.

In this exercise, use the switching tables for Routers 1 through 5 and Figures 1.12 and 1.13 to map data flow and labeling across the network. Of course, the tables contain data that is not used for your packet, but they also contain switching data needed for other packets. Use only the data that you need to move your packets. Follow these instructions:

1. Always start with Table 1.4 and follow applications that enter through Interface A.

Table 1.4: Switching Table for Router 1

P_In Label In Label Out Port Out Instruction Next Router

IP/80 None 20 D Push R3

IP/25 None 95 B Push R4

IP/20 None 500 C Push R2

2. The decision made by Table 1.4 will lead you to another switching table, depending on the application, port out, and the router out.

3. In Figure 1.12, note that the packet label numbers appear on the drawings. Use Figures 1.13 and 1.14 to indicate the correct label number.

Figure 1.12: Network Trace for HTTP Port Number 80

4. Use Figure 1.13 and Tables 1.4–1.8 to trace e-mail (port 25) through the network, and note the trace on the drawing.

Figure 1.13: Network Trace for Port 25 E-Mail

Table 1.5: Switching Table for Router 2

P_In Label In Label Out Port Out Instruction Next Router

B 499 700 D Swap R5

B 500 65 C Swap R3

B 501 700 A Swap R9

Table 1.6: Switching Table for Router 3

P_In Label In Label Out Port Out Instruction Next Router

B 20 600 D Swap R5

A 65 650 D Swap R5

B 501 700 A Swap R9

5. Use Figure 1.14 and Tables 1.4–1.8 to trace FTP (port 20) through the network, and note the trace on the drawing.

Figure 1.14: Network Trace for Port 20 FTP

Table 1.7: Switching Table for Router 4

P_In Label In Label Out Port Out Instruction Next Router

B 95 710 D Swap R5

A 500 650 D Swap R5

B 515 700 D Swap R5

Table 1.8: Switching Table for Router 5

P_In Label In Label Out Port Out Instruction Next Router

A 500 None D Pop CR

B 600 None D Pop CR

B 650 None D Pop CR

C 710 None D Pop CR

Exercise 1.3: Single Stacked Label Decode

There are several ways to complete this lab. The exercise itself is written in standalone form so that you do not need any products to complete the exercises. Just skip the hands-on block that follows.

Hands-On: Compare and Contrast IP/Ethernet and IP/MPLS/Ethernet

In protocol analyzers, we count bytes from left to right, and we start counting from 0. So the first byte is at offset 0, and the second byte is at offset 1.

1.  Look at Frame 1 in Figure 1.15. What is the value at offset 12 and 13?

Figure 1.15: Frame 1

2.  Look at Frame 1 Figure 1.15. What is the value at offset 14 and 15?

3.  Look at Frame 9 in Figure 1.16. What is the value at offset 12 and 13? Why is this value different? What does it mean?

Figure 1.16: Frame 9

4.  Look at Frame 9 in Figure 1.16. What is the value at offset 14, 15, 16, 17?

Translate the hex number into binary using the following chart.

128 64 32 16 8 4 2 1   128 64 32 16 8 4 2 1
128 64 32 16 8 4 2 1   128 64 32 16 8 4 2 1

5.  Determine the values for the following:
a. The label
b. The experimental bits
c. The stack bit
d. The TTL value

6.  Look at offsets 18 and 19. What are their values?

7.  Compare the values in questions 6 and 2 above. What do you find interesting about them?

Answers

1.  The value at offset 12 and 13 is 0800 (the next header is IP).

2.  45 C0 (IP version 4 with a 20-byte header and class of service)

3.  8847 (an MPLS shim header is next).

In Figure 1.15 frame 1, the note indicates that an IP header is next. In Figure 1.16, the note indicates that a shim header MPLS is next.

It means that the frame has been modified to accommodate MPLS.

4.  00 01 d1 ff

Translate the hex number into binary using the chart below.

Label E S TTL

0001D 0 1 FF

5. a. 29   b. 0   c. 1   d. 255

6.  45 00

7.  MPLS was inserted and moved the start of the IP header by 32 bits.
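The hand decode above can be reproduced in code. This is a sketch of the 32-bit shim layout (label 20 bits, Exp 3 bits, S 1 bit, TTL 8 bits); the sample bytes are reconstructed from the values given in answers 4 and 5:

```python
def decode_shim(b: bytes) -> tuple:
    """Decode one 4-byte MPLS shim header into (label, exp, s, ttl)."""
    label = (b[0] << 12) | (b[1] << 4) | (b[2] >> 4)  # 20-bit label
    exp = (b[2] >> 1) & 0b111                         # 3 experimental bits
    s = b[2] & 0b1                                    # bottom-of-stack bit
    ttl = b[3]                                        # time to live
    return label, exp, s, ttl

# Bytes 00 01 d1 ff -> label 29, Exp 0, S 1, TTL 255 (matches the answers)
print(decode_shim(bytes([0x00, 0x01, 0xd1, 0xff])))
```

Because the label straddles byte boundaries, the shifts above are exactly the manual binary translation the chart walks you through.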

Exercise 1.4: Stacked Decode

In this exercise, you will decode and study an MPLS packet used in a tunneling situation where labels are stacked.

There are several ways to complete this exercise. The exercise itself is written in standalone form so that you do not need any products to complete the exercises.

Hands-On: Open the File and Review File Content

If you are the “hands-on” type and you want to see MPLS packets on a protocol analyzer, you need the two items of software (Ethereal and the MPLS-basic-cap sample) mentioned in the previous hands-on exercise.

1. From your desktop, go to Start | Programs and click Ethereal.
2. Once Ethereal opens, open the file called MPLS1.cap.
3. Wait for the file to open. It will take a few minutes.

The file should look like Figure 1.17. Now let’s review the file content in the following steps.

Figure 1.17: Open MPLS_basic File

1.  Look at Frame 9, as shown in Figure 1.17. Note the values found at offsets 14 to 21. Record them in hex here:

_____ _____ _____ _____ _____ _____ _____ _____

14 15 16 17 18 19 20 21

2.  Using the following chart, translate the hex number into binary for Label 1 found at offsets 14-17.

128 64 32 16 8 4 2 1   128 64 32 16 8 4 2 1
128 64 32 16 8 4 2 1   128 64 32 16 8 4 2 1

3.  What are the values for each of the following for Label 1?
a. The label
b. The experimental bits
c. The stack bit
d. The TTL value

4.  Using the following chart, translate the hex number into binary for Label 2 found at offsets 18-21.

5.  What are the values for each of the following for Label 2?
a. The label
b. The experimental bits
c. The stack bit
d. The TTL value

6.  Is the stack bit set for Label 1 (offset 14-17)?

7.  Is the stack bit set for Label 2 (offset 18-21)?

8.  Explain why the stack bit may be set differently.

Answers

1.  00   01   20   ff   00   01   01   ff

14 15 16 17 18 19 20 21

2.  0000 0000 0000 0001 0010 0000 1111 1111 (label 0x00012 = 18, Exp 000, S 0, TTL 1111 1111)

3. a. 18   b. 0   c. 0   d. 255

4.  Label E S TTL
00010 0 1 FF

5. a. 16   b. 0   c. 1   d. 255

6.  OFF

7.  ON

8.  The stack bit is turned on to indicate that this is the last header in the stack (or the header closest to the IP header).
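Walking a label stack is the same shim decode in a loop, stopping when the S bit is set. A sketch, using the bytes recorded in answer 1 above:

```python
def parse_label_stack(data: bytes):
    """Parse consecutive 4-byte shim headers until the bottom-of-stack
    (S) bit is set; return a list of (label, exp, s, ttl) tuples."""
    stack, off = [], 0
    while True:
        b = data[off:off + 4]
        label = (b[0] << 12) | (b[1] << 4) | (b[2] >> 4)
        entry = (label, (b[2] >> 1) & 0b111, b[2] & 0b1, b[3])
        stack.append(entry)
        off += 4
        if entry[2] == 1:  # S bit: last header before the IP packet
            break
    return stack

# Offsets 14-21: 00 01 20 ff 00 01 01 ff
# -> [(18, 0, 0, 255), (16, 0, 1, 255)]
print(parse_label_stack(bytes.fromhex("000120ff000101ff")))
```

Label 1 (18) has S = 0 because another header follows; Label 2 (16) has S = 1, marking the bottom of the stack, exactly as answers 6–8 describe.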

Chapter 2 MPLS Label Distribution

Exercise 2.1: Control – Ordered or Not

In this case study, you will choose the better of two solutions for your network and justify your choice.

1.  Consider the following scenario. Your team is a group of highly paid consultants for a small, developing country. MPLS technology has been selected because of its traffic-engineering capabilities. This system will require traffic engineering to help manage increases in traffic as the network grows.

On the Web, research the benefits and drawbacks of ordered control vs. non-ordered control; then recommend a solution for the LDP protocol. What are your recommendations?

2.  Answer the following questions:
a. List your references.
b. List the advantages of ordered control.
c. List the disadvantages of ordered control.
d. List the advantages of non-ordered control.
e. List the disadvantages of non-ordered control.

Answers

1.  This group exercise and case study were designed for real-time study. The information listed below may change over time, so it is recommended that this research be conducted by each student of this course.

Ordered control vs. independent control was not an issue at the time, just as cut-through switches vs. store-and-forward switches were not issues. The technology has advanced significantly since the start of MPLS. Currently, several manufacturers offer independent control for start-up and ordered control after the network links are established.

2. a. http://cell-relay.indiana.edu/mhonarc/mpls/2000-Jan/msg00144.html

www.cis.ohio-state.edu/~jain/talks/ftp/mpls_te/sld018.htm

http://rfc-3353.rfc-index.net/rfc-3353-23.htm

https://dooka.canet4.net/c3_irr/mpls/sld034.htm

http://course.ie.cuhk.edu.hk/~ine3010/lectures/3010_7.ppt

b. Traffic engineering
c. Slower set-up speed
d. Faster set-up speed
e. Loss of traffic engineering

Exercise 2.2: Label Distribution

In this exercise, you will work with RFC 3036 and translate LDP messages. Use RFC 3036 to find the correct answers.

1.  Match the correct message type number in hex to the correct title of the message by recording the correct message number in the space provided. Message type numbers available for selection are:

0001, 0100, 0200, 0201, 0300, 0301, 0400, 0401, 0402, 0403, 0404

____  Address Message

____  Address Withdraw

____  Hello

____  Initialization

____  Keep Alive

____  Label Abort Request

____  Label Release

____  Label Request

____  Label Withdraw Message

____  Labels (Series)

____  Notification

2.  Type Length Values (TLVs) are a subset of LDP messages. Match the correct TLV number in hex to the correct title by recording the correct number in the space provided. TLV numbers available for selection are:

0101, 0103, 0104, 0201, 0202, 0300, 0400

____  ADDRESS LIST

____  ATM

____  FRAME RELAY

____  HOP COUNT

____  KEEP ALIVE

____  PATH VECTOR

____  STATUS

3.  In the hello message in Figure 2.19, fill in the message type number and the TLV number.

Figure 2.19: Hello Message for Exercise 2.2

Answers

1.  See Section 3.7 of RFC 3036.

0300  Address Message

0301  Address Withdraw

0100  Hello

0200  Initialization

0201  Keep Alive

0404  Label Abort Request

0403  Label Release

0401  Label Request

0402  Label Withdraw Message

0400  Labels (Series)

0001  Notification

2.  Please refer to the tables in the following answer explanation.

0101  ADDRESS LIST

0201  ATM

0202  FRAME RELAY

0103  HOP COUNT

0400  KEEP ALIVE

0104  PATH VECTOR

0300  STATUS

The following tables are reference sheets from RFC 3036.

Message Name Type Section Title

Notification 0x0001 Notification Message

Hello 0x0100 Hello Message

Initialization 0x0200 Initialization Message

KeepAlive 0x0201 KeepAlive Message

Address 0x0300 Address Message

Address Withdraw 0x0301 Address Withdraw Message

Label Mapping 0x0400 Label Mapping Message

Label Request 0x0401 Label Request Message

Label Withdraw 0x0402 Label Withdraw Message

Label Release 0x0403 Label Release Message

Label Abort Request 0x0404 Label Abort Request Message

Vendor-Private 0x3E00-0x3EFF LDP Vendor-private Extensions

Experimental 0x3F00-0X3FFF LDP Experimental Extensions
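The message-type values above can be captured in a lookup table for quick reference when decoding LDP traces. This is a study aid built from the RFC 3036 table, not a protocol implementation:

```python
# LDP message types from RFC 3036 (hex value -> message name).
LDP_MESSAGE_TYPES = {
    0x0001: "Notification",
    0x0100: "Hello",
    0x0200: "Initialization",
    0x0201: "KeepAlive",
    0x0300: "Address",
    0x0301: "Address Withdraw",
    0x0400: "Label Mapping",
    0x0401: "Label Request",
    0x0402: "Label Withdraw",
    0x0403: "Label Release",
    0x0404: "Label Abort Request",
}

def message_name(msg_type: int) -> str:
    """Name a message type; ranges 0x3E00-0x3FFF are vendor/experimental."""
    return LDP_MESSAGE_TYPES.get(msg_type, "Unknown/Vendor-Private")
```

Looking up 0x0100, for example, confirms the Hello answer in question 1.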

3. TLV Summary

TLV Type Section Title

FEC 0x0100 FEC TLV

Address List 0x0101 Address List TLV

Hop Count 0x0103 Hop Count TLV

Path Vector 0x0104 Path Vector TLV

Generic Label 0x0200 Generic Label TLV

ATM Label 0x0201 ATM Label TLV

Frame Relay Label 0x0202 Frame Relay Label TLV

Status 0x0300 Status TLV

Extended Status 0x0301 Notification Message

Returned PDU 0x0302 Notification Message

Returned Message 0x0303 Notification Message

Common Hello 0x0400 Hello Message

IPv4 Transport Address 0x0401 Hello Message

Configuration 0x0402 Hello Message

IPv6 Transport Address 0x0403 Hello Message

Common Session 0x0500 Initialization Message

ATM Session Parameters 0x0501 Initialization Message

Frame Relay Session 0x0502 Initialization Message

Label Request Message ID 0x0600 Label Abort Request Message

Vendor-Private 0x3E00-0X3EFF LDP Vendor-private Extensions

Chapter 3

Exercise 3.1: Decode the RSVP-TE Message

There are several ways to complete this lab. The exercise itself is written in standalone form so that you do not need any products to complete the exercises.

RSVP Path Request

In this portion of the lab, you will review RSVP-TE and look for the path and the label request.

1.  Look at Frame 3 in Figure 3.25.

Figure 3.25: RSVP Overview

2.  Find and highlight the strict routing path in Figure 3.26.

Figure 3.26: RSVP Detail

3.  For what type of traffic is the label requested?

4.  What is the C-Type on the request?

Answers

1. 

2.  210.0.0.2

204.0.0.1

207.0.0.1

202.0.0.1

201.0.0.1

200.0.0.1

16.2.2.2

3.  19 Label Request Object

4.  1

RSVP Reservation Request

In this portion of the lab, you will review RSVP-TE and look for both the RSVP reservation type and the assigned label.

1. Look at Frame 4 in Figure 3.28 and in detail in Figure 3.29.

Chapter 5 MPLS Traffic Engineering

Exercise 5.1: Traffic Engineering

The customer has four classifications of traffic: real-time voice and video, VoIP, time-critical data traffic, and non-time-critical data traffic. The customer currently has three T-1 lines that run at 1.544 Mbps:

The first T-1 has the best SLA, with 9,000 hours MTBF and an MTTR of two minutes. The POC is .0001.

The second T-1 is the second grade of traffic, with 6,000 hours MTBF and an MTTR of 5 minutes. The POC is .001.

The third T-1 is the third grade of traffic, with 4,000 hours MTBF and an MTTR of 15 minutes. The POC is .01.

Note: Use the related traffic engineering RFCs, Erlang tables, and necessary reference documents found on the Web.

Note: These answers are calculated using the Erlang B and C calculators, which can be found at www.erlang.com/calculator/erlb

Part 1: Voice

The customer has 200 voice circuits with a mean call duration of 10 minutes and peak traffic of 35 simultaneous calls. They require a POC of .001, with an MTBF of 7,000 hours and an MTTR of less than five minutes.

1.  How many voice channels are required?

Answers

1.  BHT = Average call duration (s) * Calls per hour / 3600. 10 minutes = 600 seconds; 600 * 35 / 3600 = 5.8 Erlangs.

Use the Erlang B calculator www.erlang.com/calculator/erlb/

For POC .01 to determine that 9 lines are needed

Because of the MTBF requirement, the first T-1 is required, so half of the T-1 would be needed for standard PCM coding.

Nine (9) voice channels are required.
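If the Web calculator is unavailable, the Erlang B blocking probability can be computed with the standard recursion. This is a sketch; depending on rounding and the calculator used, channel counts may differ slightly from the figures quoted in the answer:

```python
def erlang_b(traffic_erlangs: float, channels: int) -> float:
    """Erlang B blocking probability via the numerically stable recursion
    B(0) = 1; B(n) = A*B(n-1) / (n + A*B(n-1))."""
    b = 1.0
    for n in range(1, channels + 1):
        b = traffic_erlangs * b / (n + traffic_erlangs * b)
    return b

def channels_needed(traffic_erlangs: float, target_poc: float) -> int:
    """Smallest channel count whose blocking probability meets the target."""
    n = 1
    while erlang_b(traffic_erlangs, n) > target_poc:
        n += 1
    return n
```

As a sanity check, the textbook case of 2 Erlangs offered to 5 channels blocks about 3.7 percent of calls.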

Part 2: Video

Videoconferencing uses one voice channel for each video channel, so two channels are required for one videoconference. The company expects to need no more than 5 simultaneous videoconferences, with an average duration of 1.5 hours.

They require a POC of .01, with an MTBF of 700 hours and an MTTR of less than ten minutes.

1.  How many video channels are required?

2.  How much bandwidth is required?

Answers

1.  Videoconference = 1 channel for voice and 1 for video, so 2 channels are required for each conference.

Up to 5 conferences, so the calculation is 5 * 2 voice channels; that is, (5 * 2) * 64K, or 640K.

Only five (5) video channels are required; however, in this case we combine voice with video for a total of two (2) channels per videoconference, for a total of ten (10) channels.

2.  10 * 64K, or 640K

Part 3: Streaming Data

Streaming data is used for real-time stock market reports, news feeds, and some limited VoIP. The data streams consist of a data stream of 5Kbps for the stock market for 8.5 hours a day (business hours). The news feed is 15Kbps, and the average VoIP calls are 16K with no more than 10 VoIP calls connected simultaneously. The average call duration of the VoIP calls is 2 hours.

They require a POC of .01, with an MTBF of 6,000 hours and an MTTR of less than five minutes.

1.  How much bandwidth is required?

Answers

1.  5K data stream + 15K news + 160K VoIP = 180K

Part 4: Time-Critical Data

The company uses time-critical data for critical business information. They connect using SAP, Oracle, and PeopleSoft software in their daily business requirements.


The sessions are relatively short, with an average duration of 5 minutes. The average amount of data sent per session is 250K. This is user data and does not include overhead.

There are 2,000 user terminals, with an average call duration of 5 minutes. The average number of calls per terminal per operational hour is five. (Assume an eight-hour day.)

The company requires a POC of .001, with an MTBF of 7000 hours and an MTTR of less than 5 minutes.

1.  What is the required bandwidth?

Answers

1.  In order to solve this problem, several factors must be calculated. Assume the overhead is 20 percent; the 250K of user data per session then becomes 300K sent over 5 minutes, or 1K per second per active session.

There are 2,000 terminals at five (5) sessions per hour, or 10,000 sessions per hour.

Average call duration (s) × Calls per hour / 3,600 = 300 × 10,000 / 3,600 = 833 simultaneous sessions at 1K each, or 833K of bandwidth.
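The arithmetic above can be reproduced in a few lines of Python. This is only an illustration; the 20 percent overhead is the assumption stated in the answer, not a given of the problem:

```python
user_data_k = 250                 # user data per session, in K (given)
overhead = 0.20                   # assumed protocol overhead
session_k = user_data_k * (1 + overhead)   # 300K per session
duration_s = 5 * 60                        # 5-minute average session

terminals = 2000
sessions_per_hour = terminals * 5          # 10,000 sessions per hour

# Concurrent sessions, Erlang-style: duration (s) * arrivals per hour / 3,600
channels = duration_s * sessions_per_hour / 3600

# Each active session moves 300K over 300 s, i.e. 1K per second
rate_per_session_k = session_k / duration_s

bandwidth_k = channels * rate_per_session_k
print(f"{channels:.0f} channels at {rate_per_session_k:.0f}K = {bandwidth_k:.0f}K")
```

Running this reproduces the 833 channels at 1K from the answer above.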

Part 5: Non-time Critical Data

The company uses e-mail and Web browsers to conduct daily business. It has 14,000 employees, each with access to the Internet and e-mail accounts. This data is not necessarily time sensitive; however, it is critical to operations.

The average employee checks his or her e-mail four times a day for 10 minutes at a time.

The average employee surfs the Web five times a day for 5 minutes at a time, loading less than 150Kbps of data per hour per active connection.

The company requires a POC of .05, with an MTBF of 2,000 hours and an MTTR of less than 20 minutes.

1.  How many kilobytes of bandwidth are required to support the nontime-sensitive data?

Answers

1.  Assume that e-mails load rapidly and that most of the 10 minutes is spent by end users reading their e-mail. Since the message size is not stated, let's assume that each e-mail transfers 300K over one minute. The number of calls is four per day, or .5 per workday hour.

14,000 × .5 × 60 seconds / 3,600 = 1.9

Rounding up, let's allocate 2 channels of 300K for e-mail.

Surfing-the-Web time does not directly translate to network use: Web pages load within seconds, while users may spend several minutes reading a page.

Since HTTP use may create less of a load than e-mail, let's conservatively estimate 3 channels of 300K for HTTP.



We have calculated best-guess estimates; a better method would be to measure the bandwidth actually used for HTTP and e-mail.

Using this best-guess estimate, we should reserve 5 channels of 300K for e-mail and HTTP combined.

Part 6: Review Questions

Answer the following questions.

1.  What is the maximum bandwidth required to support all the customer’s requirements?

2.  How would you classify the traffic?

3.  Explain your provisioning and over-provisioning strategies for each classification of traffic.

4.  Describe how you would allocate the traffic to the different channels. Show your calculations.

Answers

1.  Voice (9 voice channels, rounded up to 10 channels): 640K

Videoconference traffic: 640K

Data streaming (5K data stream + 15K news feed + 160K VoIP): 180K

Real-time data: 833K

Non-real-time data (5 channels at 300K): 1,500K

2.  Each T-1 = 1.544 Mbps, or 1,544K.

1st T-1: Voice, videoconference, and data streaming (640K + 640K + 180K = 1,460K) fit well into the first T-1.

2nd T-1: Real-time data of 833K goes on the second T-1.

3rd T-1: The non-real-time data (1,500K) fits nicely into the third T-1.

3.  Services are allocated to the T-1s based upon traffic analysis, as well as the MTBF and MTTR requirements.

4.  Shown in the previous answers.
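The allocation in answer 2 can be sanity-checked in code. Below is a minimal Python sketch of my own; the per-class bandwidth figures come from the answers above, and the text's round figure of 1,544K is used as the T-1 capacity:

```python
T1_CAPACITY_K = 1544  # T-1 line rate in K, as used in the text

# Traffic classes and their bandwidth in K, grouped by T-1 as in the answer
t1_assignments = {
    "1st T-1": {"voice": 640, "videoconference": 640, "streaming": 180},
    "2nd T-1": {"real-time data": 833},
    "3rd T-1": {"non-real-time data": 1500},  # 5 channels * 300K
}

for t1, services in t1_assignments.items():
    load = sum(services.values())
    assert load <= T1_CAPACITY_K, f"{t1} oversubscribed: {load}K"
    print(f"{t1}: {load}K of {T1_CAPACITY_K}K used")
```

Running the check confirms that each T-1 stays within capacity, with the first T-1 the most heavily loaded at 1,460K.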


Chapter 7 Virtual Private Networks and MPLS

Exercise 7.1: Case Study 1

In this case study, your team has been asked to explore the MPLS VPN options for a point of sales service.

The network structure is shown in Figure 7.39. Note that the VSAT terminals use a 64K (slotted Aloha) upstream link; they receive 256K of data streamed down from headquarters. This network is a hub-and-spoke design. The network uses an SNA polled protocol, equipped with protocol spoofing to reduce polling traffic over the satellite link.

Figure 7.39: Case Study 1

The satellite lease comes up for renewal in 60 days, with a 60 percent increase in service fees. The customer is looking at an MPLS VPN as a cost-saving alternative service.

Please answer the following questions.

1.  Is MPLS VPN service appropriate, and if so, why?

2.  Which MPLS VPN service is recommended?

3.  How do you justify your recommendation?

Answers


1.  MPLS VPN is best suited for fully meshed networks, and this network is hub-and-spoke; a cost analysis would determine whether MPLS could produce a savings.

2.  None

3.  No recommendations
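The observation in answer 1 that MPLS VPNs pay off for fully meshed networks is easy to quantify. A full mesh of n sites needs n(n−1)/2 point-to-point circuits, while an MPLS VPN gives any-to-any connectivity with a single attachment circuit per site. The following Python sketch (my own illustration of this standard scaling argument) compares the two:

```python
def full_mesh_circuits(sites):
    """Point-to-point circuits needed for any-to-any connectivity: n(n-1)/2."""
    return sites * (sites - 1) // 2

def mpls_vpn_circuits(sites):
    """With an MPLS VPN, each site needs only one attachment circuit."""
    return sites

# Circuit counts grow quadratically for a mesh, linearly for an MPLS VPN
for n in (5, 10, 20):
    print(f"{n} sites: full mesh {full_mesh_circuits(n)}, "
          f"MPLS VPN {mpls_vpn_circuits(n)}")
```

For a hub-and-spoke design like Case Study 1, the spoke sites already need only one circuit each, which is why the cost case for MPLS is much weaker there than for a full mesh.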

Exercise 7.2: Case Study 2

Rag-Tag Books has asked you to assist them in converting from their full-mesh ATM network (shown in Figure 7.45) to an MPLS VPN. Each store has four connections. They are running a variety of ATM traffic, with uncommitted information rate comprising most of their traffic. However, QoS is required for the 10 percent of their traffic that runs the SAP database.

Figure 7.45: Case Study 2

1.  Is MPLS VPN service appropriate, and if so, why?

2.  Which MPLS VPN service is recommended?

3.  How do you justify your recommendation?

Answers

1.  Fully meshed networks could produce a cost justification for MPLS service.

2.  In order to recommend a VPN service, you would need to know whether the interfaces are Layer 2 or Layer 3. Additional information is needed.

3.  Further study is needed to make a VPN recommendation. No recommendation was made because the interfaces and requirements were not clearly defined.

