Transcript
Page 1: London Internet Exchange Point Update

London Internet Exchange Point Update

Keith Mitchell, Executive Chairman

NANOG15 Meeting

Denver, Jan 1999

Page 2

LINX Update

• LINX now has 63 members

• Second site now in use

• New Gigabit backbone in place

• Renumbered IXP LAN

• Some things we have learned!

• Statistics

• What’s coming in 1999

Page 3

What is the LINX?

• UK National IXP

• Not-for-profit co-operative of ISPs

• Main aim to keep UK domestic Internet traffic in UK

• Increasingly keeping EU traffic in EU

Page 4

LINX Status

• Established Oct 94 by 5 member ISPs

• Now has 7 FTE dedicated staff

• Sub-contracts co-location to 2 neutral sites in London Docklands: Telehouse and TeleCity

• Traffic doubling every 4-6 months !
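As a rough sanity check (the 4-6 month figure is from the slide; the arithmetic is ours), a doubling period of d months implies a growth factor of 2^(12/d) per year:

```python
# Implied annual growth factor when traffic doubles every `d` months.
# Illustrative arithmetic only; the 4-6 month range comes from the slide.
def annual_growth(doubling_months: float) -> float:
    return 2 ** (12 / doubling_months)

print(annual_growth(6))  # 4.0 -> roughly 4x traffic per year
print(annual_growth(4))  # 8.0 -> roughly 8x traffic per year
```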

Page 5

LINX membership

• Now totals 63 (+10 since Oct 98)

• Recent UK members: RedNet, XTML, Mistral, ICLnet, Dialnet

• Recent non-UK members: Carrier 1, GTE, AboveNet, Telecom Eireann, Level 3

Page 6

LINX Members by Country

[Chart: membership by country — UK 33, COM/US 14, with smaller counts for DE, IE, SE, CA, FR, RU, DK and EU/CH]

Page 7

Second Site

• Existing Telehouse site full until 99Q3 extension ready

• TeleCity is new dedicated co-lo facility, 3 miles from Telehouse

• Awarded LINX contract by open tender (8 submissions)

• LINX has 16-rack suite

• Space for 800 racks

Page 8

Second Site

• LINX has diverse dark fibre between sites (5km)

• Same switch configuration as Telehouse site

• Will have machines to act as hot backups for the servers in Telehouse

• Will have a K.root server behind a transit router soon

Page 9

LINX Traffic Issues

• Bottleneck was the inter-switch link between Catalyst 5000s

• Cisco FDDI could no longer cope

• 100baseT nearly full

• Needed to upgrade to Gigabit backbone within existing site 98Q3

Page 10

Gigabit Switch Options

• Looked at 6 vendors: Cabletron/Digital, Cisco, Extreme, Foundry, Packet Engines, Plaintree

• Some highly cost-effective options available

• But needed non-blocking, modular, future-proof equipment, not workgroup boxes

Page 11

Old LINX Infrastructure

• 5 Cisco switches: 2 x Catalyst 5000, 3 x Catalyst 1200

• 2 Plaintree switches: 2 x WaveSwitch 4800

• FDDI backbone with switched FDDI ports

• 10baseT & 100baseT ports

• Media convertors for fibre ethernet (>100m)

Page 12

Old LINX Topology

Page 13

New Infrastructure

• Catalyst and Plaintree switches no longer in use

• Catalyst 5000s appeared to have broadcast scaling issues regardless of Supervisor Engine

• Plaintree switches had proven too unstable and unmanageable

• Catalyst 1200s at end of useful life

Page 14

Page 15

New Infrastructure

• Packet Engines PR-5200:

• Chassis-based 16-slot switch

• Non-blocking 52Gbps backplane

• Used for our core, primary switches

• One in Telehouse, one in TeleCity

• Will need a second one in Telehouse within this quarter

• Supports 1000LX, 1000SX, FDDI and 10/100 ethernet

Page 16

New Infrastructure

• Packet Engines PR-1000:

• Small version of the PR-5200

• 1U switch; 2x SX and 20x 10/100

• Same chipset as the 5200

• Extreme Summit 48:

• Used for second connections

• Gives vendor resiliency

• Excellent edge switch: low cost per port

• 2x Gigabit, 48x 10/100 ethernet

Page 17

New Infrastructure

• Topology changes:

• Aim to survive a major failure in one switch without affecting member connectivity

• Aim to survive major failures on inter-switch links without affecting connectivity

• Ensure that inter-switch connections are not bottlenecks

Page 18

New backbone

• All primary inter-switch links are now gigabit

• New kit on order to ensure that all inter-switch links are gigabit

• Inter-switch traffic minimised by keeping all primary and all backup traffic on their own switches

Page 19

IXP Switch Futures

• Vendor claims of proprietary 1000base optics with 50km+ range are interesting

• Need abuse prevention tools: port filtering, RMON

• Need traffic control tools: member-to-member bandwidth limiting and measurement

Page 20

Address Transition

• Old IXP LAN was 194.68.130/24

• New allocation 195.66.224/19

• New IXP LAN 195.66.224/23

• “Striped” allocation on new LAN: 2 addresses per member, same last octet

• About 100 routers involved
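A minimal sketch of the striping with Python's ipaddress module. The choice of which last octet a given member gets is hypothetical; the slide only says that each member's two addresses share the same last octet, one in each /24 half of the /23:

```python
import ipaddress

# The new IXP LAN from the slide.
IXP_LAN = ipaddress.ip_network("195.66.224.0/23")

def member_addresses(last_octet: int) -> tuple:
    """Striped pair: the same last octet in each /24 half of the /23."""
    pair = (ipaddress.ip_address(f"195.66.224.{last_octet}"),
            ipaddress.ip_address(f"195.66.225.{last_octet}"))
    assert all(a in IXP_LAN for a in pair)  # both sit on the one IXP LAN
    return tuple(str(a) for a in pair)

print(member_addresses(7))  # ('195.66.224.7', '195.66.225.7')
```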

Page 21

Address Migration Plan

• Configured new address(es) as secondaries

• Brought up peerings with the new addresses

• When all peers were peering on new addresses, stopped old peerings

• Swapped over the secondary to the primary IP address

Page 22

Address Migration Plan

• Collector dropped peerings with old 194.68.130.0/24 addresses

• Anyone not migrated at this stage lost direct peering with AS5459

• Eventually, old addresses no longer in use

Page 23

What we have learned

• ... the hard way!

• Problems after renumbering:

• Some routers still using a /24 netmask

• Some members treating the /23 network as two /24s

• Big problem if proxy ARP is involved!

• Broadcast traffic bad for health

• We have seen >50 ARP requests per second at worst times
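The /23-as-two-/24s mistake is easy to reproduce with Python's ipaddress module (the two host addresses here are illustrative picks from the new LAN):

```python
import ipaddress

# Two routers on the new /23 IXP LAN, one in each /24 half.
a = ipaddress.ip_address("195.66.224.10")
b = ipaddress.ip_address("195.66.225.10")

correct = ipaddress.ip_network("195.66.224.0/23")  # what members should use
stale = ipaddress.ip_network("195.66.224.0/24")    # old-style /24 netmask

print(b in correct)  # True  -> b is on-link, reachable by ARPing directly
print(b in stale)    # False -> a /24-masked router thinks b is off-link
# ...and falls back on routing (or proxy ARP, where the trouble starts)
```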

Page 24

ARP Scaling Issues

• Renumbering led to lots of ARP requests for unused IP addresses

• ARP retransmit timer uses a fixed time-out when no reply arrives

• Maintenance work led to groups of routers going down/up together, causing synchronised “waves” of ARP requests
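The synchronisation effect can be sketched with a toy simulation (all numbers here are hypothetical, not LINX measurements): routers that restart together and retransmit on a fixed timer keep colliding in the same seconds, while any per-router stagger spreads the same load out.

```python
from collections import Counter

# Toy model: N routers ARP for unreachable addresses, retransmitting on
# a fixed RETRY-second timer with no jitter, over a DURATION-second window.
N, RETRY, DURATION = 50, 5, 60

def request_times(start: int) -> list:
    return list(range(start, DURATION, RETRY))

# All routers restart together: every retransmission lands in the same
# seconds, so the worst second carries all N requests at once.
synced = Counter(t for _ in range(N) for t in request_times(0))

# Staggered (jittered) starts spread the identical total load out.
spread = Counter(t for i in range(N) for t in request_times(i % RETRY))

print(max(synced.values()), max(spread.values()))  # 50 10
```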

Page 25

New MoU Prohibitions

• Proxy ARP

• ICMP redirects

• Directed broadcasts

• Spanning Tree

• IGP broadcasts

• All non-ARP MAC layer broadcasts

Page 26

Statistics

• LINX total traffic: 300 Mbit/sec avg, 405 Mbit/sec peak

• Routing table: 9,200 out of 55,000 routes

• k.root-servers: 2.2 Mbit/sec out, 640 Kbit/sec in

• nic.uk: 150 Kbit/sec out, 60 Kbit/sec in

Page 27

Statistics and looking glass at http://www2.linx.net/

Page 28

Things planned for ’99

• Infrastructure spanning tree implementation

• Completion of Stratum-1 NTP server

• Work on an ARP server

• Implementation of route server

• Implementation of RIPE NCC test traffic box

