London Internet Exchange Point Update
Keith Mitchell, Executive Chairman
NANOG15 Meeting
Denver, Jan 1999
LINX Update
• LINX now has 63 members
• Second site now in use
• New Gigabit backbone in place
• Renumbered IXP LAN
• Some things we have learned!
• Statistics
• What’s coming in 1999
What is the LINX?
• UK National IXP
• Not-for-profit co-operative of ISPs
• Main aim to keep UK domestic Internet traffic in UK
• Increasingly keeping EU traffic in EU
LINX Status
• Established Oct 94 by 5 member ISPs
• Now has 7 FTE dedicated staff
• Sub-contracts co-location to 2 neutral sites in London Docklands:
  • Telehouse
  • TeleCity
• Traffic doubling every 4-6 months!
LINX Membership
• Now totals 63 (+10 since Oct 98)
• Recent UK members: RedNet, XTML, Mistral, ICLnet, Dialnet
• Recent non-UK members: Carrier 1, GTE, Above Net, Telecom Eireann, Level 3
LINX Members by Country
[Chart: UK 33, COM/US 14, DE 5, IE 3, SE 1, CA 1, FR 1, RU 1, DK 1, EU/CH 1]
Second Site
• Existing Telehouse site full until 99Q3 extension ready
• TeleCity is new dedicated co-lo facility, 3 miles from Telehouse
• Awarded LINX contract by open tender (8 submissions)
• LINX has 16-rack suite
• Space for 800 racks
Second Site
• LINX has diverse dark fibre between sites (5km)
• Same switch configuration as Telehouse site
• Will have machines to act as hot backups for the servers in Telehouse
• Will have a K.root server behind a transit router soon
LINX Traffic Issues
• Bottleneck was inter-switch link between Catalyst 5000s:
  • Cisco FDDI could no longer cope
  • 100baseT nearly full
• Needed to upgrade to Gigabit backbone within existing site 98Q3
Gigabit Switch Options
• Looked at 6 vendors: Cabletron/Digital, Cisco, Extreme, Foundry, Packet Engines, Plaintree
• Some highly cost-effective options available
• But needed non-blocking, modular, future-proof equipment, not workgroup boxes
Old LINX Infrastructure
• 5 Cisco switches: 2 x Catalyst 5000, 3 x Catalyst 1200
• 2 Plaintree switches: 2 x WaveSwitch 4800
• FDDI backbone
• Switched FDDI ports
• 10baseT & 100baseT ports
• Media converters for fibre ethernet (>100m)
Old LINX Topology
New Infrastructure
• Catalyst and Plaintree switches no longer in use
• Catalyst 5000s appeared to have broadcast scaling issues regardless of Supervisor Engine
• Plaintree switches had proven too unstable and unmanageable
• Catalyst 1200s at end of useful life
New Infrastructure
• Packet Engines PR-5200:
  • Chassis-based 16-slot switch
  • Non-blocking 52Gbps backplane
  • Used for our core, primary switches
  • One in Telehouse, one in TeleCity
  • Will need a second one in Telehouse within this quarter
  • Supports 1000LX, 1000SX, FDDI and 10/100 ethernet
New Infrastructure
• Packet Engines PR-1000:
  • Small version of PR-5200
  • 1U switch; 2x SX and 20x 10/100
  • Same chipset as 5200
• Extreme Summit 48:
  • Used for second connections
  • Gives vendor resiliency
  • Excellent edge switch: low cost per port
  • 2x Gigabit, 48x 10/100 ethernet
New Infrastructure
• Topology changes:
  • Aim to be able to have a major failure in one switch without affecting member connectivity
  • Aim to be able to have major failures on inter-switch links without affecting connectivity
  • Ensure that inter-switch connections are not bottlenecks
New backbone
• All primary inter-switch links are now gigabit
• New kit on order to ensure that all inter-switch links are gigabit
• Inter-switch traffic minimised by keeping all primary and all backup traffic on their own switches
IXP Switch Futures
• Vendor claims of proprietary 1000base optics with 50km+ range are interesting
• Need abuse prevention tools: port filtering, RMON
• Need traffic control tools: member/member bandwidth limiting and measurement (see sketch below)
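The measurement side of such a tool could be as simple as polling per-port octet counters and converting deltas into rates. The sketch below is only an illustration of that idea, not LINX tooling; the rate_mbps helper, the sample counter values and the 30-second polling interval are all assumptions.

```python
# Illustrative sketch: derive a per-member traffic rate from two SNMP-style
# octet counter samples, as a simple traffic-measurement tool might.
# Counter values and the 30-second polling interval are hypothetical.

COUNTER_MAX = 2**32          # 32-bit ifInOctets/ifOutOctets wrap point

def rate_mbps(old_octets: int, new_octets: int, interval_s: float) -> float:
    """Return the average rate in Mbit/s between two counter samples,
    allowing for a single 32-bit counter wrap."""
    delta = new_octets - old_octets
    if delta < 0:                      # counter wrapped between samples
        delta += COUNTER_MAX
    return delta * 8 / interval_s / 1_000_000

# Example with made-up samples taken 30 seconds apart:
print(f"{rate_mbps(3_200_000_000, 150_000_000, 30.0):.1f} Mbit/s")
```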
Address Transition
• Old IXP LAN was 194.68.130/24
• New allocation 195.66.224/19
• New IXP LAN 195.66.224/23
• “Striped” allocation on new LAN:
  • 2 addresses per member, same last octet (see sketch below)
• About 100 routers involved
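As a rough sketch of the striping described above, each member ends up with one address in each /24 half of 195.66.224.0/23, sharing the same last octet. Only the 195.66.224.0/23 prefix comes from the slides; the member_addresses helper and the example octet are hypothetical.

```python
# Sketch of the "striped" addressing scheme: one address in each /24 half
# of 195.66.224.0/23, sharing the same last octet per member.
# Member-to-octet assignments shown here are hypothetical examples.
import ipaddress

IXP_LAN = ipaddress.ip_network("195.66.224.0/23")
lower, upper = IXP_LAN.subnets(new_prefix=24)   # 195.66.224.0/24 and 195.66.225.0/24

def member_addresses(last_octet: int):
    """Return a member's two striped addresses for a given last octet."""
    return (lower.network_address + last_octet,
            upper.network_address + last_octet)

a, b = member_addresses(42)
print(a, b)   # 195.66.224.42 195.66.225.42
```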
Address Migration Plan
• Configured new address(es) as secondaries
• Brought up peerings with their new addresses
• When all peers are peering on new addresses, stopped old peerings
• Swapped the secondary over to become the primary IP address
Address Migration Plan
• Collector dropped peerings with old 194.68.130.0/24 addresses
• Anyone not migrated at this stage lost direct peering with AS5459
• Eventually, old addresses no longer in use
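The gating step in the plan above (old-address peerings are only torn down once every peer is established on its new address) could be expressed as below. This is a minimal sketch over a hypothetical table of per-member session state; none of the member names are real.

```python
# Sketch of the migration gating rule: keep the old-address peerings until
# every peer has an established session on its new 195.66.224/23 address.
# The peer list and session states below are hypothetical.

new_session_up = {
    "member-a": True,
    "member-b": True,
    "member-c": False,   # still peering only on its old 194.68.130/24 address
}

if all(new_session_up.values()):
    print("All peers up on new addresses - safe to stop old peerings")
else:
    laggards = [m for m, up in new_session_up.items() if not up]
    print("Keep old peerings; still waiting for:", ", ".join(laggards))
```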
What we have learned
• ... the hard way!
• Problems after renumbering:
  • Some routers still using /24 netmask
  • Some members treating the /23 network as two /24s (see example below)
  • Big problem if proxy ARP is involved!
• Broadcast traffic bad for health:
  • We have seen >50 ARP requests per second at worst times
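The /24-netmask problem is easy to see: to a router configured with a /24 mask, a peer in the other half of the /23 looks off-link, so reaching it needs a route or proxy ARP. A minimal check using Python's ipaddress module; the peer address chosen is arbitrary within the LINX range.

```python
# Why a /24 netmask breaks on the /23 peering LAN: a peer in the other /24
# half looks off-link to a misconfigured router. The host address is arbitrary.
import ipaddress

peer = ipaddress.ip_address("195.66.225.10")

correct_mask = ipaddress.ip_network("195.66.224.0/23")
wrong_mask   = ipaddress.ip_network("195.66.224.0/24")

print(peer in correct_mask)  # True  - peer is on-link with the correct /23 mask
print(peer in wrong_mask)    # False - looks off-link, so traffic to it needs
                             # a route or proxy ARP from another router
```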
ARP Scaling Issues
• Renumbering led to lots of ARP requests for unused IP addresses
• ARP no-reply retransmit timer has a fixed time-out
• Maintenance work led to groups of routers going down/up together
• Result: synchronised “waves” of ARP requests (see sketch below)
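The “waves” follow from the fixed retransmit time-out: routers that come back up together and ARP for the same unused addresses retransmit in lock-step, so the broadcasts arrive in bursts. A toy illustration, with every count and interval assumed rather than measured:

```python
# Toy illustration of synchronised ARP "waves": routers brought back up by the
# same maintenance work all retransmit unanswered ARP requests on the same
# fixed time-out, so the broadcasts land in lock-step bursts rather than
# spreading out. All counts and timings below are assumptions.
from collections import Counter

ROUTERS = 20        # routers that went down/up together (hypothetical)
DEAD_ADDRS = 5      # unused addresses each router keeps ARPing for
RETRY_S = 1         # fixed no-reply retransmit time-out, in seconds
DURATION_S = 5

bursts = Counter()
for retry in range(DURATION_S // RETRY_S):
    # Every router retries every unanswered address at the same instant,
    # because nothing desynchronises the fixed timers.
    bursts[retry * RETRY_S] += ROUTERS * DEAD_ADDRS

for t, n in sorted(bursts.items()):
    print(f"t={t}s: {n} broadcast ARP requests at once")
```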
New MoU Prohibitions
• Proxy ARP
• ICMP redirects
• Directed broadcasts
• Spanning Tree
• IGP broadcasts
• All non-ARP MAC layer broadcasts
Statistics
• LINX total traffic: 300 Mbit/sec avg, 405 Mbit/sec peak
• Routing table: 9,200 out of 55,000 routes
• k.root-servers: 2.2 Mbit/sec out, 640 Kbit/sec in
• nic.uk: 150 Kbit/sec out, 60 Kbit/sec in
Statistics and looking glass at http://www2.linx.net/
Things planned for ‘99
• Infrastructure spanning tree implementation
• Completion of Stratum-1 NTP server
• Work on an ARP server
• Implementation of route server
• Implementation of RIPE NCC test traffic box