
IEICE TRANS. COMMUN., VOL.E83–B, NO.7 JULY 2000

    PAPER

OPTIMA: Scalable, Multi-Stage, 640-Gbit/s ATM Switching System Based on Advanced Electronic and Optical WDM Technologies

Naoaki YAMANAKA†, Eiji OKI†, Seisho YASUKAWA†, Ryusuke KAWANO†, and Katsuhiko OKAZAKI†, Regular Members

SUMMARY An experimental 640-Gbit/s ATM switching system is described. The switching system is scalable and quasi-non-blocking and uses hardware self-rearrangement in a three-stage network. Hardware implementation results for the switching system are presented. The switching system is fabricated using advanced 0.25-µm CMOS devices, high-density multi-chip-module (MCM) technology, and optical wavelength-division-multiplexing (WDM) interconnection technology. A scalable 80-Gbit/s switching module is fabricated in combination with a developed scalable-distributed-arbitration technique, and a WDM interconnection system that connects multiple 80-Gbit/s switching modules is developed. Using these components, an experimental 640-Gbit/s switching system is partially constructed. The 640-Gbit/s switching system will be applied to future broadband ATM networks.
key words: ATM, switch, multi-chip module, WDM, interconnection

    1. Introduction

Recent research and development of ATM switching systems has been intensive, and commercial-level products have been put on the market. At present, ATM switching systems are mostly applied to high-speed data communication services, but services such as interactive shopping, high-definition television (HDTV), and remote medical diagnosis have been tested in many places. For these services, an ATM switching system offering throughput of several tens of gigabits per second has been developed [1]. However, its operating speed must be increased to support future traffic demand. Toward multimedia very-high-speed networks, many institutes are aiming at ATM switching systems with throughput of 160 Gbit/s or more [2]–[5]. Fortunately, VLSI technologies have dramatically improved, so the next advance needed is an innovative packaging technology that can yield devices capable of switching very-high-speed signals at speeds of the order of gigabits per second and that are cost effective [2]. The latest ATM system employs advanced VLSIs and multi-chip modules (MCMs) [3]–[5], with MCMs being one of the keys to making new switching systems [6].

Manuscript received August 26, 1999.
Manuscript revised January 28, 2000.

†The authors are with NTT Network Service Systems Laboratories, Musashino-shi, 180-8585 Japan.

To make a Tbit/s-class ATM switching system, expandability or scalability is also a very important characteristic; a large switch is constructed by adding identical unit switches [7]–[9]. In addition, non-blocking performance is essential. More precisely, software cannot manage every connection, because ATM does not have a deterministic bandwidth. Users can send cells dynamically, so conventional circuit-switched designs are not suitable.

To meet these requirements, we have proposed a quasi-non-blocking, optically interconnected, distributed multi-stage Tbit/s ATM switching network architecture, called OPTIMA [10]. It is based on highly statistical wavelength-division-multiplexed (WDM) optical links and self-rearranging quasi-non-blocking switches. The switching system is scalable, and the non-blocking characteristic is achieved without the use of complicated software control and management; instead, it uses hardware self-rearrangement.

This paper describes an experimental 640-Gbit/s OPTIMA switching system and presents hardware implementation results. The system uses a large-scale, easily expandable ATM switch LSI based on advanced 0.25-µm CMOS devices, high-density MCM packaging technology, and optical wavelength-routing interconnection technology.

    Fig. 1 Operating-speed limit and breakthrough technologies.


    Fig. 2 Logical structure of OPTIMA.

    2. OPTIMA: Tbit/s ATM Switch Architecture

The logical structure of OPTIMA is shown in Fig. 2. OPTIMA has two basic features.

• The route of each virtual connection is randomly assigned to a second-stage switch. This makes the traffic distribution across the second-stage switches more even.

• Self-rearrangement is triggered by excessive second-stage switch traffic loads. All rearrangement is achieved by hardware rather than by software.

The first-stage switches distribute all virtual-connection routes randomly; this roughly equalizes the loads of the second-stage switches. Because highly statistical large-capacity links are used, the loads of the second-stage switches are balanced. For example, for traffic with a peak rate of 50 Mbit/s, the probability of overload is less than 10^-4 for 10-Gbit/s links at a load of 0.9 [10]. To prevent overloading, OPTIMA uses a self-rearrangement function [11]. Overloading is detected by observing the second-stage buffers, as shown in Fig. 2. The traffic observation is based on a recursive low-pass filter technique [12]. If overloading starts to occur, the self-rearrangement function of OPTIMA is triggered and a back-pressure signal is sent over electrical wires to force a rearrangement of the connections from the first-stage switch. To ensure the cell-sequence integrity of ATM connections, the first stage has a stop-queue mechanism [10]. Since dedicated electrical wires are used for the feedback signals, the feedback delay depends only on the speed of the wired logic. Thus, self-rearrangement is achieved entirely in hardware.
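As an illustration of this hardware trigger, the following is a minimal behavioral sketch, assuming an exponentially weighted (first-order recursive) low-pass filter and an arbitrary threshold; the coefficient, threshold, and all names are illustrative assumptions, not values from [12].

```python
# Minimal behavioral sketch of second-stage overload detection (illustrative).
# ALPHA and OVERLOAD_THRESHOLD are assumed values, not figures from the paper.

ALPHA = 0.01               # smoothing coefficient of the recursive low-pass filter
OVERLOAD_THRESHOLD = 0.9   # smoothed load at which back-pressure is asserted

class SecondStageMonitor:
    """Observes one second-stage buffer and raises a back-pressure signal."""

    def __init__(self):
        self.smoothed_load = 0.0

    def observe(self, instantaneous_load: float) -> bool:
        # First-order recursive (IIR) low-pass filter over the measured load.
        self.smoothed_load += ALPHA * (instantaneous_load - self.smoothed_load)
        # Back-pressure is asserted while the smoothed load indicates overload;
        # the first-stage switch then rearranges the affected connections.
        return self.smoothed_load > OVERLOAD_THRESHOLD

monitor = SecondStageMonitor()
for t in range(2000):
    if monitor.observe(0.95):   # sustained offered load above the threshold
        print(f"back-pressure asserted at sample {t}")
        break
```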

    Fig. 3 Physical implementation of OPTIMA.

3. Implementation of 640-Gbit/s ATM Switching System

    3.1 Target

VLSI technology continues to advance every year, and switching-system throughput may continue to rise step by step. However, there are some strong limitations. As can be seen in Fig. 1, the limiting factors in conventional 160-Gbit/s electrical switching throughput are cooling and interconnection [2]. Advanced VLSI technologies, such as 0.25-µm CMOS, can integrate more than two million gates on one chip and can operate up to the Gbit/s level. Switching chips consume about 10 W each. The resulting heat load limits us to one chip if we use standard air cooling. Figure 1 illustrates these limits; some sophisticated techniques can be used to offset them. The electrical interconnection limits are mainly caused by loss, crosstalk, and reflection. The cooling limit, on the other hand, is a combination of cooling capacity and the power-supply mechanism. Optical interconnection and a distributed architecture are among the breakthroughs needed to overcome these limitations. In addition, sophisticated electrical systems are needed to achieve high-speed data interconnection in small areas. Obviously, the power-density level will continue to increase. Another limitation is the use of parallel electrical buses for interconnection. Such buses are not space efficient and do not support further increases in the physical integration level because of their high crosstalk levels.

    3.2 System Configuration

An experimental 640-Gbit/s OPTIMA switching system was partially fabricated [13], [14]. A schematic block diagram of this system is shown in Fig. 3. It consists of 80-Gbit/s ATM switching modules and optical WDM interconnection systems.

The 80-Gbit/s ATM switching modules use a 40-layer ceramic substrate and 0.25-µm advanced CMOS technology [16]. In addition, closed-loop-type liquid cooling is used. For the optical WDM interconnection of the switching modules, a previously developed compact optical WDM transmitter/receiver module is used with planar light-wave circuits (PLCs). The 80-Gbit/s ATM switching modules are mounted on the motherboard and interconnected by the proposed optical WDM three-stage network. These technologies are described later.

    3.3 Scalable 80-Gbit/s ATM Switching Module

The switching module for this OPTIMA system must not only be compact but also be scalable itself, so as to allow future expansion of the 640-Gbit/s OPTIMA switching system.

We fabricated the 80-Gbit/s ATM switching module using a previously developed distributed arbitration scheme in order to achieve scalability [17]. This subsection describes our arbitration technique and the switch LSI and also presents implementation results for the switching module.

    3.3.1 Scalable-Distributed-Arbitration Technique

To make an expandable switching module, a crosspoint-LSI-type switch architecture is an appropriate choice [15]. Switch LSIs are arranged on a matrix plane. A conventional crosspoint-LSI-type switch uses ring arbitration among switch LSIs. However, this causes problems when the number of switch LSIs N in each row increases and the output lines are fast. As the output-line speed increases, the cell time decreases. In a crosspoint-LSI-type switch having a large number of row switch LSIs, ring arbitration among the switch LSIs belonging to the same output port cannot be completed within the short cell time. In conventional switches based on ring arbitration, therefore, the arbitration time limits the output-line speed, because ring arbitration must be completed within the cell time [2]. A new arbitration technique is thus needed to achieve a scalable high-speed switching system.
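To put this timing constraint in concrete terms (a rough illustration assuming a standard 53-byte ATM cell and ignoring any internal overhead bits, which the paper does not specify), the cell time on a 10-Gbit/s output line is

\[ T_{\mathrm{cell}} = \frac{53 \times 8\ \text{bits}}{10\ \text{Gbit/s}} \approx 42\ \text{ns}, \]

so ring arbitration would have to traverse all N switch LSIs of a row within roughly 42 ns, whereas SDA, described next, only requires the control signal to cross a single inter-LSI hop in that time.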

We have developed such an arbitration scheme, called Scalable Distributed Arbitration (SDA) [17]. Figure 4 shows its structure. It includes an output buffer, a transit buffer, an arbitration-control part (CNTL), and a selector at every output port in the switch LSI. Note that each output buffer of the switch LSI can be regarded as a crosspoint buffer in the switching system.

An output buffer sends a request (REQ) to CNTL if it contains at least one cell. A transit buffer stores several cells that are sent from either the output buffer of an upper switch LSI or the transit buffer of an upper switch LSI. The transit-buffer size may be one cell or a few cells. The transit buffer sends REQ to CNTL, like the output buffer, if it contains at least one cell. If the transit buffer is about to become full, it sends a not-acknowledgment (NACK) to the upper CNTL.

Fig. 4 Scalable-distributed-arbitration (SDA) mechanism among switch LSIs.

If there are any REQs and CNTL does not receive NACK from the next lower transit buffer, then CNTL selects a cell within one cell time. CNTL determines which cell should be sent according to the following cell-selection rule. The selected cell is sent through a selector to the next lower transit buffer or to the output line.

The rule selects a cell as follows. If only one of the output buffer and the transit buffer requests a cell release, the cell in the requesting buffer is selected. If both the output buffer and the transit buffer request cell releases, the cell with the larger delay time is selected. The delay time is defined as the time since the cell entered the output buffer.

Thus, SDA achieves distributed arbitration at each switch LSI. The longest distance the arbitration control signal must travel within one cell time is obviously the distance between two adjacent LSIs. In the conventional switch, by contrast, the control signal for ring arbitration must pass through all the switch LSIs belonging to the same output line. For this reason, the arbitration time of SDA does not depend on the number of LSIs.
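The per-port control logic can be summarized by the behavioral sketch below, a simplified single-cell-time view; buffer sizes, method names, and the per-cell timestamp representation are illustrative assumptions, while the REQ/NACK handshake and the oldest-cell selection follow the rule above.

```python
# Behavioral sketch of one SDA arbitration-control part (CNTL) per output port.
# Illustrative only: buffer sizes, names, and timestamps are assumptions.
from collections import deque

class SdaPort:
    """Output buffer + transit buffer + CNTL of one output port in a switch LSI."""

    def __init__(self, transit_size: int = 2):
        self.output_buffer = deque()   # (cell, time it entered the output buffer)
        self.transit_buffer = deque()  # cells handed down from the upper switch LSI
        self.transit_size = transit_size

    def nack(self) -> bool:
        # Sent to the upper CNTL when the transit buffer is about to become full.
        return len(self.transit_buffer) >= self.transit_size - 1

    def arbitrate(self, downstream_nack: bool):
        """Select at most one cell per cell time; return it, or None if blocked."""
        req_out = bool(self.output_buffer)
        req_transit = bool(self.transit_buffer)
        if downstream_nack or not (req_out or req_transit):
            return None
        if req_out and req_transit:
            # Both buffers request a release: pick the cell with the larger delay,
            # i.e. the earlier entry time into its output buffer.
            source = (self.output_buffer
                      if self.output_buffer[0][1] <= self.transit_buffer[0][1]
                      else self.transit_buffer)
        else:
            source = self.output_buffer if req_out else self.transit_buffer
        return source.popleft()[0]
```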

    3.3.2 Fabricated ATM Switching Module

A schematic block diagram of the switching module is shown in Fig. 5. Eight switch LSIs and thirty-two MCM-interface LSIs are mounted on the MCM substrate.

Fig. 5 Schematic block diagram of scalable 80-Gbit/s ATM switching module.

Fig. 6 Switch LSI micrograph.

The switch LSIs were fabricated using 0.25-µm CMOS/SIMOX (Separation by IMplanted OXygen) devices [16]. SDA operation is implemented in the switch LSIs. Each switch LSI has a 4 × 2 switching function handling input and output line speeds of 10 Gbit/s. To reduce the number of high-speed I/O pins and achieve high throughput, the Gbit/s I/O interface is constructed with CMOS low-voltage-swing I/O circuits [18], [19]. A 10-Gbit/s cell stream is transmitted over 8 physical lines, each offering 1.25 Gbit/s. Figure 6 shows a microphotograph of a die with 288-kgate logic and a 209-kbit SRAM. The die size is 16.55 × 16.55 mm. The power consumption of the switch LSI is about 7 W.

Fig. 7 Tandem-crosspoint (TDXP) switching mechanism in switch LSI.

A 4 × 2 switching function is achieved using tandem-crosspoint (TDXP) switching, as shown in Fig. 7 [20]. The switch LSI consists of two crossbar-switch planes, which are connected in tandem at every crosspoint. Even if a cell cannot be transmitted to an output port on the first plane, it has a chance to be transmitted on the second plane. Cell transmission is executed on each switch plane in a pipelined manner. Therefore, two cells can be transmitted to the same output port within one cell time slot, even though the internal line speed of each switch plane is equal to the input/output line speed. Head-of-line blocking is effectively eliminated and high throughput is achieved. The detailed cell-transmission mechanism is presented in [20]. The fabricated switch LSI shown in Fig. 6 has a 16-cell buffer at each input port and a 128-cell buffer at each output port and achieves a cell loss ratio of 10^-9 under a heavy traffic load of 0.9.
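To illustrate the key property, that two cells can reach one output port in a single cell time, the following is a much simplified, non-pipelined sketch of the TDXP idea; the 4 × 2 size matches the switch LSI above, but everything else (head-of-line model, absence of crosspoint buffers) is an illustrative assumption rather than the mechanism of [20].

```python
# Simplified one-slot model of tandem-crosspoint (TDXP) switching (illustrative).
# Plane 1 grants at most one cell per output; cells that lose are offered to
# plane 2, which can grant one more cell per output, so up to two cells can
# reach the same output port in one cell time.

def tdxp_slot(hol_cells, num_outputs=2):
    """hol_cells: list of (input_id, dest_output) head-of-line cells.
    Returns the cells delivered in this slot (at most two per output)."""
    delivered = []
    grants = {out: 0 for out in range(num_outputs)}
    for plane in (1, 2):
        for cell in hol_cells:
            if cell in delivered:
                continue
            _, dest = cell
            if grants[dest] < plane:   # plane 1 allows one grant, plane 2 a second
                grants[dest] += 1
                delivered.append(cell)
    return delivered

# Four inputs all aiming at output 0: two of them get through in one slot.
print(tdxp_slot([(0, 0), (1, 0), (2, 0), (3, 0)]))   # -> [(0, 0), (1, 0)]
```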

    Fig. 8 80-Gbit/s switching module.

MCM-interface LSIs were fabricated as high-speed Si-bipolar devices using the super-self-aligned process technology SST [21]. The MCM-interface LSIs have serial-to-parallel and parallel-to-serial conversion functions to convert line speeds between 16 physical lines at 625 Mbit/s each and 8 physical lines at 1.25 Gbit/s each.

Figure 8 shows an 8 × 8 ATM switching module with 80-Gbit/s throughput based on 40-layer ceramic MCM technology. The 40-layer substrate consists of 7 signal layers and 33 power-supply layers [22]. This module is 114 × 160 mm. The MCM has several features: the internal signal-transmission line speed is 1.25 Gbit/s on the MCM and the interface speed is 625 Mbit/s; there are 829 high-speed signal I/Os; and there are many power-supply layers and 50-µm ceramic-sheet layers to reduce the impedance of the power-supply lines. 80-Gbit/s switching modules can be connected to each other by high-density FPC (flexible printed circuit) cables [23] to expand the switching capacity. Note that, in the OPTIMA 640-Gbit/s switching system, the fabricated 80-Gbit/s switching module is used as a unit switch, and that the switching module itself is scalable.

A cold plate is mounted on the back of the MCM and a compact liquid-cooling system, shown in Fig. 9, is used [22]. This system consists of cooling plates, hoses for the coolant, two valve connectors, a pump, and radiators. Each radiator has two cooling elements (thermal resistance 0.09 K/W), which are connected in parallel using high-performance heat pipes. These elements are forced-air cooled. The heat is transferred to the radiators through the cooling plates. The coolant, which is pressurized to 4–8 kg/cm^2 by the pump, is cooled by radiator 1 after passing the cooling plate of MCM 1. It continues on and passes the cooling plate of MCM 2, then returns to the pump. The MCM is connected to a printed-wiring board by stack-type connectors [24]. The cooling plate touches the rear side of the MCM. It is 150 × 95 × 8 mm.

Fig. 9 Compact liquid-cooling system.

We fixed a heater to the cooling plate and measured the plate's temperature at various flow rates. The thermal resistance fell as the flow rate rose, reaching 0.109 K/W at 420 W.
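As a rough sanity check (assuming the quoted 420 W is the applied heater power and using the usual definition of thermal resistance, neither of which the text states explicitly), the measured value implies a plate-to-coolant temperature rise of about

\[ \Delta T \approx R_{\mathrm{th}} \, P = 0.109\ \text{K/W} \times 420\ \text{W} \approx 46\ \text{K}. \]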

    3.4 Optical WDM Interconnection

The 640-Gbit/s switching system uses another important technique, which is optical WDM routing [25]. The basic block diagram of the optical routing interconnection is shown in Fig. 10. Each 80-Gbit/s switch has eight signals of different wavelengths, each carrying 10 Gbit/s. A router AWGF (arrayed waveguide filter) performs wavelength routing, as shown in Table 1 [27]. In other words, all 80-Gbit/s ATM switching modules are interconnected by different wavelengths.
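Table 1 gives the actual channel assignment of the router AWGF; purely as a hedged illustration, the sketch below assumes the common cyclic AWGF routing rule (output port fixed by input port plus wavelength index, modulo the port count), which matches the cyclic-shift behavior mentioned later in this section but is not copied from Table 1.

```python
# Illustrative sketch of AWGF wavelength routing (assumed cyclic rule,
# not the literal contents of Table 1).

NUM_PORTS = 8  # eight wavelengths per 80-Gbit/s module (from the text)

def awgf_output_port(input_port: int, wavelength_index: int,
                     num_ports: int = NUM_PORTS) -> int:
    """Cyclic routing: the output port is determined jointly by the input
    port and the wavelength, so each wavelength from a given input reaches
    a different output."""
    return (input_port + wavelength_index) % num_ports

# From first-stage module 0, each of the eight wavelengths lands on a
# different second-stage module, so one fiber fans out to all of them.
for wl in range(NUM_PORTS):
    print(f"wavelength {wl} from module 0 -> second-stage module {awgf_output_port(0, wl)}")
```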

Fig. 10 WDM interconnection.

Table 1 Routing channel table of the AWGF.

To introduce optical WDM technology into switch interconnection, the WDM interconnection system must be simple. The conventional WDM system uses narrow-channel-spacing WDM, because this increases transmission capacity and reduces optical loss in a long transmission fiber line. But this requires strict wavelength control, which makes the WDM system complicated and restricts the system's temperature margin. It is better to widen the passband of the AWGF to attain a wide system temperature margin and eliminate complex temperature-control circuits. However, if the WDM spacing is too wide, this can cause a large frequency error in the optical router and make it more difficult to select an optical source, because the band of the laser oscillation spectra is limited. As a result, although strict temperature control is relaxed when the spacing is wide, the total cost of the optical interconnection system may not decrease. Therefore, appropriate WDM channel spacing should be determined in order to decrease the total system cost.

We designed our WDM interconnection system to use an AWGF with a wide channel spacing of 525 GHz (center: 193.1 THz (1555.2 nm)). Figure 11 shows its loss spectra. To select the optimal channel condition, we must minimize the filter loss and frequency error. The passband of this AWGF (160 GHz) is much wider than that of a conventional AWGF (20–30 GHz). This means that this system accepts a temperature range that is seven times larger than that of a conventional system, so its temperature-control circuits can be simpler than those of the conventional system. As a result, the total system size can be dramatically reduced.
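In wavelength terms (an approximate conversion around 1.55 µm, not a figure quoted in the paper), the 525-GHz spacing corresponds to

\[ \Delta\lambda \approx \frac{\lambda^{2}}{c}\,\Delta f \approx \frac{(1.55\ \mu\text{m})^{2}}{3\times 10^{8}\ \text{m/s}} \times 525\ \text{GHz} \approx 4.2\ \text{nm}, \]

i.e. a channel pitch of several nanometres, which is what permits the wide 160-GHz passband and the simplified temperature control.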

The 640-Gbit/s switching system was built using an AWG back-wired board. The system, called the optical wavelength addressing back-wired board, is shown in Fig. 12. A WDM signal is sent from the switch board, and the AWG routes each signal in a cyclic shift-register manner, as described earlier. All the first-stage switch boards are interconnected to the second-stage switch boards.

Fig. 11 Wide-channel-spacing AWGF.

We successfully fabricated very small transmitter (TX) and receiver (RX) modules. Figure 13 shows an external view of one of them [25], [26]. Both modules are the same size: 80 × 120 × 20 mm.

A novel structure based on a high-performance heat-transfer device and micro-fans is employed [26]. A heat-conductive plate (2-mm thick) contains a looped capillary with working fluid inside it [28]. This cooling structure enables a compact module to handle high-density heat generation. The cooling system achieved a thermal resistance of 1.8°C/W, which is sufficient for stabilizing the operation of the transmitter and receiver units.

Fig. 12 System image of optical routing interconnection and optical addressing back-wired board.

Fig. 13 External views of transmitter or receiver module.

The main elements of the TX module are an integrated light source with a multiple-quantum-well distributed-feedback laser, an electro-absorption modulator with a Peltier device for fine temperature control, and a 16:1 MUX. In the WDM system, wavelength stability is critical because the system utilizes differences in wavelength. As the wavelength is sensitive to temperature, adequate temperature control is necessary. The power dissipation of the TX is 9.65 W, including the Peltier device. The main elements of the RX module are a clock-and-data-recovery circuit, a 1:16 DEMUX, and a word-synchronization LSI for ATM cell regeneration. The power dissipation of the RX is 22.5 W. The laser output power was 3.72 dBm. The receiver sensitivity was −16.5 dBm at a bit error rate of 10^-11 [25]. These modules provide a sufficient power margin when a multiplexer AWGF, a demultiplexer AWGF, and a router AWGF are used between the TX and RX modules (see Fig. 10). We confirmed error-free operation of the optical WDM interconnection system at a speed of 10 Gbit/s.
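From the quoted transmitter power and receiver sensitivity, the available optical budget is simply (ignoring dispersion and other penalties, which the paper does not quantify)

\[ 3.72\ \text{dBm} - (-16.5\ \text{dBm}) \approx 20.2\ \text{dB}, \]

which must cover the insertion losses of the multiplexer, router, and demultiplexer AWGFs plus connectors and operating margin.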

Fig. 14 Photograph of partially completed 640-Gbit/s ATM switching system.

We designed the operating speed of the compact interconnection system to be 10 Gbit/s by optimizing the phase-locked-loop (PLL) frequency and the amplifier bandwidth. If a higher operating speed is needed, higher-performance devices, especially lasers and amplifiers, should be used in the RX and TX modules.

    3.5 Overview of 640-Gbit/s Switching System

An experimental 640-Gbit/s switching system was partially fabricated using the newly developed 80-Gbit/s switching modules and optical WDM interconnection. Figure 14 shows a photograph of it. The system consists of three switching units, which correspond to the first-, second-, and third-stage switches. These switching units are arranged in a triangle. Each unit is 60 × 162 × 80 cm.

    4. Conclusions

An experimental 640-Gbit/s ATM switching system has been developed. It is scalable and quasi-non-blocking and uses hardware self-rearrangement in a three-stage network. It was fabricated using advanced 0.25-µm CMOS devices, high-density MCM technology, and optical WDM interconnection technology. A scalable 80-Gbit/s switching module, a key component of the 640-Gbit/s switching system, was fabricated in combination with the previously developed SDA technique, and a WDM interconnection system that connects all the 80-Gbit/s switching modules was developed. The 640-Gbit/s switching system will be applicable to future broadband ATM networks.

    References

[1] N. Miyaho, M. Hirano, Y. Takagi, K. Shiomoto, and T. Takahashi, "An ATM switching architecture for first generation of broadband services," Proc. ISS'92, pp.285–289, 1992.

[2] K. Genda, Y. Doi, K. Endo, T. Kawamura, and S. Suzuki, "A 160Gb/s ATM switching system using an internal speed-up crossbar switch," Proc. IEEE GLOBECOM '94, pp.123–133, 1994.

[3] E. Munter, J. Parker, and P. Kirkby, "A high capacity ATM switch based on advanced electronic and optical technologies," Proc. ISS'95, pp.389–393, 1995.

[4] Y. Kamigaki, T. Nara, S. Machida, A. Hakata, and K. Yamaguchi, "160Gb/s ATM switching system for public network," Proc. IEEE GLOBECOM '96, pp.1380–1387, 1996.

[5] E.P. Rathgeb, "Architecture of a multigigabit ATM core switch for the B-ISDN," IEICE Trans. Commun., vol.E81-B, no.2, pp.251–257, Feb. 1998.

[6] N. Yamanaka, K. Endo, K. Genda, H. Fukuda, T. Kishimoto, and S. Sasaki, "320Gb/s high-speed ATM switching system hardware technologies based on copper-polyimide MCM," Proc. IEEE 44th ECTC, pp.776–785, 1994.

[7] S. Nojima, E. Tsutsui, H. Fukuda, and M. Hashimoto, "Integrated services packet networking using bus matrix switch," IEEE J. Sel. Areas Commun., vol.5, no.8, pp.1284–1292, 1987.

[8] Y. Kato, T. Shimoe, and K. Murakami, "A development of a high speed ATM switching LSI," Proc. IEEE ICC'90, 1990.

[9] M. Collivignarelli, A. Daniele, G. Gallassi, F. Rossi, G. Valsecchi, and L. Verri, "System and performance design of the ATM node UT-XC," Proc. ICC'94, pp.613–618, 1994.

[10] N. Yamanaka, S. Yasukawa, E. Oki, T. Kawamura, T. Kurimoto, and T. Matsumura, "OPTIMA: Tb/s ATM switching system architecture based on highly statistical optical WDM interconnection," Proc. ISS'97, 1997.

[11] N. Yamanaka, S. Yasukawa, E. Oki, and T. Kurimoto, "OPTIMA: Tb/s ATM switching system architecture," IEEE ATM'97 Workshop, pp.691–696, 1996.

[12] K. Shiomoto and S. Chaki, "Adaptive connection admission control using real-time traffic measurement in ATM networks," IEICE Trans. Commun., vol.E78-B, no.4, pp.458–464, April 1995.

[13] E. Oki, N. Yamanaka, S. Yasukawa, R. Kawano, and K. Okazaki, "Merging advanced electronic and optical WDM technologies for 640-Gbit/s ATM switching system," Proc. IEEE ICC'99, pp.806–811, June 1999.

[14] N. Yamanaka, R. Kawano, E. Oki, S. Yasukawa, and K. Okazaki, "OPTIMA: 640Gb/s high-speed ATM switching system based on 0.25µm CMOS, MCM-C, and optical WDM interconnection," Proc. IEEE 49th ECTC, pp.26–32, 1999.

[15] J. Turner and N. Yamanaka, "Architectural choice in large scale ATM switches," IEICE Trans. Commun., vol.E81-B, no.2, pp.120–137, Feb. 1998.

[16] E. Oki, N. Yamanaka, and Y. Ohtomo, "A 10Gb/s (1.25Gb/s × 8) 4 × 2 CMOS/SIMOX ATM switch," IEEE ISSCC'99, TA 9.6, Feb. 1999.

[17] E. Oki and N. Yamanaka, "A high-speed ATM switch based on scalable distributed arbitration," IEICE Trans. Commun., vol.E80-B, no.9, pp.1372–1376, Sept. 1997.

[18] Y. Ohtomo, S. Yasuda, M. Nogawa, J. Inoue, K. Yamakoshi, H. Sawada, M. Ino, S. Hino, Y. Sato, Y. Takei, T. Watanabe, and K. Tekeya, "A 40Gb/s 8×8 ATM switch LSI using 0.25µm CMOS/SIMOX," Proc. ISSCC'97, FA 9.3, pp.154–155, 1997.

[19] Y. Ohtomo, M. Nogawa, and M. Ino, "A 2.6-Gbps/pin SIMOX-CMOS low-voltage-swing interface circuit," IEICE Trans. Electron., vol.E79-C, no.4, pp.524–529, April 1996.

[20] E. Oki and N. Yamanaka, "High-speed tandem-crosspoint ATM switch architecture with input and output buffers," IEICE Trans. Commun., vol.E81-B, no.2, pp.215–223, Feb. 1998.

[21] T. Sakai, S. Konaka, Y. Kobayashi, M. Suzuki, and Y. Kawai, "Gigabit logic bipolar technology: Advanced super self-aligned process technology," Electron. Lett., vol.19, pp.283–284, 1983.

[22] K. Okazaki, N. Sugiura, A. Harada, N. Yamanaka, and E. Oki, "80-Gbit/s MCM-C technologies for high-speed ATM switching system," 1999 Int. Conf. on High Density Packaging and MCMs, pp.284–288, April 1999.

[23] S. Sasaki, T. Kishimoto, K. Genda, K. Endo, and K. Kaizu, "Multichip module technologies for high-speed ATM switching systems," '94 MCM Conf., pp.130–135, 1994.

[24] K. Okazaki, N. Yamanaka, and N. Sugiura, "A high-speed, high-power stack connector for switching MCMs," IMAPS MCM Applications Workshop, session 1–4, 1998.

[25] S. Yasukawa, N. Yamanaka, R. Kawano, H. Takeuchi, and S. Kuwano, "Tb/s WDM interconnection system with pocket-size 10Gb/s transmitter/receiver module and AWG router," ECOC'98, pp.575–576, 1998.

[26] R. Kawano, N. Yamanaka, S. Yasukawa, and K. Okazaki, "10-Gb/s palm-size optical transmission/receiver set with novel cooling structure for WDM sub-Tb/s ATM switch system," Int. Symposium on Microelectronics, pp.839–843, 1998.

[27] H. Takahashi, K. Oda, and Y. Inoue, "Transmission characteristics of arrayed waveguide N × N wavelength multiplexer," IEEE J. Lightwave Technol., vol.13, no.3, pp.445–447, 1995.

[28] H. Adachi, "Structure of micro-heat pipe," US Patent, no.5, 219–220, June 15, 1993.


Naoaki Yamanaka was born in Sendai City, Miyagi Prefecture, Japan in 1958. He graduated from Keio University, Tokyo, Japan, where he received B.E., M.E. and Ph.D. degrees in engineering in 1981, 1983 and 1991, respectively. In 1983 he joined Nippon Telegraph and Telephone Corporation's (NTT's) Communication Switching Laboratories, Tokyo, Japan, where he researched and developed high-speed switching systems and high-speed switching technologies, such as ultra-high-speed switching LSI chips/devices, packaging techniques, and interconnection techniques, for broadband ISDN services. Since 1989 he has been developing broadband ISDN systems based on ATM techniques. He is now researching ATM-based broadband ISDN architectures and is engaged in traffic management and performance analysis of ATM networks. He is currently a senior research engineer, supervisor, and research group leader in the Broadband Network System Laboratory at NTT. Dr. Yamanaka received the Best of Conference Award at the 40th, 44th and 48th IEEE Electronic Components and Technology Conferences, the TELECOM System Technology Prize from the Telecommunications Advancement Foundation, the IEEE CPMT Transactions Part B Best Transactions Paper Award, and the Excellent Paper Award from the IEICE, in 1990, 1994, 1999, 1994, 1996, and 1999, respectively. Dr. Yamanaka is the Broadband Network Area Editor of IEEE Communications Surveys, an Editor of the IEICE Transactions, and the IEICE Communications Society International Affairs Director, as well as the Chair of the Asia Pacific Board Technical Committee of the IEEE Communications Society. Dr. Yamanaka is an IEEE Fellow.

Eiji Oki received B.E. and M.E. degrees in instrumentation engineering and a Ph.D. degree in electrical engineering from Keio University, Yokohama, Japan, in 1991, 1993, and 1999, respectively. In 1993, he joined Nippon Telegraph and Telephone Corporation's (NTT's) Communication Switching Laboratories, Tokyo, Japan. He has been researching multimedia-communication network architectures based on ATM techniques and traffic-control methods for ATM networks. He is currently developing high-speed ATM switching systems in NTT Network Service Systems Laboratories as a Research Engineer. Dr. Oki received the Switching System Research Award and the Excellent Paper Award from the IEICE in 1998 and 1999, respectively. He is a member of the IEEE.

Seisho Yasukawa received the B.E. and M.E. degrees from the University of Tokyo, Tokyo, Japan, in 1993 and 1995, respectively. In 1995, he joined Nippon Telegraph and Telephone Corporation's (NTT's) Network Service Systems Laboratories, Tokyo, Japan. He is currently researching a high-speed ATM switching system and an optical interconnection system for future multimedia communication networks based on ATM techniques.

Ryusuke Kawano was born in Oita, Japan, on April 2, 1964. He received the B.E. and M.E. degrees from the University of Osaka Prefecture in 1987 and 1989. In 1989 he joined Nippon Telegraph and Telephone Corporation (NTT) and began researching and developing process technology for high-speed Si-bipolar devices. Since 1992 he has been engaged in researching and developing high-speed integrated circuits using Si bipolar transistors and GaAs MESFETs at the NTT LSI Laboratories, Kanagawa, Japan. Since moving to NTT Network Service Systems Laboratories, Tokyo, Japan, his research interests have included very-large-capacity ATM switching hardware such as high-speed logic, optical interconnection, and cooling. Mr. Kawano is a member of the IEEE.

Katsuhiko Okazaki was born in Tokyo, Japan in 1969. He received the B.E. and M.E. degrees in applied physics and chemistry from The University of Electro-Communications, Tokyo, Japan, in 1993 and 1995, respectively. In 1995, he joined Nippon Telegraph and Telephone Corporation's (NTT's) Network Service Systems Laboratories, Tokyo, Japan. He has been researching high-speed electrical interconnections in a rack system and high-density MCM packaging technologies for high-speed ATM switching systems. Mr. Okazaki is a member of the IMAPS.