An installation of interactive furniture



by O. Omojola, E. R. Post, M. D. Hancher, Y. Maguire, R. Pappu, B. Schoner, P. R. Russo, R. Fletcher, and N. Gershenfeld


We report on a project that explored emerging technologies for intuitive and unobtrusive information interfaces in a compelling setting. An installation at the Museum of Modern Art, New York, was part of a public exhibit and included an interactive table that presented information associated with the exhibit to the gallery visitors without visible conventional computing elements. The enabling devices included noncontact sensing of low-cost tags in physical icons, electrostatic detection of hand location in three dimensions, and sensor fusion through lightweight Internet Protocol access.

Advances in computing technology and the advent of the Internet have brought about dramatic increases in the amounts of information available to people. However, the interfaces used to handle this information have not changed much with time: the mouse, the keyboard, and the computer monitor, connected to a box containing the processor and its peripherals. The mouse and the keyboard are nonintuitive interfaces. Touch-typing, as an example, is a skill that takes significant experience or training to acquire, because the layout of the standard computer keyboard groups letters in a nonintuitive fashion. Also, the boxes for the processor and the monitors have, over time, gotten smaller, quieter, and cheaper, but are essentially the same, typically unsightly, and occupy visible space. More importantly perhaps, they cut users off from the rest of the world: sitting in front of a computer often means ignoring the surrounding environment.

Much technology has been developed in disparate pieces to enable the vision of embedding computing power into everyday objects such as clothing and furniture. There have been some projects, such as the Brain Opera,1,2 that cross the boundaries of individual, small-scale demonstrations and have explored the issues of interconnecting these disparate pieces, but these have been few and far between.

Based on common interests in the potential application of these technological goals, the Museum of Modern Art (MoMA), New York, and the MIT Media Laboratory agreed to a collaboration driven by the desire to increasingly use technology in their exhibits without making aesthetic concessions. Their main motivation was to use smart spaces3 in museum exhibits, without the obvious elements of the associated computing. They offered a very useful error metric that is lacking in a laboratory: a very high aesthetic sensibility. In many Media Lab demonstrations, the visual appearance is a distant second to the functionality of the technology. The museum was unwilling to compromise the appearance of the exhibit spaces as a result of any such limitations.

The collaboration centered on the Un-Private House, an exhibition of architectural projects that offered interesting blends of public and private spaces. A significant amount of information about the projects was in electronic form, and, traditionally, this information is presented through computer

©Copyright 2000 by International Business Machines Corporation. Copying in printed form for private use is permitted without payment of royalty provided that (1) each reproduction is done without alteration and (2) the Journal reference and IBM copyright notice are included on the first page. The title and abstract, but no other portions, of this paper may be copied or distributed royalty free without further permission by computer-based and other information-service systems. Permission to republish any other portion of this paper must be obtained from the Editor.

IBM SYSTEMS JOURNAL, VOL 39, NOS 3&4, 2000 0018-8670/00/$5.00 © 2000 IBM OMOJOLA ET AL. 861


kiosks placed away from the exhibit. The curator, Terence Riley, and the exhibition's production manager, Andrew Davies, were eager to have this information presented to the exhibit's visitors within the rich context of the exhibit itself. They wanted to spark more social interaction and discussion, and wanted the space to react to the presence of the visitors.

These ideas had elements in common with much of the work done within the Media Lab on smart spaces3 and provided a very interesting environment in which to apply much of our technology. From the beginning there were a number of significant design objectives. There obviously could not be any exposed technology, since the clash with the appearance and layout of the rest of the exhibit would detract from the museum experience. The museum wanted the technology to be used in ways that enhanced conventional functions within the home (as an example, the interactive welcome mat, described later, is not so much a new idea as an enhancement of the traditional welcome mat). The installation needed to be reliable enough to handle the crowds of museum visitors without permanent supervision. Finally, the information to be displayed was in a range of formats that included text, images, and video, and the chosen interface had to provide display and browse access to all of these in a straightforward fashion.

During one of the visits Terence Riley paid to the Media Lab, he was exposed to the work of Hiroshi Ishii on tangible interfaces4 and was led to an interest in similarly tactile interfaces. Given the desire for a simple, intuitive interface, the first proposals planned to use a tagged object as a physical icon to start the display of information.

A dining table offered the most potential as the physical surface on which the information would be displayed. It also had the added benefit of both symbolizing, and providing a good framework for, the kind of social interaction that was desired.

Initially, a vision-based system similar to that designed by Flavia Sparacino and Kent Larson for the Unbuilt Ruins exhibit at the MIT Compton Gallery5 would track the position of various objects on the table. The user could place these at certain points on the table to interact with the information. This idea was abandoned because of concerns about a cluttered interface that would be confusing or require extensive supervision to prevent people from walking away with the objects.

Another of the initial ideas had the welcome mat track the motion of people across it and respond to this in some way. This was first tried using a mat with electric-field-sensing electrodes and a projected image that followed the path of the person walking across it. Issues of robustness for this tracking method caused this idea to be dropped as well.

Of some interest is the fact that the final design used the same two tracking methods, but for different applications: the interactive table interface used electric-field sensing, whereas the welcome mat installation used a vision-based system.

There was a significant amount of information to be displayed, and there were both human resource limitations and a lack of interest in content development, editing, and formatting for the project. MoMA and NearLife, Inc., a Boston-based interactive entertainment company, handled those functions as well as the purchase, setup, and maintenance of the computers that drove the various parts of the installation.


The gallery arrangement for the Un-Private House was modeled after a house, with sections of the gallery and the associated furniture imitating the various rooms and functions present in a house. The exhibit had 26 architectural projects on display, and there were models and other display pieces for each house distributed throughout the gallery on the tables and other pieces of furniture. Figure 1 shows the layout of the gallery, with the entrance to the gallery in the upper right corner and the dining table as a gray and white circle in the lower left.

The interactive installation consisted of two parts. The first part was the interactive welcome mat stationed at the gallery entrance, as shown in Figure 2. An image of a traditional welcome mat appeared on the floor outside the entrance, and the image moved in response to the motion of the visitors across it.

The main part of the installation was the interactive table, built around the gallery's dining table. The table was an 8-foot-diameter, five-legged table with a solid-white, 3/4-inch-thick Corian** surface that seated eight people. Each of the eight place settings featured a display projected from above that was 18 inches wide by 14 inches high on the tabletop. In the center of the table was a 5-foot-diameter lazy Susan, also made of solid white Corian, with recessed


holes on its outer edge. Placed in these indentations were a set of 26 coasters (one in each indentation) with an image of a different architectural project on the face of each coaster. The outer edge of the lazy Susan was illuminated with a number of fiber optic light sources from the drop ceiling above (which also served as the projector housing), and there was a top-projected display above it.

The interface presented by an inactive place setting is shown in Figure 3, with the blue circle in the upper right corner displaying the words "Place coaster here to begin." A pictorial animation of picking a coaster and placing it on the active spot was periodically displayed in the center of each place setting.

After the user chooses a specific coaster from the lazy Susan and places it over the blue circle, the display changes to show a floor plan of the project rendered as a traditional blueprint. Highlighted on the display are a number of "hot spots," places within the specific house with information that may be of interest to the user. When a hot spot is touched, the associated information is displayed in the lower left corner, as shown in Figure 4. This information included text, images, and animations that showed interesting parts of the house, comments from the museum curator, the house's architect, and the client, as well as information about the technology used to build the table itself.

Depending on the content, an extra hot spot would appear that allowed a user who wished to share a certain image with the other users at the table to send it to the center. When the user touched the "send-to-center" hot spot, the associated image was projected onto the center of the lazy Susan, facing that user. Another user at the table interested in looking at the image could rotate the lazy Susan, which would cause the image to also rotate. Comments about the exhibit entered on the museum's Web site by the public were displayed in the center of the table when not in use by any of the place settings. The lazy Susan with a projected image is shown in Figure 5.

The remainder of this paper is organized as follows. The next section describes the welcome mat installation and its use of computer vision technology. Then, the physical icons used in the dining table exhibit are described, along with the use of noncontact sensing technology. The following section focuses on the

Figure 1 The Un-Private House gallery floor plan

Figure 2 Welcome mat

Figure 3 Photograph of default place setting


gestural interface and the use of electric-field sensing for enabling the use of the human hand as a pointing device. Next, the lazy Susan installation is described, including the mechanism, based on quadrature encoding, for sensing its speed and direction of rotation. A section follows describing the local area network and its role in the communication infrastructure for the installation. The following section on system software describes the browser-based client software for displaying the information on the various exhibits. A brief description of the display hardware setup is given in the next section. The final section summarizes the lessons learned from this project.

Welcome mat

The interactive welcome mat was included in the installation to give visitors a hint of what they were about to encounter inside the gallery. It was installed at the entrance to the gallery and consisted of an image projected from the ceiling that responded to the motion of the visitors as they walked across it. A similar installation is reported in Reference 6.

As noted earlier, the original design called for electric-field sensing as the core technology supporting the welcome mat application, whereas computer vision was to be used as the gesture-sensing technology for the dining table. However, further reflection led us to implement the gesture detection using electric-field sensing, and to use computer vision for the welcome mat.

The schematic drawing of the installation is shown in Figure 6. A video camera and a projector were arranged in an approximately coaxial configuration directly above the projected area. The camera detects changes in the visual field, changes caused by

Figure 4 Photograph of activated place setting. The display shows the Shorthand House (Francois de Menil, Architect), with descriptive text about the Shorthand foyer and its sliding and pivoting partitions, a floor plan with hot spots labeled "Living Room" and "Play Animation," and controls reading "Send to center of table" and "Place coaster here to begin."
visitors moving across the carpet, and presents these changes to a simple physical model.

The physical model behind the interactive image is based on an M × N lattice of unit masses, all located above a reference plane. Each mass is connected to the reference plane by a spring with a spring constant K and a damping constant D. The input image from the projector is averaged down to a resolution of M × N.

As a visitor moves across the area of the mat, a mass proportional to the difference between the images of two adjacent frames is created in a second plane located above the lattice. This additional mass gravitationally attracts the unit masses and causes them to move toward regions of activity in the lattice. Figure 7 shows a representative image of the lattice of masses. The image activity causing the distortion is in the upper right corner, and the red dots are the individual masses.

The equation representing the dynamics of the physical model is:

ma = F_g - F_s - F_d

Here the mass m is unity, a is the acceleration experienced by the mass, F_g is the force due to gravity, F_s is the restoring force, and F_d is the force due to damping. Furthermore,

F_g = -Gm/R^r

where G is a gravitational constant and R is the distance of the mass from its reference location. The exponent r controls the interaction distance and thus the range that the masses move. Short-range interaction requires a higher value of r than a longer-range one. Also,

F_s = -Kx

where K is the spring constant and x is the distance the spring is extended, and

F_d = -Dv

where D is the damping constant and v is the velocity of the mass.

Figure 6 Schematic of the welcome mat installation

Figure 5 Photograph of lazy Susan with projected image


Combining these forces gives

ma = -Gm/R^r + K(x - x_r) + Dv

where x_r is the reference point for the system (chosen to be 0 for the ensuing derivation). This equation may be solved numerically using a method of differences, with a ≈ (x_{t+1} - 2x_t + x_{t-1})/(Δt)² and v ≈ (x_t - x_{t-1})/Δt, to yield the new position x_{t+1} of each mass in terms of its current position x_t and its position one time period earlier, x_{t-1}:

x_{t+1} = (2 + (K/m)(Δt)² + (D/m)Δt) x_t - (1 + (D/m)Δt) x_{t-1} - (G/R^r)(Δt)²

where Δt is the time difference between each position (which we arbitrarily chose to be unity). An identical derivation can be performed for the change in position in the y direction.
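The difference-equation update described above can be sketched in a few lines of code. This is a minimal illustration with our own function names and constants, not the installation's software; for the sketch we arrange the gravity-like term to pull the mass toward a point of image activity, and we use the conventional stable signs (spring restoring toward rest, damping opposing velocity) rather than the sign convention as printed.

```python
def step_mass(x_t, x_prev, attractor, K=0.1, D=0.2, G=0.5, r=2.0, dt=1.0):
    """One explicit-difference update for a single unit mass (m = 1).

    The gravity-like term attracts the mass toward `attractor` with strength
    falling off as R^-r (a higher r gives a shorter interaction range); the
    spring (K) restores toward the rest position 0 and the damper (D)
    opposes velocity. All constants here are illustrative, not the paper's.
    """
    R = abs(attractor - x_t) + 1e-9          # distance to the activity "mass"
    grav = G * (attractor - x_t) / R ** r    # gravity-like attraction
    vel = (x_t - x_prev) / dt                # backward-difference velocity
    accel = grav - K * x_t - D * vel         # m = 1
    return 2.0 * x_t - x_prev + accel * dt * dt

# A mass at rest is tugged toward image activity at position 1.0.
x_prev, x = 0.0, 0.0
for _ in range(5):
    x_prev, x = x, step_mass(x, x_prev, attractor=1.0)
assert x > 0.0
```

In the installation this update would run once per mass per frame, in both x and y.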

The final image of a welcome mat is texture mapped onto the distorted lattice of unit masses and rendered using OpenGL**. Figure 8 depicts the data pipeline of the welcome mat system.

Physical icons

The coasters serving as physical icons are depicted in Figure 9. The person holding the coaster, and only that person, has access to the information on the associated exhibit. Each coaster is approximately 3.5 inches in diameter and 0.75 inches thick, and is made of a hollowed-out Corian base and a clear acrylic cap. An image of the associated exhibit is attached to the bottom of the cap, as shown in Figure 9.

One component of the tagging system used to identify the coasters and retrieve the associated data is a radio frequency identification (RFID) tag embedded in the coaster. The tag, shown in the foreground of Figure 9, is a simple LC (inductor-capacitor circuit) resonator built from a 3-inch-diameter printed copper coil and a ceramic chip capacitor soldered onto the coil. Using the same coil size with a different capacitance for each coaster results in resonant frequencies ranging between 5.4 MHz and 13.2 MHz, in increments of 300 kHz. When compared with conventional silicon RFID tags, our tags have lower cost and are easier to replace. The 300-kHz value chosen for the resonant frequency separation takes into account the manufacturing tolerances of the capacitors and the safety margins desired.
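Each tag's resonant frequency follows from the standard LC relation f = 1/(2π√(LC)). A small sketch of how capacitor values map to resonant frequencies; the helper names are ours, and the coil inductance is an illustrative assumption (the paper does not state the actual value for its 3-inch coil):

```python
import math

L_COIL = 2.2e-6  # henries; assumed for illustration only

def resonant_frequency(c_farads, l_henries=L_COIL):
    """f = 1 / (2*pi*sqrt(L*C)) for an LC resonator."""
    return 1.0 / (2.0 * math.pi * math.sqrt(l_henries * c_farads))

def capacitor_for(f_hz, l_henries=L_COIL):
    """Invert the relation: C = 1 / ((2*pi*f)^2 * L)."""
    return 1.0 / (((2.0 * math.pi * f_hz) ** 2) * l_henries)

# Round trip: the capacitor chosen for 10 MHz resonates at 10 MHz.
assert abs(resonant_frequency(capacitor_for(10e6)) - 10e6) < 1.0
```

Note that lower frequencies need larger capacitors, so the 5.4-MHz tag carries the largest capacitance in the set.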

Each place setting at the table is equipped with a tag reader, as shown in Figure 10. The tag reader, whose design is based on a design developed at the Media Lab,7 measures 6.5 inches wide by 5 inches long, and is located underneath the upper right corner of the display area at each place setting. The reader is mounted above the tauFish electrode array in order to minimize signal losses through the copper electrodes. (The tauFish array is described in the next section.)

The tag reader operates in a frequency range between 5 MHz and 40 MHz, because tags operating in this range are relatively cheap, and the transmit antenna can be implemented as a single-turn coil etched onto the printed circuit board. The tag reader

Figure 7 Image of the welcome mat showing image activity and the unit mass lattice


is powered by a 9-volt power supply. Access to the measurement data is through an RS-232 serial port.

The functional block diagram of the tag reader is included in Figure 11. The design is based on a PIC16C76 microcontroller. The frequency is generated by a single direct digital synthesis (DDS) device, able to generate any frequency from DC up to 60 MHz using a lookup table and a digital-to-analog converter. The output of the DDS is passed through a four-pole low-pass filter to remove the higher-order alias frequency harmonics, and the filtered signal is then amplified by a fast operational amplifier.

The presence of a resonator is detected by a 50-Ω directional coupler designed to measure antenna loading. In this configuration, the untuned single-turn antenna presents an unmatched load and the reflected power is channeled through the directional coupler output. When a resonant structure is placed within range of the antenna, however, the impedance of the resonator dominates the response. Moreover, when the frequency signal equals the resonant frequency, a dip in the reflected power results. This method of detection eliminates the need for a broadband tuned antenna, and because the antenna is untuned, the tag reader is immune to spurious responses caused by stray reactance in the environment.

The directional coupler's output is rectified by a diode detector and the signal is sampled by the analog-to-digital converter built into the microcontroller. The microcontroller scans the output range of the DDS device, from 5.4 MHz to 13.3 MHz in increments of 100 kHz, and records an 8-bit sample at each frequency. The sampled data are transmitted through the serial port as 80-byte data packets. A sampling cycle of 105 milliseconds was found adequate.
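As a sanity check on the packet size, the scan described above covers exactly 80 frequency points (both endpoints included), one 8-bit sample each. A sketch with our own helper name:

```python
def sweep_points(start_hz=5.4e6, stop_hz=13.3e6, step_hz=100e3):
    """Frequencies visited in one scan cycle, endpoints inclusive."""
    n = int(round((stop_hz - start_hz) / step_hz)) + 1
    return [start_hz + i * step_hz for i in range(n)]

points = sweep_points()
assert len(points) == 80  # one 8-bit sample each -> the 80-byte data packet
```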

The serial port output is connected directly to a Filament card (described later in the subsection on network infrastructure), which broadcasts the data on the local Ethernet network. Remote computers connected to the network take the data off the network and run the data through a tag reader interpreter program, running as a background process.

Figure 8 Functional block diagram of the welcome mat system

Figure 9 Photograph of the coasters with an exposed resonator in the foreground

Figure 10 Photograph of the tag reader

Figure 11 Functional block diagram of the dining table installation

The interpreter, whose graphical user interface is shown in Figure 12, is written in Java** and takes the following input parameters: the TCP/IP (Transmission Control Protocol/Internet Protocol) port number to which the Filament transmits, the port number on which the Filament receives, the network address of the machine running the specific place setting's display system software, and the port number on which the system software listens for updates. The mapping between tags and exhibits is specified in a file accessible to the interpreter.

The interpreter receives the broadcast data originating at the tag reader and filters the data using a four-tap wavelet filter. The filter is needed to compensate for certain hardware limitations in the tag reader, remove environmental noise, and eliminate the DC bias added by the analog-to-digital converter. The output of the filter is a 76-point sequence scaled between 0 and 15. The centroid of this sequence is calculated and the resultant number is checked

against the contents of the database file. The database file contains pairs of numbers: the first value in each pair is a tag number; the second value is the associated centroid value. The closest match to the calculated centroid is found, and this identifies the information content to be called up (or switched off). A simple state machine allows the interpreter to track only updates (i.e., changes in the tag reader's data that indicate either introduction or removal of a tag from the area of the antenna). The interpreter, upon detecting an event, sends a data packet to the appropriate place setting's display system software, which takes the appropriate action (either loading or unloading a particular project's information).
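The centroid-and-lookup step can be sketched as follows; the helper names and the example table are ours, not the interpreter's actual code:

```python
def centroid(seq):
    """Intensity-weighted mean index of the filtered spectrum (None if flat)."""
    total = sum(seq)
    if total == 0:
        return None  # no resonance dip detected
    return sum(i * v for i, v in enumerate(seq)) / total

def nearest_tag(c, table):
    """Tag id whose stored centroid is closest to the measured centroid c."""
    return min(table, key=lambda tag: abs(table[tag] - c))

# A spectrum with all of its energy at index 30 matches the tag stored at 30.0.
spectrum = [0] * 76
spectrum[30] = 15
assert nearest_tag(centroid(spectrum), {1: 10.0, 2: 30.0}) == 2
```

The state machine described in the text would then sit on top of this, reporting only transitions between "tag present" and "tag absent."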

Gestural interface

To maintain the illusion of direct interaction with a projected image, we chose a form of electric-field sensing (EFS)8 to determine the position of the user's hand over the image. The musical instrument known as the Theremin9 is a classic example of an EFS device. Invented in 1917 by Lev S. Theremin, this eponymous instrument senses small variations in the capacitance of two antennae induced by the user's body and converts these measurements to an audible form.

Figure 12 Tag reader interpreter and data visualization


In this section, we describe the instrumentation and algorithm used to detect the user's interaction with an image projected onto a surface. The instrumentation is based on the tauFish capacitive sensor (described below). Capacitance is measured at points uniformly covering the image region. The measured capacitance vector is then compared to vectors calculated using a forward electrostatic model, which we describe. The calculated vectors are the charge or capacitance distributions expected when the user's finger is placed at various points on the image surface. These finger locations are referred to as "hot spots." If sufficient total capacitance is observed over the entire image, then the calculated vector most closely matching the observed vector indicates the user's interaction with a particular hot spot.

Our early work with EFS10,11 employed synchronous detection to measure the perturbation of weak, quasi-static (50–500 kHz) electric fields. The evolution of the EFS hardware is shown in Figure 13. The first prototype transmitted on two electrodes, received on one electrode, and comprised several boards' worth of hand-wired analog circuitry tucked into a 19-inch rack-mount cabinet. The first production-level system was dubbed the Fish (named after the aquatic animals that use electric fields to sense their environments, including the gymnotiforms and mormyriformes). It consisted of two transmitters, four programmable-gain receivers, and a microcontroller equipped with MIDI (Musical Instrument Digital Interface) and RS-232 serial interfaces used for data acquisition. The second system (SmartFish) incorporated two transmitters, eight programmable-gain receivers, a fast 8-channel analog/digital (A/D) converter, and a 32-bit, 20-MHz DSP used to perform synchronous detection in software. This complex design was ultimately abandoned because of its sensitivity to weak fringe fields and, more importantly, to its own digital noise.

The most recent design, named LazyFish and shown as the smallest board in Figure 13, represents a shift in our design of synchronous EFS instrumentation. Instead of using a powerful DSP and several configurable analog input stages, the LazyFish incorporates a simple microcontroller, four transmitters, two fixed-gain receivers, and a synchronous undersampling algorithm implemented in firmware, a design which outperforms the earliest Fish boards in speed, sensitivity, and simplicity.

In the quest for an even simpler EFS sensor we eschewed synchronous detection, choosing instead to implement time-domain measurement of capacitive loading. The resulting module, known as the tauFish and shown in Figure 14, is even smaller than its predecessors, and controls four channels in a loading-mode measurement.

The 5 × 6 array of tauFish modules positioned underneath each place setting, as shown in Figure 15, measures 18 inches by 16 inches. Each unit senses the capacitance of four 1.5-inch-by-1.5-inch electrodes. The units are connected by a multidrop serial bus attached to a microcontroller located on the array board. The microcontroller sequences the sensing and readout operations of the tauFish units and produces a sensor data stream. The entire array board outputs data at 115 kb/s, and full frames of data are generated at a rate of about 10 Hz (where a full frame of data is a 24-bit value for the capacitance measurement for each one of 120 electrodes).
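These figures are mutually consistent: 120 electrodes at 24 bits each, 10 times per second, stays well under the 115 kb/s link rate (ignoring framing overhead, which the paper does not specify). A quick check:

```python
# Back-of-envelope payload rate for the tauFish array's serial link.
ELECTRODES = 120
BITS_PER_SAMPLE = 24
FRAMES_PER_SECOND = 10

payload_bps = ELECTRODES * BITS_PER_SAMPLE * FRAMES_PER_SECOND
assert payload_bps == 28_800   # 28.8 kb/s of raw sensor data
assert payload_bps < 115_000   # fits the 115 kb/s bus with headroom
```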

Figure 13 Four generations of electric-field-sensing instrumentation

The physical model underlying the EFS application is a forward model12 of induced charge on the electrodes. Consider the case of a point charge above an infinitely conducting plane. If a charge q is located at (x, y, z), then the method of images can be used to find the electric field at the surface of the conducting plane. Since the plane is considered a perfect conductor, there will be no transverse field, and the perpendicular electric field will be determined by the presence of the charge above the plane and its inverse reflection below the plane. The electric field at a position (m_x, m_y, 0) will be

E_z = qz/(2πε r³) = qz/(2πε [(x - m_x)² + (y - m_y)² + z²]^{3/2})

where ε is the electric permittivity.

Integrating this field over an infinitesimal Gaussian "pillbox" at the surface gives the surface charge density σ:

∫ ρ dV = ∫ ∇·D dV

Q = ∫ ε E_z dA = ε E_z A

σ = Q/A = ε E_z = qz/(2π [(x - m_x)² + (y - m_y)² + z²]^{3/2})

Now assume that (infinitesimal) square electrodes tile the plane. The induced surface charge for each electrode will be

Q = ∫_{x_0}^{x_0+dx} ∫_{y_0}^{y_0+dy} σ(x, y) dx dy
  = (q/2π) arctan[ (x_0 - m_x)(y_0 - m_y) / ( z √((x_0 - m_x)² + (y_0 - m_y)² + z²) ) ]

evaluated between the electrode's corner limits.

Figure 14 Top and bottom views of four-channel tauFish

Figure 15 tauFish array with 30 tauFish (120 electrodes)


Note that the charge q is only a leading term, while the surface charge distribution is shaped strongly by z; when z is small, the distribution is narrow, and as z increases the charge distribution broadens. The total charge varies linearly with the capacitance (Q = CV) and is proportional to the capacitive load due to charge induction.

The tauFish measurement strategy is deceptively simple. One pin of a microcontroller is alternately charged and discharged by using a second pin to pulse a voltage source through a known resistance R. The first pin is also attached to an electrode that serves as the capacitance C under test. The CMOS (complementary metal-oxide semiconductor) input at the pin is configured to have hysteretic thresholds at 0.3V_DD and 0.7V_DD (where V_DD is the positive supply voltage of the microprocessor). The electrode charging cycle covers the range [0 . . . 0.7V_DD], while the discharging cycle covers the range [V_DD . . . 0.3V_DD]. The capacitor voltage evolves in time as

$$V = A e^{-t/\tau}$$

where τ is the charging time constant, τ = RC. It is the measurement of this time constant that gives the tauFish its name. With the charging and discharging thresholds as described above, the measurement time is found from

$$1 - e^{-t/\tau} = 0.7$$

to be t ≈ 1.2τ.

In this implementation, charge is transferred during 40-nanosecond intervals (two instructions at an execution rate of 50 MHz) through a 2 MΩ resistance. Since the charging time is linear in C, we find that one current pulse corresponds to charging a capacitance of

$$C = \frac{dt}{R} = \frac{40 \times 10^{-9}\ \mathrm{s}}{2 \times 10^{6}\ \Omega} = 2 \times 10^{-14}\ \mathrm{F}$$

or 20 femtofarads. Because this measurement is done in the time domain with minimal analog circuitry (i.e., one additional resistor), its precision depends on the stability of the clock as well as the microprocessor's timing jitter. The measurement can be made arbitrarily precise by averaging several measurement cycles. Figure 16 shows the voltage on an electrode as it is charged and discharged during a measurement cycle. Finally, to minimize the first-order effects of 60-Hz noise that couples into the measurement, the values of the charging and discharging cycles are summed.
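The relationship between the pulse width, the resistance, and the measured time constant can be checked numerically; a minimal sketch under the stated R and pulse width (function names are ours):

```python
import math

# Constants from the tauFish implementation described above.
R = 2e6          # charging resistance, ohms
PULSE = 40e-9    # charge-transfer interval: two instructions at 50 MHz, s

def capacitance_per_pulse(pulse=PULSE, r=R):
    """Capacitance charged by one current pulse: C = dt / R."""
    return pulse / r

def measurement_time(c, r=R):
    """Time to charge an electrode of capacitance c from 0 to 0.7*VDD.

    Solving 1 - exp(-t/tau) = 0.7 for t gives t = -tau * ln(0.3),
    approximately 1.2 * tau, with tau = R * C.
    """
    tau = r * c
    return -tau * math.log(0.3)

print(capacitance_per_pulse())       # 2e-14 F, i.e. 20 fF per pulse
print(measurement_time(100e-12))     # charging time for a 100 pF load
```

Averaging many such charge/discharge cycles, as the text describes, then trades measurement time for precision.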

While it is simple to evaluate the forward electrostatic model going from a known set of charges to the induced surface charge on the plane, the general case of the inverse problem is not so easily solved. Instead, we use a priori knowledge of the target activation regions or hot spots that represent hyperlinks in the presented content, and use the forward model from above to predict an expected induced charge distribution for each hot spot. Each page of content specifies a list of hot spots by their positions on the image plane. When a page is presented, the content browser transfers that page's hot spot list to the hot spot interpreter, a separate program that receives tauFish array data and reports activation events back to the browser. The hot spot interpreter can also be used to display the sensor data in real time, as shown in Figure 17. The intensity of the large red squares in these images corresponds to variations in capacitance, while the small purple or green dots indicate hot spots (where green denotes a triggered hot spot).

Figure 16 Electrode voltage vs time during tauFish capacitance measurement cycle. The first plot is measuring the parasitics and the electrode self-capacitances. The second plot is measuring the parasitics and the electrode-hand capacitances.

To determine hot spot activation, the observed sensor data are normalized and linearly projected onto the charge distributions calculated for each hot spot by the forward model. This may be thought of as a distance metric that ignores the total charge. If the best fit exceeds a certain threshold (or rather, the angle between the observed and predicted charge vectors is less than a certain value) then the hot spot is activated and an activation event is sent to the browser.
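The normalized projection described above is a cosine similarity between the observed frame and each predicted template. A minimal sketch (the function name, data layout, and threshold value are our own assumptions; the actual threshold used in the installation is not given):

```python
import math

def hot_spot_events(sensor, templates, threshold=0.9):
    """Report activated hot spots for one frame of tauFish array data.

    sensor:    list with one capacitance-change value per electrode.
    templates: dict of hot-spot id -> predicted induced-charge
               distribution from the forward model (same length).
    threshold: minimum cosine similarity for activation (assumed value).

    Normalizing both vectors before the dot product makes the fit
    depend only on the shape of the charge distribution, not on the
    total charge, as described above.
    """
    def unit(v):
        n = math.sqrt(sum(x * x for x in v))
        return [x / n for x in v]

    s = unit(sensor)
    events = []
    for name, t in templates.items():
        fit = sum(a * b for a, b in zip(s, unit(t)))
        if fit > threshold:
            events.append((name, fit))
    return events
```

A frame proportional to one template yields a fit of 1.0 for that hot spot and a low fit for dissimilar ones, so scaling the hand's total induced charge up or down does not change which hot spot triggers.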

Lazy Susan

The sensing system in the lazy Susan (shown in schematic form in Figure 19) consists of three parts: a sensor board, a reflective aperture ring, and an interface board. The sensor board, shown in Figure 18, includes two Omron phototransistor/LED pairs and a pair of AD790 comparators. The aperture ring, also shown in Figure 18, is a perforated acrylic ring with a reflective Mylar** coating, mounted on the underside of the lazy Susan. The sensor board is stationary and its spacing below the aperture ring can be adjusted with pieces of card stock.

Figure 17 Display of tauFish array data. The first image is the array's response to the index finger touching the tabletop. The second image is the array's response to a palm touching the tabletop.

Figure 18 Lazy Susan aperture ring and sensor board

A schematic diagram of the sensing system operation is shown in Figure 19. The sensing system operates on the principle of quadrature encoding, using the pattern of alternating reflective strips and nonreflective apertures caused by the rotation of the aperture ring. The two phototransistors on the sensor board are offset in space by half the distance between a reflective strip and a nonreflective aperture.

The interface board consists of a PIC16F84 microprocessor, a MAX202 serial line driver, and a voltage regulator for the power supplied to the sensor board. The high/low outputs of the detectors are received as digital inputs by the interface board, and a simple state-machine-driven program in the microcontroller determines the rate and the direction of rotation of the lazy Susan.

The output of the microcontroller is a string of "R" and "L" ASCII characters. An "R" is generated when the lazy Susan rotates through an angular distance of one aperture unit to the right, and an "L" when the rotation is to the left. These data are transmitted to a Filament card and onto the network. A computer in the back room receives the data packets and rotates the coordinates of the center image accordingly.
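The quadrature state machine can be sketched in a few lines. This is our own reconstruction (the actual PIC16F84 firmware is not shown in the paper), with the two detector outputs packed into a 2-bit sample:

```python
# Valid Gray-code transitions of the two detector bits (A, B):
# rotation one way steps through a fixed four-state cycle ("R"),
# the other way through its reverse ("L"); any other transition
# is treated as noise and ignored.
STEP = {
    (0b00, 0b01): "R", (0b01, 0b11): "R", (0b11, 0b10): "R", (0b10, 0b00): "R",
    (0b00, 0b10): "L", (0b10, 0b11): "L", (0b11, 0b01): "L", (0b01, 0b00): "L",
}

def decode(samples):
    """Turn a stream of 2-bit detector samples into 'R'/'L' characters.

    This sketch emits one character per quarter cycle; the installation
    emitted one character per aperture unit, so treat this as a
    simplification of the rate logic.
    """
    out = []
    prev = samples[0]
    for curr in samples[1:]:
        ch = STEP.get((prev, curr))
        if ch:
            out.append(ch)
        prev = curr
    return "".join(out)

print(decode([0b00, 0b01, 0b11, 0b10, 0b00]))  # 'RRRR': one cycle to the right
```

Because repeated samples and invalid jumps map to nothing, the decoder is naturally robust to the sensor dwelling on one strip or to a glitched reading.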

In the course of testing the setup, it was discovered that the redraw time for the center image was slow enough for it to visibly lag behind the rotation of the lazy Susan, catching up only after the lazy Susan had slowed significantly. Rather than trying to redisplay the image for every aperture unit of motion, the location of the lazy Susan at the end of a redraw was identified and used as the target for the next redraw. As a result, when the lazy Susan spun faster, the rotation of the image was closer to the actual motion of the lazy Susan and more pleasing to the eye.
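The redraw strategy above can be sketched as a small loop; `read_angle` and `draw` are hypothetical callbacks standing in for the network receiver and the (slow) rendering code:

```python
def chase_redraws(read_angle, draw, n_redraws):
    """Chase the lazy Susan's latest position instead of replaying steps.

    Rather than redrawing the center image once per aperture unit
    (which lags when the lazy Susan spins quickly), sample the current
    position after each redraw finishes and jump straight to it,
    skipping any intermediate positions.
    """
    shown = None
    for _ in range(n_redraws):
        target = read_angle()      # latest position, not a queued step
        if target != shown:
            draw(target)           # slow: may span many R/L events
            shown = target
    return shown
```

Feeding it a position that jumps from 2 to 5 produces only two redraws, at 2 and at 5, rather than one redraw per aperture unit traversed.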

Network infrastructure

The seventeen sensors used in the dining table installation (eight tauFish arrays, eight tag readers, and the lazy Susan sensor board) were connected to each other and the controlling computers via an Ethernet data network. A single Ethernet cable brought network service to the table, and each sensor was connected to the network by means of a custom-made embedded network adapter, which we call the Filament. The Filament represents the minimum level of resources required to connect a device to the network.

Despite the increasing ubiquity of data networks, most devices in the computing industry are still designed for use with a single computer. Unfortunately, this model meshes poorly with the mounting demand for delocalized information access and distributed processing. The Filament was designed to address those needs. It consists primarily of an Ethernet controller (the CrystalLAN CS8900A) and a microcontroller (the PIC16F876) that contains a minimal implementation of UDP/IP, the User Datagram Protocol over the Internet Protocol. When connected to an arbitrary device (via a standard serial port) the Filament allows that device to communicate with any other device or computer on the network.

This combination (Ethernet and IP) was chosen in large part because it has become the de facto standard for building-level networking. Computers can communicate with Filament-enabled devices as easily as they can communicate with other computers on the network. No unusual protocols or expensive and cumbersome translation or gateway systems need to be used, as is the case with existing commercial offerings such as those of emWare. Furthermore, because this system communicates directly using IP, it is capable of easily interoperating with a wide range of networks and can benefit from the over 30 years of experience the networking industry has had in using that protocol.

Figure 19 Schematic of lazy Susan installation





For all those reasons, IP has recently gained acceptance as the appropriate protocol for embedded systems, and a number of engineers and companies have introduced IP-compatible embedded products. These existing devices fall broadly into two classes. Representatives of the first class simply use IP as a packet-data protocol on top of a traditional serial link, using SLIP (Serial Line Internet Protocol) or PPP (Point-to-Point Protocol). However, a serial multiplexing system is still needed to connect a large number of devices. The second class of systems does use Ethernet or another network technology as the physical transport layer. Unfortunately, the existing devices tend to be bulky or overcomplicated, rendering them inappropriate for many simple embedded applications. For this installation, what was needed was an extremely simple Ethernet/IP implementation that could be used to connect the various devices.

The particular device we used, the Filament F3r3, was designed to perform this function in an inexpensive, highly scalable manner. The device itself has a cost of between $10 and $20 per unit in large quantities, but the F3r3 is really an eight-wire device (power, ground, two serial pins, and four Ethernet pins) and so the limit of simplification for the F3r3 is a single eight-pin chip capable of connecting any serial device directly to the network. The device is shown in Figure 20.

Using an Ethernet/IP network to connect the various devices offered a wealth of advantages over more traditional serial multiplexing systems. Cabling was trivial, and in particular there were none of the distance constraints often associated with serial multiplexers. Also, by plugging a laptop computer into the Ethernet hub in the table, network debugging tools could monitor the flow of information in real time and in great detail, without interfering with the normal operation of any of the devices or the overall system. The sensors, data processing programs, and display programs were also engineered to communicate using UDP as a form of network-compatible interprocess communication. The end result was perhaps the most important advantage of using an Ethernet-based architecture: everything was completely delocalized. Individual software subsystems could be moved from one computer to another to take advantage of different debugging environments or to rule out computer-specific troubles and other software interactions.

System software

In designing the interactive dining table we wanted to allow visitors seamless access to the multimedia content within the exhibit environment. We wanted to create a space where people without computer skills could interact with the museum exhibit as fluidly and easily as they would with a piece of furniture or a book.

In implementing the system software for the dining table we had three major constraints. First, the time to completion for this part of the project was a short couple of months. Second, since our group knew how to build hardware and software, but knew little about authoring content, we needed to create a software environment that enabled the design company NearLife to focus on content creation. Third, our system software needed to interact with our sensing devices over Ethernet in real time, by human perception standards.

We chose to use an Internet browser as our software environment because of the built-in support for a variety of interactive content and the ability to rapidly prototype using the HyperText Markup Language (HTML). There were two problems with using HTML alone. First, straight HTML does not give designers enough control over layout and content. Second, and a much larger problem, Web browsers are based on a client/server model. In this model a browser makes an HTTP request to a server and, in a time usually long enough to be perceived by the user, the server responds. This information exchange had too much overhead for our system to work in real time. In addition, there was no built-in means for browsers to respond to User Datagram Protocol (UDP) packets. Fortunately, modern browsers interpret a number of other languages that enrich the experience of our modern Internet. In the end we were able to make images, videos, and text dynamically change in response to physical icons and touch.

Figure 20 Photograph of the Filament F3r3

Figure 11 shows the overall architecture. The first task was to enable the browser to receive content from our local input devices on the Ethernet. To achieve this, we wrote a small Java applet that communicates with our devices using UDP. Modern browsers make it trivial to embed Java applets in a Web page, but the events within an applet and the events within the Web page itself are usually kept separate. JavaScript** is a scripting language built into modern browsers that allowed us to make this connection. To do this, the Java applet was encapsulated in a JavaBeans** component. This encapsulation creates a standard interface (a bean) that allows two software objects to communicate without the need to know the internals of the other object. This solved our real-time user input problem. This could be an interesting pathway for future projects to receive real-time sensor input to a Web page without needing remote servers. This could also be a means for people with browsers to send information to Web servers via nonstandard sensor or input devices.

With user input worked out, we then turned to the content design aspect. Dynamic HTML (DHTML) and Cascading Style Sheets (CSS) are the industry's response to creating dynamic content with full control of the layout. One can specify the location and visual attributes of objects precisely on the screen. The other important piece is that these attributes can change: by using JavaScript, one can quickly script a complex series of events given a user's input. For example, when a user touches a hot spot, visual feedback is given by brightening the color of hot spots through a transparency effect of one image over another. We were able to display text, images, and videos dynamically to delve into more information about a building. We chose Microsoft Internet Explorer** 5.0 running on Microsoft Windows 98** because its vast, albeit nonstandard, set of multimedia effects allowed us to create a rich environment that could not have been done with any other browser.

JavaScript, as a scripting language, suffers in performance relative to compiled code. It was paramount to the overall control of the software project, but there were certain pieces, such as animations, which needed to be precompiled to maximize performance. We turned to Macromedia Flash** to solve this problem. Macromedia has made a set of intuitive tools for creating quick, beautiful animations that are Web-ready.

The software environment we created spanned many levels: there is microcode on RISC (reduced instruction-set computer) microcontrollers that communicate over Ethernet to a Java applet encapsulated as a JavaBeans component that communicates with JavaScript, which controls the HTML and DHTML of a Web page that then renders text, images, and video. This seems complex, but the beauty of having so many levels of description was that the end product was exactly as intended: seamless.

Projection display hardware

The ten projectors used in the welcome mat and the dining table installations were NEC MultiSync** MT1035 projectors, with light output of 1300 ANSI (American National Standards Institute) lumens. For the dining table, the projectors were mounted in a circular drop ceiling directly above the place settings, with an additional projector over the lazy Susan. They were all side-mounted with lenses facing downwards. Special lenses were used to focus the display area to the size of each place setting. The apertures in the ceiling for the projectors were designed to create the impression of spotlights from the ceiling. Cabling (the VGA, or video graphics adapter, cables for the display and the remote control cables for the projectors) was run through the roof of the gallery to the back room used to store the computers running the exhibit. Figure 21 shows a section of the table and projector housing with the display paths for the projectors in red.

As mentioned in the introduction to this paper, the forward projection display was chosen over flat-panel touch screens or rear projection screens. In the kiosk environment, where the panel is vertical and users are viewing it head-on, traditional LCD technology makes a great deal of sense. But to enable the user to drop things, cut things, and generally use the table as a table, the forward projection method is far superior to any conventional touch-screen technology. Indeed, an embedded flat-panel touch screen would require a thick protective layer of glass, which would make viewing the screen at an angle difficult.

As for the rear projection method, the projection hardware would occupy space underneath the table. This might impair the look or the functionality of the table. The use of forward projection did not in any way change the feel of the table or the ability to sit comfortably at it. There is a definite psychological difference between an LCD screen that is indestructible because it has a thick slab of glass in front of it, and a table that is indestructible because it is a simple, solid piece of furniture. There was some concern that display occlusion (which would occur when a user places a hand into the path of the image projected onto a place setting) would have a negative impact, but the area occluded by the hand was not visible to the user anyway. The projectors themselves were well concealed, and the overall effect was closer to a magical glowing panel than a traditional projected image.

Results and conclusions

The technology development cycle for this project was significantly different from any other internal project of similar scale. One of the major observations from the time spent setting up the installation in the gallery was the difficulty of doing significant hardware debugging on site (see Figure 22): the level of developmental support that can be obtained in art museums is much less than in a conventional laboratory, and it took much longer than initially expected to complete the installation. Integrating the array boards and tag readers with the low-level networking hardware and software proceeded without a hitch. However, the development and integration of the software on the back-room computers was extremely difficult, despite the fact that the interpreters being used were performing relatively simple-to-describe computations. The lion's share of the development time was spent debugging the development environments rather than the actual algorithms. Coming out of these experiences was a stronger realization of the need for a system that would allow limited arbitrary computation to be performed without the overhead of a full operating system, and further development of this concept is under way.

Once the development was complete, it was most gratifying to observe visitors use the various exhibits. The reactions to the welcome mat ranged from the mildly amused to the completely visceral, and even with the fairly simple mapping described in the section on the welcome mat, the behavior was still novel enough to engage some of the younger users for some time. Most users seemed to grasp the concept that the space was, in some way, "aware" of their presence.

The concepts of attaching information to physical objects, navigating through this information with hand gestures, and sharing information using the lazy Susan were met with widespread understanding and acceptance by all the visitors to the gallery. The speed with which visitors acquired an understanding of how to use and navigate the interface told us we succeeded in making the interface simple and intuitive. There seemed to be many people who had vague ideas that there had to be a better way to get at information than a monitor and mouse, but no concept of what that could be; seeing and using the dining table really brought those ideas into focus for them. Perhaps the best reaction was that of an elderly museum benefactor who pulled Neil Gershenfeld aside and commented that she loved the installation and the fact that there were no computers anywhere near the table, because she hated computers but she loved being able to look at all the information. With all the fast microprocessors in the dining table installation she was probably closer to more computing power than she had ever been in her life, but the interface was so simple that it did not matter, and that more than anything else was the desired goal.


Flavia Sparacino built the initial prototype for the interactive table. The projectors used in the project were supplied by NEC Technologies. Furniture Co., New York, supplied the table. At MoMA, Andrew Davies, project manager and interactive environment producer, and Paul Niebuhr, the director of information systems, gave invaluable assistance during the installation. Greg Van Alstyne, the art director, directed the team from NearLife, Inc., that designed the final interface and handled the content editing and formatting for the project. The images of the gallery floor plan, the projector schematic, and the place setting and lazy Susan photographs are used courtesy of Andrew Davies, Greg Van Alstyne, Paul Niebuhr, and MoMA. The installation would not have been possible without the work of Kelly Dobson and John Paul Strachan. The technology development was conducted in the Physics and Media Group and supported by the Things That Think consortium at the MIT Media Laboratory.

Figure 21 Section of the table showing projector mountings and display paths

**Trademark or registered trademark of E. I. du Pont de Nemours and Company, Silicon Graphics, Inc., Sun Microsystems, Inc., Microsoft Corporation, Macromedia Inc., or NEC Home Electronics (U.S.A.) Inc.

Cited references and note

1. S. Wilkinson, "Phantom of the Brain Opera," Electronic Musician 13, No. 1 (1997).

2. J. Paradiso, "The Brain Opera Technology: New Instruments and Gestural Sensors for Musical Interaction and Performance," Journal of New Music Research 28, No. 2, 130–149 (1999).

3. C. R. Wren, F. Sparacino, A. J. Azarbayejani, T. J. Darrell, T. E. Starner, A. Kotani, C. M. Chao, M. Hlavac, K. B. Russell, and A. P. Pentland, "Perceptive Spaces for Performance and Entertainment: Untethered Interaction Using Computer Vision and Audition," Applied Artificial Intelligence 11, No. 4, 267–284 (1997).

4. H. Ishii and B. Ullmer, "Tangible Bits: Towards Seamless Interfaces Between People, Bits and Atoms," Proceedings of Conference on Human Factors in Computing Systems (CHI '97), Atlanta, March 1997, ACM (1997), pp. 234–241.

5. F. Sparacino, K. Larson, R. MacNeil, G. Davenport, and A. Pentland, "Technologies and Methods for Interactive Exhibit Design: From Wireless Object and Body Tracking to Wearable Computers," International Cultural Heritage Informatics Meeting (ICHIM 99), Washington, DC, September 1999, Archive and Museum Informatics, Pittsburgh, PA (1999).

6. See the Boundary Functions installation by Scott S. Snibbe,

7. R. R. Fletcher, O. Omojola, E. Boyden, and N. Gershenfeld, "Reconfigurable Agile Tag Reader Technologies with Combined EAS and RFID Capability," 2nd Workshop on Automatic Identification Advanced Technologies (AutoID '99), Summit, NJ, October 1999, IEEE, New York (1999), pp. 65–69.

8. J. R. Smith, "Field Mice: Extracting Hand Geometry from Electric Field Measurements," IBM Systems Journal 35, Nos. 3&4, 587–608 (1996).

9. L. S. Theremin, "Method of and Apparatus for the Generation of Sounds," U.S. Patent No. 1,661,058 (February 28, 1928).

10. T. G. Zimmerman, "Applying Electric Field Sensing to Human-Computer Interfaces," Proceedings of Conference on Human Factors in Computing Systems (CHI '95), Denver, 1995, ACM (1995), pp. 280–287.

11. N. Gershenfeld, "Method and Apparatus for Electromagnetic Non-Contact Position Measurement with Respect to One or More Axes," U.S. Patent No. 5,247,261 (September 21, 1993).

12. J. D. Jackson, Classical Electrodynamics, John Wiley & Sons, New York (1999).

Accepted for publication May 15, 2000.

Olufemi Omojola MIT Media Laboratory, 20 Ames Street, Cambridge, Massachusetts 02139-4307 (electronic mail: [email protected]). Mr. Omojola is a research assistant in the Physics and Media Group at the MIT Media Laboratory. He received the S.B. degree in electrical engineering and computer science from MIT in 2000 and is currently working toward an M.S. degree. His interests are in flexible reconfigurable hardware architectures for signal processing, and applications for human-computer interfaces.

Figure 22 Photograph of gallery during installation

E. Rehmi Post MIT Media Laboratory, 20 Ames Street, Cambridge, Massachusetts 02139-4307 (electronic mail: [email protected]). Mr. Post is presently a doctoral candidate in the Physics and Media Group of the MIT Media Laboratory. He received a master's degree in media arts and sciences from MIT in 1998, and a B.Sc. in physics from the University of Massachusetts, Amherst.

Matthew D. Hancher MIT Media Laboratory, 20 Ames Street, Cambridge, Massachusetts 02139-4307 (electronic mail: [email protected]). Mr. Hancher is a research assistant in the Physics and Media Group of the MIT Media Laboratory and a candidate for an M.S. degree. Mr. Hancher's research focuses on embedded networking and autonomous robotics.

Yael Maguire MIT Media Laboratory, 20 Ames Street, Cambridge, Massachusetts 02139-4307 (electronic mail: [email protected]). Mr. Maguire is a Ph.D. candidate at the MIT Media Laboratory. He is an IBM Student Fellow holding a master's degree in media arts and sciences from MIT. Mr. Maguire received a degree in engineering physics with Highest Honors from Queen's University in Canada. He is currently working on building a tabletop device for quantum computing, which holds the promise of computing faster than traditional, semiconductor-based computers. He currently holds an NSERC fellowship from Canada and is generally interested in the fundamental ties between information processing and physics.

Ravikanth Pappu MIT Media Laboratory, 20 Ames Street, Cambridge, Massachusetts 02139-4307 (electronic mail: [email protected]). Mr. Pappu is an IBM Fellow in the Physics and Media Group at the MIT Media Laboratory. His doctoral work focuses on designing and implementing inexpensive systems for cryptographic authentication schemes. Prior to this, he worked on algorithms for computing holograms for the MIT second-generation holographic video system and co-created the first dynamic holographic video system with haptic interaction. Mr. Pappu holds a bachelor's degree in electronics and communication engineering from Osmania University, India. He received a master's degree in electrical engineering from Villanova University in 1993, and a master of science in media arts and sciences from MIT in 1995.

Bernd Schoner MIT Media Laboratory, 20 Ames Street, Cambridge, Massachusetts 02139-4307 (electronic mail: [email protected]). Mr. Schoner holds engineering degrees from the University of Technology in Aachen, Germany, and the Ecole Centrale de Paris, France. He is currently a Ph.D. candidate at the MIT Media Laboratory, working on the sensing, modeling, and synthesis of acoustical instruments.

Peter R. Russo MIT Media Laboratory, 20 Ames Street, Cambridge, Massachusetts 02139-4307 (electronic mail: [email protected]). Mr. Russo is currently a sophomore at MIT studying physics and electrical engineering. He is an undergraduate researcher at the MIT Media Laboratory, where he works on conductive fabric touch sensors, and near-field electric and magnetic field sensors.

Richard Fletcher MIT Media Laboratory, 20 Ames Street, Cambridge, Massachusetts 02139-4307 (electronic mail: [email protected]). Mr. Fletcher is a Ph.D. candidate in the Physics and Media Group at the MIT Media Laboratory, working in the field of electromagnetic tagging. His current thesis research is devoted to developing new ways of encoding information in smart material structures. Prior to coming to the Media Lab, Mr. Fletcher worked for five years at the U.S. Air Force Materials Laboratory designing passive microwave structures. He holds undergraduate degrees from MIT in physics and electrical engineering.

Neil Gershenfeld MIT Media Laboratory, 20 Ames Street, Cambridge, Massachusetts 02139-4307. Dr. Gershenfeld leads the Physics and Media Group at the MIT Media Laboratory and directs the Things That Think research consortium. His laboratory investigates the relationship between the content of information and its physical representation, from developing molecular quantum computers, to smart furniture, to virtuosic musical instruments. Author of the books When Things Start to Think, The Nature of Mathematical Modeling, and The Physics of Information Technology, Dr. Gershenfeld has a B.A. degree in physics with High Honors from Swarthmore College and a Ph.D. from Cornell University, was a Junior Fellow of the Harvard University Society of Fellows, and was a member of the research staff at AT&T Bell Laboratories.