IPT/EGVE ’03 (Paper Submission), Volume 0 (2003), Number 0

The Metaverse – A networked collection of inexpensive, self-configuring, immersive environments

(Extended Abstract)

C. Jaynes, W. B. Seales, K. Calvert, Z. Fei, J. Griffioen

Laboratory for Advanced Networking, Department of Computer Science

University of Kentucky, Lexington, KY 40506

1. Introduction

The Internet and Web have fundamentally changed the ways in which people communicate, learn, interact, share information, conduct business, debate issues, etc. However, despite these impressive scientific and technological advances, the primary modes of computer-based communication and collaboration remain largely unchanged. Users still interface with computer systems via the conventional keyboard, mouse, and monitor/windowing system and communicate with one another using 20+ year old mechanisms such as electronic mail and newsgroups. Even new and exciting capabilities, such as (multi-party) video-conferencing [12, 1] and real-time navigation of 3D models [6, 4], have had a limited effect because they are constrained to the conventional keyboard/monitor interface and are often subject to inadequate network support/bandwidth, resulting in disappointing interactions (blurry pictures, annoying pauses and skips, sluggish response, postage-stamp size video, etc.).

Recently, some emerging technologies have broken free from the conventional interface, such as “virtual reality” systems, “augmented reality” systems, and “immersive” systems [7, 21, 18, 19, 8, 14, 13, 15, 27, 16, 2]. These systems deliver truly amazing sensory experiences that go beyond the conventional PC interface. However, they are difficult to install, configure, calibrate, and maintain. Furthermore, by design, these systems have very strict physical space requirements (e.g., CAVEs require flat, backlit wall surfaces of a particular size). These systems are all built from special-purpose hardware components, ranging in cost from expensive to very expensive. In many cases, the immersive environment is stand-alone, incapable of communicating with other environments. In a few cases, communication is supported between immersive environments, but it requires exceptionally high bandwidth with QoS network support to blast video from one immersive environment to another. Also, communication is typically allowed only between “identical” environments, since the video being transmitted would not provide the correct “perception” in a dissimilar environment. Thus, these systems are collaborative only to a limited extent, and the scale of the collaboration is typically limited to one other environment (i.e., a point-to-point link between environments). They are also too expensive, large, or complex to be used by the typical computer user working in an office, classroom, lab, or at home.

2. The Metaverse Approach

Our research is exploring a novel, flexible, and inexpensive approach to the design of future collaborative immersive environments. In particular, we are developing scalable, self-calibrating, immersive projector-based displays that are vertically integrated with advanced network protocols to support new collaboration models. We call the resulting system the Metaverse*. The objective of the Metaverse is to provide users with an open, untethered, immersive environment that fools their visual senses into believing that the traditional barriers of time and space have been removed. Users access this meta-world through an interface called a Metaverse Display Portal that is (1) visually immersive, (2) self-configuring and monitoring, (3) interactive, and (4) collaborative. An environment that supports such interaction is impossible without special-purpose computer networks, and we use the term Digital Media Networks to highlight the fact that the computer network is a critical component in supporting collaborative, visually immersive applications.

* The term Metaverse was first used in Neal Stephenson’s landmark science fiction novel Snow Crash [25] to refer to a similar immersive environment.

Unlike existing immersive designs, we are designing an immersive environment (portal) that can be used in both high-end environments such as carefully designed CAVEs and in low-end environments such as a user’s office (and anything in between). Each portal consists of an arbitrary number of Metaverse elements (METELs), constructed from inexpensive off-the-shelf components. Each METEL includes a rendering client (PC), a network card, a graphics accelerator, and a high-resolution projector. The Metaverse elements are self-calibrating and thus automatically configure themselves into a coherent immersive interface, regardless of the number of elements used or their location. Consequently, new METELs can be added or removed quickly and easily to increase or decrease the “size” of the portal, and the system will automatically reconfigure itself. Because the Metaverse elements are vertically integrated with the network, each METEL automatically determines the portal to which it belongs and knows how to communicate with elements in other portals. As a result, large-scale systems with many portals of arbitrary sizes can be quickly installed and configured.
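
To make the membership step concrete, the sketch below (not the authors’ implementation) shows one plausible way a METEL could announce itself to its portal over IP multicast and learn of its peers. The group address, port, and message format are assumptions introduced for illustration.

```python
import json
import socket
import struct

PORTAL_GROUP = "239.1.2.3"   # assumed multicast group for this portal
PORTAL_PORT = 5007

def make_socket():
    """Create a UDP socket joined to the portal's multicast group."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", PORTAL_PORT))
    mreq = struct.pack("4sl", socket.inet_aton(PORTAL_GROUP), socket.INADDR_ANY)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    return sock

def announce(sock, metel_id):
    """Broadcast this METEL's presence so peers can reconfigure the display."""
    msg = json.dumps({"type": "HELLO", "id": metel_id}).encode()
    sock.sendto(msg, (PORTAL_GROUP, PORTAL_PORT))

def listen(sock, peers):
    """Collect HELLO messages from other elements in the same portal."""
    data, addr = sock.recvfrom(1024)
    msg = json.loads(data)
    if msg["type"] == "HELLO":
        peers[msg["id"]] = addr   # a membership change triggers recalibration
```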

Figure 1: Metaverse Lab: (a) arbitrarily placed overlapping projectors are (b) automatically calibrated and blended together.

We have implemented a working prototype that demonstrates the flexibility, scalability, and robustness of our design (i.e., self-calibration, auto-configuration, and real-time adaptation to unexpected changes). Our current implementation consists of 24 Metaverse elements (projectors and cameras – see Figure 1) that are arbitrarily placed in an environment of non-flat surfaces. Using feedback from the cameras, the system automatically self-calibrates, blends surfaces together with sub-pixel accuracy, and adjusts for non-planar surfaces. The system supports applications written using common rendering software (e.g., OpenGL programs) as well as our own “in-house” 3-D modeling software. We are in the process of incorporating support for VR Juggler [3], which will allow us to support haptic devices as well.

The remainder of this extended abstract describes the specific research problems we are addressing in the Metaverse Project, including auto-calibration and blending with sub-pixel accuracy, and local- and wide-area network support. We briefly describe some initial results. Each of these sections will be extended in the full version of this extended abstract.

2.1. Self-Calibration of Cooperative Displays

A fundamental difference between the focus of the Metaverse project and similar research programs is the integration of sensors with the display environment. By continuously observing the display, the system self-calibrates, correcting for photometric and colorimetric differences between devices and removing distortions introduced by non-flat and nonuniform display surfaces. In addition to calibration, the camera information combined with positional tracking is used to accurately estimate the position of a viewer in order to correctly pre-warp the projected images [23, 29] to render them correctly for the current viewing angle.

The ability to self-calibrate is crucial to our design because it allows Metaverse elements to be dynamically added or removed from the system without the need to physically align or calibrate the mounting structure. Metaverse elements can be added to a display environment in order to increase available resolution, contrast ratio, and surface area coverage with little user effort. As elements are added to (or removed from) a logical display, they communicate their presence to other elements via the network.

Because there are no a priori constraints on the positioning of the elements, several issues arise. Non-flat projection surfaces warp the projected imagery. Non-orthogonal projections onto surfaces induce a “keystone” effect due to the projective transformation. Arbitrary overlap must also be automatically identified to achieve the correct overall blended geometric image and constant illumination. These problems arise from the extrinsic positioning of each device with respect to all other devices in the system, as well as the position of each device with respect to the display surface. Display calibration, then, must discover these relative positions in order to correct for the problems.

Furthermore, intrinsic differences in the devices such as color balance, resolution, and contrast ratio must be accounted for in order to produce a seamless display. Using the collective feedback from the cameras of the various Metaverse elements allows us to address each of these issues in an elegant and dynamic way.

Our approach uses cameras to capture both the intrinsic parameters of the system (e.g., resolution, aspect ratio, pixel size, radiometric properties, etc.) as well as the extrinsic parameters (e.g., relative position and orientation).

2.2. Calibration Details

Calibration involves both geometric and colorimetric analysis. The goal of geometric calibration is to recover the relative geometry of each device within the display. Colorimetric calibration is used to model the difference between rendered imagery in each projector and the observed image in each camera.

Geometric calibration is a two-phase process. Initially, a single base camera in the display is calibrated to a known target in the world coordinate system. Once this camera’s position in the world is known, a second phase computes the relative position of all overlapping devices. The shortest path, in terms of calibration error, between any device and the base camera can be computed and yields the absolute position of that device within the display [28].
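
A minimal sketch of this second phase, assuming pairwise calibration has already assigned an error weight to each pair of overlapping devices: Dijkstra’s algorithm then finds the lowest-error chain from every device back to the base camera. The graph representation and device names are hypothetical.

```python
import heapq

def best_calibration_chain(graph, base="base_camera"):
    """Shortest path, in accumulated calibration error, from the base
    camera to every device.  `graph` maps device -> {neighbor: error}
    for each pair of overlapping, relatively calibrated devices."""
    dist = {base: 0.0}
    parent = {base: None}
    heap = [(0.0, base)]
    while heap:
        err, dev = heapq.heappop(heap)
        if err > dist.get(dev, float("inf")):
            continue                      # stale heap entry
        for nbr, pair_err in graph[dev].items():
            cand = err + pair_err
            if cand < dist.get(nbr, float("inf")):
                dist[nbr] = cand
                parent[nbr] = dev
                heapq.heappush(heap, (cand, nbr))
    return dist, parent
```

Composing the relative transforms along the returned parent chain yields each device’s absolute position within the display.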

Once geometric calibration is complete, the spectral response differences between each projector and camera are estimated by iteratively projecting different known intensities and measuring the intensity captured at each camera in the display. Spectral response is measured for each color channel so that more accurate color prediction in the camera reference frame is possible.
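
As a sketch of this colorimetric step, the loop below projects a ramp of known intensities per channel and records the mean camera response inside the projector’s footprint; interpolating the measured curve then predicts camera values for arbitrary projector levels. The `project_flat` and `capture` helpers are hypothetical stand-ins for projector and camera I/O.

```python
import numpy as np

def measure_response(project_flat, capture, footprint, levels=range(0, 256, 16)):
    """Estimate per-channel projector-to-camera intensity response.
    project_flat(channel, level) fills the framebuffer with one value;
    capture() returns a camera image; footprint masks the projector area."""
    response = {}
    for ch in ("r", "g", "b"):
        observed = []
        for level in levels:
            project_flat(ch, level)
            observed.append(capture()[footprint].mean())  # mean camera reading
        response[ch] = (np.array(list(levels), float), np.array(observed))
    return response

def predict_camera_value(response, ch, level):
    """Predict the camera's reading for a requested projector level."""
    inp, out = response[ch]
    return np.interp(level, inp, out)
```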

There have been a number of researchers who have used the controllable nature of a projector and camera pair to recover calibration information [26, 5, 10, 20], and several different calibration techniques have been explicitly designed for front-projection display environments. In the interest of readability, we present one such calibration technique for the case in which the display surface is piecewise-planar. The planar assumption is not a requirement, however, and other calibration techniques that derive a point-wise mapping between image and framebuffer pixels could be used [22].

If we assume that the devices observe a plane, the calibration problem becomes a matter of finding the collineation $A$ such that:

$$\tilde{p}_j \cong A p_i \qquad (1)$$

for all points $p_i$ in the camera and all $p_j$ in the projector. Because $A$ is a planar projective transform (a collineation in $P^2$), it can be determined up to an unknown scale factor $\lambda$ by four pairs of matching points in general configuration. Iteratively projecting a random point from the projector onto the display surface and observing that point in the camera generates matching points.
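
Equation 1 can be solved with the standard direct linear transform (DLT): each matchpoint pair contributes two linear constraints on the nine entries of $A$, and the SVD recovers the solution up to scale. The sketch below (numpy) is the textbook formulation, not necessarily the authors’ exact solver.

```python
import numpy as np

def fit_homography(cam_pts, proj_pts):
    """Estimate the collineation A with p_j ~ A p_i (Equation 1) from
    four or more matching points, up to the scale factor lambda."""
    rows = []
    for (x, y), (u, v) in zip(cam_pts, proj_pts):
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    # The null space of the stacked constraints is the homography, up to scale.
    _, _, vt = np.linalg.svd(np.asarray(rows, float))
    return vt[-1].reshape(3, 3)
```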

In other work, we have introduced a method for accurate, sub-pixel matchpoint selection in an active display. For details regarding this process, as well as an empirical analysis of matchpoint (and ultimately calibration) accuracy, the reader is referred to [24]. Here we provide an overview of the process.

The sub-pixel location of each matchpoint center in the camera frame is estimated by fitting a 2D Gaussian, distorted by an unknown homography, to the observed greyscale response in the camera. The 2D Gaussian function is governed by two parameters (mean and variance), and the distortion parameters are the eight independent values of a distorting homography. Initially, a bounding box is fit to the detected blob, whose center and size provide the initial estimates for the Gaussian mean and standard deviation, respectively, and whose distortion provides the initial estimate of the unknown homography matrix. All ten parameters are then optimized so as to minimize the sum of the squared distances between the observed blob pixels and the distorted Gaussian predicted by the unknown parameters. This technique has been shown to provide sub-pixel estimates that are accurate to within 1/4 pixel [24].
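
A sketch of that fit, assuming an isotropic Gaussian whose canonical frame is related to the camera by an unknown homography (eleven scalars here, with the last homography entry fixed to 1 and intensities normalized to a unit peak; the paper’s exact ten-parameter parameterization may differ). scipy’s least_squares refines all parameters from the bounding-box initialization.

```python
import numpy as np
from scipy.optimize import least_squares

def fit_blob(xs, ys, intensities, x0):
    """Fit a homography-distorted 2D Gaussian to observed blob pixels.
    x0 = [mx, my, sigma, h11, h12, h13, h21, h22, h23, h31, h32]."""
    def residuals(p):
        mx, my, sigma = p[0], p[1], p[2]
        H = np.append(p[3:], 1.0).reshape(3, 3)
        # Warp camera pixels into the Gaussian's canonical frame.
        pts = H @ np.vstack([xs, ys, np.ones_like(xs)])
        u, v = pts[0] / pts[2], pts[1] / pts[2]
        model = np.exp(-((u - mx) ** 2 + (v - my) ** 2) / (2 * sigma ** 2))
        return model - intensities       # sum of squares minimized internally
    fit = least_squares(residuals, x0)
    mx, my = fit.x[0], fit.x[1]
    H = np.append(fit.x[3:], 1.0).reshape(3, 3)
    # The sub-pixel matchpoint is the Gaussian mean mapped back to the camera.
    c = np.linalg.inv(H) @ np.array([mx, my, 1.0])
    return c[:2] / c[2]
```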

The resulting sub-pixel camera pixel is then stored with its matching projector pixel $p_j$. Given at least four random pairs (for a set of degenerate cases see …), we compute $A$ up to an unknown scale factor $\lambda$. We currently compute $A$ using 10 matching pairs, which has proven to be sufficient empirically. The accuracy of the recovered $A$ can be measured as a pixel projection error on the projector’s framebuffer for a number of matching points. Specifically, we make calibration error estimates by illuminating the scene with a known projector pixel $p$, observing its corresponding position in the camera, and then computing a (sub)pixel difference:

$$\varepsilon = \sum_{i}^{N} \left\| p - A\hat{p} \right\|^{2} \qquad (2)$$

In our experiments, $\varepsilon$ is measured by generating 50 points in the projector frame and calculating projection error as in Equation 2. To improve calibration accuracy, we employ a Monte Carlo technique that estimates $A$ over many trials of randomly generated matchpoints and measures $\varepsilon$ for each trial. The recovered $A$ that leads to the smallest $\varepsilon$ is retained. Experimentation reveals that, for our situation, ten trials are usually sufficient to recover accurate calibration. Mean re-projection error is reduced to sub-pixel accuracy, typically between 0.3 and 0.5 pixels.
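
A sketch of the Monte Carlo refinement, taking a DLT-style solver like `fit_homography` above as a parameter; `sample_matchpoints` is a hypothetical stand-in for the project-and-observe loop. Each trial fits $A$ from ten random pairs and keeps the $A$ with the smallest error over 50 held-out points (Equation 2).

```python
import numpy as np

def reprojection_error(A, proj_pts, cam_pts):
    """Equation 2: squared pixel error between known projector pixels
    and camera observations mapped through A.  Points are Nx2 arrays."""
    pts = A @ np.vstack([cam_pts[:, 0], cam_pts[:, 1], np.ones(len(cam_pts))])
    mapped = (pts[:2] / pts[2]).T
    return np.sum((proj_pts - mapped) ** 2)

def monte_carlo_calibrate(sample_matchpoints, fit_homography, trials=10):
    """Fit A over several random matchpoint sets and keep the best."""
    test_proj, test_cam = sample_matchpoints(50)    # held-out error points
    best_A, best_eps = None, np.inf
    for _ in range(trials):
        proj, cam = sample_matchpoints(10)          # ten pairs per trial
        A = fit_homography(cam, proj)
        eps = reprojection_error(A, test_proj, test_cam)
        if eps < best_eps:
            best_A, best_eps = A, eps
    return best_A, best_eps
```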

Using this “daisy-chaining” approach to calibration is not without problems, however. Although a single projector pair can be relatively calibrated to less than a pixel of accuracy, propagation of error can accumulate across the display. For projectors that are far from the origin of the world coordinate system and the base camera that observes it, accumulation of error can lead to calibration problems. For our 24-projector display, we have observed an error of 3-5 pixels for projectors on the periphery. Addressing this problem is a subject of our current research.

Figure 2 shows a 24-projector display. Once the base camera is calibrated, full calibration of the display can be achieved in approximately 20 minutes. Figure 2 depicts calibration accuracy by instructing the display to render a set of uniform grids in the world frame of reference.


Figure 2: Auto-calibration of the display: a grid pattern, drawn in the world coordinate system, demonstrates calibration accuracy.

3. Network Support

The ability to dynamically add new METELs to the Metaverse results in superior flexibility and scalability over existing approaches. However, unlike systems based on high-end multiprocessor machines (e.g., SGI Onyx) where the processing is tightly coupled, METELs are loosely connected via a conventional (and inexpensive) local area network (e.g., 100 Mbps Ethernet). Consequently, scaling up to large systems requires efficient local area network protocols. Furthermore, to support collaboration with remote Metaverse portals, efficient wide area network protocols are needed.

Our current local area communication protocol employs pre-caching and multicast synchronization to achieve the types of frame rates we desire (i.e., up to 60 fps). Given the limited local network bandwidth, static information about the entire 3D model is pre-cached at each of the METELs. Consequently, only the rendering commands need to be sent at runtime. This is similar to the approach taken by systems like Chromium [9] and VR Juggler [3]. However, to achieve a distributed form of gen-lock, the protocol uses a two-phase commit to ensure images are rendered at the same time across all METELs. To prevent unnecessary traffic and unnecessary or uncoordinated rendering, a central control node waits until all METELs are ready to render before providing the sync signal (via a single multicast packet) that gives the “go ahead to render”. To avoid implosion at high frame rates, the control node uses a k-out-of-n approach to decide when it is OK to proceed, where k is based on the current traffic load.
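
The control node’s logic can be sketched as follows; the message formats, group address, and port are assumptions, not the protocol’s actual wire format. The node gathers READY reports for the current frame and releases a single multicast GO once k of the n METELs have reported.

```python
import socket
import struct

GROUP, PORT = "239.1.2.4", 6000        # assumed sync multicast group
N_METELS = 24

def control_loop(k):
    """Two-phase render sync: gather READY from k of n METELs (phase 1),
    then multicast one GO packet that releases the frame (phase 2).
    Choosing k < n tolerates stragglers and avoids implosion at high rates."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", PORT))
    frame = 0
    while True:
        ready = set()
        while len(ready) < k:               # phase 1: collect READY votes
            data, addr = sock.recvfrom(64)
            kind, f = struct.unpack("!II", data[:8])
            if kind == 1 and f == frame:    # kind 1 == READY for this frame
                ready.add(addr)
        go = struct.pack("!II", 2, frame)   # kind 2 == GO
        sock.sendto(go, (GROUP, PORT))      # phase 2: one multicast packet
        frame += 1
```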

In addition to synchronization between the METELs, local communication is used to distribute user input and other information relevant to the display, such as the tracked position of a user. User input to the display from a mouse or keyboard, for example, must be transmitted to all METELs so that all devices can behave accordingly. User input clients and other devices such as head-trackers provide input to the display by connecting to the multicast server. Packets contain a header that describes the data to be distributed, followed by the data itself, and are sent to the server as they become available. On the next multicast synchronization these packets are sent in aggregate to all METELs responsible for processing them.

Because collaboration between Metaverse portals spanning wide area networks is susceptible to congestion and arbitrary packet delays, we are developing lightweight router mechanisms (that can be executed at raw line rates) to allow end systems (i.e., in this case, Metaverse portals) to control the way packets are handled inside the network. In particular, we are developing two companion services called Ephemeral State Processing (ESP) and Lightweight Processing Modules (LWP).

The ESP network service allows applications to deposit, operate on, and later retrieve small pieces of data (values) at network routers. The scalability of the service derives from the fact that the data has a small fixed lifetime, say 10 seconds, after which it is automatically removed. Because data is removed automatically, there is no need for explicit control messages to destroy or manage the state at the various routers. Moreover, data stored at routers is uniquely identified by an application-selected (64 bit) tag value. Because the tag space is large and values are removed after a short period of time, it is impractical for a user to guess another user’s tags, resulting in the illusion that each application has a “private store”. All this occurs without any management overhead. Using this service, we have shown that end systems can accurately identify both the point of congestion in the network and the specific level of congestion [31].
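
The router-side behavior of ESP can be sketched as a tag-indexed store whose entries expire after a fixed lifetime. The 64-bit tag space and 10-second lifetime come from the text; the operations shown (put/get and a simple count) are illustrative, not the actual ESP instruction set.

```python
import time

LIFETIME = 10.0                      # seconds; fixed lifetime from the text

class EphemeralStore:
    """Per-router ephemeral state: values indexed by 64-bit tags that
    vanish automatically, so no control messages are needed for cleanup."""
    def __init__(self):
        self.store = {}              # tag -> (value, expiry time)

    def _expire(self):
        now = time.monotonic()
        self.store = {t: (v, e) for t, (v, e) in self.store.items() if e > now}

    def put(self, tag, value):
        assert 0 <= tag < 2 ** 64    # application-selected 64-bit tag
        self._expire()
        self.store[tag] = (value, time.monotonic() + LIFETIME)

    def get(self, tag):
        self._expire()
        entry = self.store.get(tag)
        return entry[0] if entry else None

    def count(self, tag, delta=1):
        """An example compute-on-state operation a packet might invoke."""
        self.put(tag, (self.get(tag) or 0) + delta)
```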

The second service, LWP, allows applications to enable very simple processing capabilities at specific routers in the network. Current processing functions include packet duplication, packet filtering, packet redirection, and packet reordering. Unlike other active network approaches for enabling new services at routers, LWP only supports a very restrictive set of (parameterized) processing modules. Because the processing is simple, LWP modules can be implemented in hardware to operate at line rates. Moreover, because end systems enable the functionality via a secure (encrypted and authenticated) point-to-point (i.e., direct) connection to the router, the service can be made secure, thereby avoiding the security problems that plague most active network approaches (i.e., using potentially untrusted active packets to enable new services).

By combining ESP and LWP together, we have shown that we can implement application-specific multicast distribution trees (useful for customized communication between Metaverse portals) by first identifying the desired branch points in the distribution tree via ESP and then enabling duplication functions at the desired routers via LWP [30]. We have also shown that we can implement scalable layered multicast using these same two services [31]. Layered multicast is particularly useful when a Metaverse portal wants to dynamically adjust the transmission quality it is receiving from other portals based on the current level of congestion reported by ESP.

4. Results and Conclusions

Using our approach, we have deployed three different display environments and are in the process of networking them together using specialized Digital Media Networks.

The CoRE laboratory is a 24-projector, 4-camera display environment that is primarily used to explore display calibration, reconfigurability, and interactive display techniques. Significant new advances in the CoRE display are algorithms that allow the display to automatically detect and remove shadows [11], a technique to allow users to interactively reorient projectors in real-time while the display is in use [24], and a method to produce super-resolution overlays by exploiting projector overlap within the display [17].

A second display, consisting of 14 projectors and two cameras, has been deployed for use within a Computational Fluid Dynamics laboratory. The display is in regular use by faculty and students who are visualizing complex fluid flow problems.

A third display has been deployed in the College of Natural Sciences at the University of Puerto Rico and will be used for visualization in conjunction with a digital library initiative there. The display is composed of four projectors and a single camera.

These initial display environments provide the testbed for our research program in core display technologies, drawing on problems from computer graphics, computer vision, visualization, and human-computer interaction. We have begun to network these displays together with a focus on vertical integration of the network with the display devices, and on specialized protocols capable of delivering multimedia data between the displays.

Acknowledgements

This work is supported in part by NSF grants EIA-0101242 and ANI-0121438.

References

1. The MBone Web Page. http://www.mbone.com.

2. D. Bennett. Alternate Realities Corporation, Durham, NC 27703. Cited July 1999. http://www.virtual-reality.com/.

3. A. Bierbaum, C. Just, P. Hartling, K. Meinert, A. Baker, and C. Cruz-Neira. VR Juggler: A virtual platform for virtual reality application development. In IEEE Virtual Reality, Yokohama, Japan, March 2001.

4. E. Chen. QuickTime VR – An image-based approach to virtual environment navigation. In SIGGRAPH 95 Conference Proceedings, pages 29–38, August 1995.

5. H. Chen, R. Sukthankar, G. Wallace, and T. Cham. Calibrating scalable multi-projector displays using camera homography trees. In Computer Vision and Pattern Recognition, 2001.

6. Volker Coors and Volker Jung. Using VRML as an interface to the 3D data warehouse. In Don Brutzman, Maureen Stone, and Mike Macedonia, editors, VRML 98: Third Symposium on the Virtual Reality Modeling Language, New York City, NY, February 1998. ACM SIGGRAPH / ACM SIGCOMM, ACM Press. ISBN 0-58113-022-8.

7. C. Cruz-Neira, D. J. Sandin, and T. A. DeFanti. Surround-screen projection-based virtual reality: The design and implementation of the CAVE. In SIGGRAPH 93 Conference Proceedings, volume 27, pages 135–142, August 1993.

8. M. Czernuszenko, D. Pape, D. Sandin, T. DeFanti, L. Dawe, and M. Brown. The ImmersaDesk and InfinityWall projection-based virtual reality displays. In Computer Graphics, May 1997.

9. G. Humphreys, M. Houston, Y. Ng, R. Frank, S. Ahern, P. Kirchner, and J. Klosowski. Chromium: A stream-processing framework for interactive rendering on clusters.

10. C. Jaynes, S. Webb, and R. M. Steele. A scalable framework for high-resolution immersive displays. In International Journal of the IETE, volume 48, August 2002.

11. C. Jaynes, S. Webb, R. M. Steele, M. Brown, and B. Seales. Dynamic shadow removal from front projection displays. In IEEE Visualization 2001, San Diego, CA, October 2001.

12. Microsoft. The Microsoft NetMeeting Program. http://www.microsoft.com/netmeeting/.

13. Origin Instruments Corporation. http://www.orin.com/.

14. Panoram Technologies, Inc. http://www.panoramtech.com/.

15. PowerWall. http://www.lcse.umn.edu/research/powerwall/powerwall.html.

16. Pyramid Systems. http://www.fakespace.com (formerly http://www.pyramidsystems.com/).

17. D. Ramakrishnan and C. Jaynes. Inverse super-resolution reconstruction: Superposition of projected imagery in the framebuffer optique. In Submitted to: International Conference on Computer Vision and Pattern Recognition, 2003.

18. R. Raskar, M. Brown, R. Yang, W. Chen, G. Welch, H. Towles, B. Seales, and H. Fuchs. Multiprojector displays using camera-based registration. In IEEE Visualization, San Francisco, CA, October 1999.

19. R. Raskar, M. Brown, R. Yang, W. Chen, G. Welch, H. Towles, G. Seales, and H. Fuchs. Multi-projector displays using camera-based registration. IEEE Visualization ’99, 1999.

20. R. Raskar, M. Brown, R. Yang, W. Chen, G. Welch, H. Towles, G. Seales, and H. Fuchs. Multi-projector displays using camera-based registration. IEEE Visualization ’99, 1999.

21. R. Raskar, G. Welch, M. Cutts, A. Lake, L. Stesin, and H. Fuchs. The Office of the Future: A unified approach to image-based modeling and spatially immersive displays. In SIGGRAPH 98 Conference Proceedings, July 1998.

22. R. Raskar, G. Welch, and H. Fuchs. Seamless projection overlaps using image warping and intensity blending. In 4th International Conference on Virtual Systems and Multimedia, November 1998.

23. B. Seales, G. Welch, and C. Jaynes. Real-time depth warping for 3-D scene reconstruction. In 1999 IEEE Aerospace Conference, 1999.

24. R. M. Steele and C. Jaynes. Iterative estimation for active subpixel calibration: A statistical analysis of accuracy and behavior. In Submitted to: International Workshop on Statistical Analysis in Computer Vision, in conjunction with CVPR 2003.

25. N. Stephenson. Snow Crash. Spectra Books, 1993.

26. R. Surati. Scalable Self-Calibrating Display Technology for Seamless Large-Scale Displays. PhD thesis, Computer Science and Electrical Engineering Department, Massachusetts Institute of Technology, 1999.

27. Trimension Systems Ltd. http://www.trimension-inc.com/.

28. R. Tsai. Synopsis of recent progress on camera calibration for 3D machine vision, pages 147–159. MIT Press, 1989.

29. G. Welch, G. Bishop, L. Vicci, S. Brumback, K. Keller, and D. Colucci. The HiBall tracker: High-performance wide-area tracking for virtual and augmented environments. In Proceedings of the ACM Symposium on Virtual Reality Software and Technology 1999 (VRST 99), 1999.

30. S. Wen, J. Griffioen, and K. Calvert. Building multicast services from unicast forwarding and ephemeral state. In Proceedings of the OpenArch 2001 Conference, April 2001.

31. Su Wen, James Griffioen, and Kenneth L. Calvert. CALM: Congestion-Aware Layered Multicast. In Proceedings of the 5th International Conference on Open Architectures and Network Programming (OPENARCH ’02), pages 179–190, June 2002.
