
Journal of Virtual Reality and Broadcasting, Volume 6 (2009), no. 8
urn:nbn:de:0009-6-22579, ISSN 1860-2037

VR Based Visualization and Exploration of Plant Biological Data

Wolfram Schoor∗‡, Felix Bollenbeck∗, Thomas Seidl∗, Diana Weier†, Winfriede Weschke†, Bernhard Preim‡, Udo Seiffert∗, Rüdiger Mecke∗

∗ Fraunhofer Institute for Factory Operation and Automation (IFF)
Sandtorstrasse 22, 39106 Magdeburg, Germany
phone: +49 391 4090333
email: {wolfram.schoor,felix.bollenbeck,thomas.seidl,ruediger.mecke,udo.seiffert}@iff.fraunhofer.de
www: www.iff.fraunhofer.de

† Leibniz Institute of Plant Genetics and Crop Plant Research (IPK)
Corrensstrasse 3, 06466 Gatersleben, Germany
phone: +49 39482 50
email: {weier,weschke}@ipk-gatersleben.de
www: www.ipk-gatersleben.de

‡ Otto von Guericke University
Universitätsplatz 2, 39106 Magdeburg, Germany
phone: +49 391 6718772
email: [email protected]
www: www.uni-magdeburg.de

Abstract

This paper investigates the use of virtual reality (VR) technologies to facilitate the analysis of plant biological data at distinct steps in the application pipeline. Reconstructed three-dimensional biological models (primary polygonal models) transferred to a virtual environment support scientists' collaborative exploration of biological datasets so that they obtain accurate analysis results and uncover information hidden in the data. Examples of the use of virtual reality in practice are provided, and a complementary user study was performed.

Digital Peer Publishing License
Any party may pass on this work by electronic means and make it available for download under the terms and conditions of the current version of the Digital Peer Publishing License (DPPL). The text of the license may be accessed and retrieved via Internet at http://www.dipp.nrw.de/.
First presented at VRIC'2008, extended and generalized for JVRB.

Keywords: model exploration, biological data, plant visualization, immersive VR

1 Introduction

Three-dimensional contents open completely new possibilities to explore information, extract knowledge and deliver presentations in virtual environments. While three-dimensional and interactive representations of virtual prototypes are widespread in various industrial domains such as engineering and design, such interactive 3-D models are rarely used in crop plant research. In the field of design, for example, the added value of 3-D information can already dramatically increase the effectiveness of work in the prototype design phase. Data visualization and exploration in virtual environments, based on such interactive 3-D models, will likewise become extremely important in crop plant research in the future, because answers to key questions regarding functional relations are hidden in the 3-D structures or architecture of plants. New possibilities are enabling crop plant researchers to analyze scientific data in immersive environments. New insights from the analysis of biological data help optimize plant growth and crop yields. This is instrumental to improving farming and could contribute to reducing the problem of world hunger.

2 Related Work

A brief overview of existing VR technologies and their features is followed by a short introduction to basic biomedical methods applied to data pools and concrete examples of VR application development for plants or plant parts.

2.1 VR Display Systems

The types of presentation in use are as diverse as the multitude of fields of application for data exploration and visualization in virtual interactive environments. Adapted from a classification applied by BIOCCA AND LEVY [BL95] and based on [Ber04], three basic forms of visual VR presentation exist:

• Immersive VR: Users are completely or almost completely integrated into a virtual environment, e.g. a CAVE or head-mounted displays (see [Lan07] and [CR06] for a survey). High acquisition costs and locational constraints on the technology have made its commercial use difficult in the past. Providing immersive VR as a service to small and medium-sized enterprises would make such solutions financially more attractive, especially when a larger group of users can use these systems simultaneously. Figure 1 presents an example of a visual grain model inspection.

Figure 1: Interactive visualization of three-dimensional biological models in a CAVE.

• Semi-immersive VR: Output devices such as vision stations, (stereoscopic) wall screens with up to 220 megapixels such as HIPerSpace (www.ucsd.edu) or single-chip stereo projectors [HHF07] fill most of the field of vision. This form of presentation generates a high level of user immersion and may be used by small user groups. Declining hardware prices will make it increasingly important in the near future. Figure 2 presents a semi-immersive display, the "immersive engineering workstation", with a biological scenario.

Figure 2: Visualizing grain structures in a semi-immersive presentation.

• Desktop VR: The presentation only fills a certain region of the field of vision (typically less than half, see Figure 3). 3-D displays such as Free2c (www.hhi.fraunhofer.de) are also possible. This presentation generally takes the form of a single-user system for continuous use.

For additional information on VR display technologiessee [Hof05].


Figure 3: Conventional desktop VR with an active stereo system.

2.2 Related Biological Work

Three-dimensionally resolved image data, traditionally on a macroscopic scale in medical applications, have become more and more available to the life sciences and biology at microscopic or even molecular resolution, often delivering stunning insights into structure and function. The observation of structure and function in its natural spatial or spatio-temporal extension is not only more intuitive to investigators, but often elucidates the underlying principles of dynamics and mechanisms in living matter; e.g. [JLM+08] deliver impressive examples of proving assumptions about intra-cellular motor mechanisms by 3-D image analysis and visualization.

While in medical applications a standard for imaging devices and acquisition exists, 3-D imaging in biology is highly application-specific. Non-destructive bio-imaging methods such as laser scanning microscopy, X-ray or magnetic resonance imaging have been employed to generate models of insects [PH04], [KHKB07] and plants [Gli06], [BMH00] on microscopic scales, either by direct intensity-based visualizations or by further processing such as tissue or organ labeling. Destructive methods overcome limitations in resolution and sample depth by serially sectioning the object, allowing analysis of histological structure and functional data such as gene-specific expression levels [RBB+94], [GDB+07], with comprehensive demands on image reconstruction on the downside.

While sample preparation and wet experiments are labor- and resource-intensive, careful analysis and visualization of biological 3-D images has become more and more important. Standard or device-shipped software often provides insufficient functionality for specific tasks, resulting in an increasing number of published software packages and comprehensive libraries for biomedical image analysis and visualization.

A small number of VR applications are in use in plant and crop research. While they are based on VR technologies, they employ different approaches and have different foci. Based on their applications, VR technologies for plant biological objects can be subdivided into the following categories:

• 3-D model creation using plant or plant part modeling techniques [DHL+98] and [LD98] to design realistic plants for rendering.

• 3-D model acquisition by laser scanning digitization techniques as described in [MBST06] to acquire high-resolution geometrical and color information about the plant phenotype.

• Plant or plant part growth simulation, a well-established field of research [XZZ06] and [VMdH+03]. The purpose of this research area is to optimize plant growth.

• Plant or plant part analysis, applied especially to identify regions and formations that control crop growth and thus the plant itself. VR is mainly used to visualize the results of an analysis [GDB+07].

While the field of biomedical data research has a variety of feasible solutions at its disposal, the field of plant biology research unfortunately still does not. Hence, algorithms and VR technologies have to be adapted for specific applications.

3 Model Generation

Knowledge of the real-life objects and the specific characteristics of imaging have to be considered to understand the requirements for the whole workflow, from data acquisition to the biologists' conclusions. This process can be summarized as a model generation pipeline, presented in Figure 4. Optional and alternative work steps are indicated by dashed lines.

3.1 Data Acquisition and Storage

Biological material was generated for visual analysis, specifically microtome serial section data of different stages of barley seed development (days after flowering, DAF), yielding high-resolution histological images similar to the method described in [GDB+07].

Figure 4: Processing workflow: The graph shows the interplay of different modules in reconstruction and abstraction towards high-resolution 3-D models. Dashed lines indicate optional processing steps, such as image stitching. The three main problem domains, preprocessing, segmentation and surface reconstruction, are boxed.

Each dataset consists of approximately 2,000 slice images of a developing barley caryopsis, which were obtained from a microtome at 3 µm slice thickness. Digitized with a color CCD camera, images originally measured 1600 × 1200 pixels at a spatial resolution of 1.83 × 1.83 µm per pixel. Since color-space analysis revealed an almost linear correlation and a limited range, it was possible to reduce the optical resolution of 12 bits per RGB channel to 8-bit grayscale images without significant loss, reducing the data volume for each dataset to approximately 5 GB. Depending on the developmental stage of the specimen, sections can be too large to be captured in a single frame with the given spatial resolution and imaging setup. As a result, mosaic images are composed from several sub-image scans. Since sub-image acquisition is not performed in a fixed arrangement, the restored image is computed based on image contents using a phase-correlation-based algorithm (see [Bra65]) and multi-resolution blending of image intensities as proposed in [SHC04].

Available data include gene expression assay data utilizing mRNA in-situ hybridization (ISH) probing of histological cross-sections. Whereas in-situ data allows visualizing spatial gene expression patterns, staining with gene-specific samples requires complex chemical protocols, so the ISH image data originates from different individual grains.

Magnetic resonance spectroscopy measurements (1H-NMR) of caryopses at different points in time were sampled in a specific device. While being non-destructive, the detected proton distribution has a lower spatial resolution of approximately 10 µm (isotropic) per voxel and does not resolve histology or structural features (see Figure 5).
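The mosaic alignment cited above boils down to finding the translation that maximizes the phase correlation between overlapping tiles [Bra65]. A minimal sketch of that step, assuming two equally sized grayscale NumPy tiles (the function name is illustrative):

import numpy as np

def phase_correlation_offset(tile_a, tile_b):
    """Estimate the translation between two overlapping grayscale
    tiles of equal shape via the cross-power spectrum."""
    fa = np.fft.fft2(tile_a)
    fb = np.fft.fft2(tile_b)
    # Normalized cross-power spectrum; eps avoids division by zero.
    cross = fa * np.conj(fb)
    cross /= np.abs(cross) + 1e-12
    # The inverse transform peaks at the relative shift.
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Map wrap-around peak indices to signed shifts.
    h, w = tile_a.shape
    return (dy - h if dy > h // 2 else dy,
            dx - w if dx > w // 2 else dx)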

Figure 5: NMR imaging data of a developing barley grain, specifically two volume renderings of intensity voxel data from different angles and cutting planes. While NMR imaging is noninvasive, it has a lower spatial resolution (10 µm/voxel) than light microscopy.

Other potential datasets, such as macroarray gene expression data or metabolomic assays by laser microdissection (LMD), could be integrated into the database to enhance synergy and facilitate the inference of analysis results. Figure 6 presents the database with its different data domains and an example image. Figure 7 shows a pie chart breaking down the data distribution per model. The organization is derived from the entity-attribute-value (EAV) design as in [Anh03]. This approach has the advantage of high flexibility when extending the database with other data domains.
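Following the EAV design of [Anh03], each datum is stored as an entity-attribute-value triple instead of in wide, domain-specific tables. A minimal sketch of such a schema (table and column names are illustrative, not taken from the paper):

import sqlite3

# Minimal EAV-style schema: every stored datum is an
# (entity, attribute, value) triple, so new data domains (NMR, ISH,
# LMD assays, ...) can be added without altering the schema.
con = sqlite3.connect("visualization.db")
con.executescript("""
CREATE TABLE IF NOT EXISTS entity (
    id   INTEGER PRIMARY KEY,
    kind TEXT NOT NULL             -- e.g. 'grain_model', 'section_image'
);
CREATE TABLE IF NOT EXISTS attribute (
    id   INTEGER PRIMARY KEY,
    name TEXT UNIQUE NOT NULL      -- e.g. 'DAF', 'slice_index', 'gene_id'
);
CREATE TABLE IF NOT EXISTS datum (
    entity_id    INTEGER REFERENCES entity(id),
    attribute_id INTEGER REFERENCES attribute(id),
    value        TEXT              -- payload, or a path to a binary blob
);
""")
con.commit()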

Figure 6: Schematic view: Integration of different data domains by using an EAV model for the visualization database.

Figure 7: Data distribution for one model; the absolute data totals around 9 GB.

3.2 Image Preprocessing

In general, data sampled from real-world objects exhibits an inevitable level of noise and limited spatial and temporal resolution. Therefore, image processing and image analysis are necessary to enhance the relevant information. Applications have many diverse requirements, and solutions for end users are often devoted to specific questions with little transparency regarding algorithms and parameters. While users with biological backgrounds frequently lack detailed knowledge of image processing algorithms and theory, consistent and comprehensible approaches are necessary for reproducible and exact results.

Eliminating artifacts, distortions and signal noise in sampled images, e.g. caused by preparation and handling during digitization, is not only beneficial in terms of generating "good" data for subsequent processing and visualization, but is often a prerequisite for measurements of biological quantities in image data. While the following categories are somewhat arbitrary, they can nevertheless be considered prominent problems occurring in 3-D biomedical imaging.

The processing workflow for generating high-resolution 3-D models from digitized raw data follows the flowchart in Figure 4: While the principles behind the three main tasks, preprocessing, segmentation and surface reconstruction, are relevant for subsequent visualization and navigation, a detailed description of the process would go beyond the scope of this paper. Since the full spatial resolution is conserved up to the final surface modeling, the data volumes to be processed are large. The preprocessing of raw images includes image intensity normalization and noise removal. While adequate mapping of intensity values is computationally cheap, the exact delineation and masking of the object of interest (see Figure 9) is performed using geodesic active contours [CKS97] in a hierarchical image representation for faster convergence.
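The intensity normalization step mentioned above can be as simple as a robust linear rescaling of each slice; a minimal sketch, with percentile bounds chosen purely for illustration:

import numpy as np

def normalize_intensity(img, low_pct=1.0, high_pct=99.0):
    """Map the robust intensity range of a slice image to [0, 255];
    percentile clipping suppresses staining and handling outliers."""
    lo, hi = np.percentile(img, [low_pct, high_pct])
    out = (img.astype(np.float32) - lo) / max(hi - lo, 1e-6)
    return (np.clip(out, 0.0, 1.0) * 255).astype(np.uint8)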

Image registration is the process of transforming two or more n-D images so that corresponding structures match. Registering images onto a reference is a prerequisite to collecting and comparing structural features, and is therefore a problem that most frequently arises in biomedical image processing prior to any further analysis and visualization, as surveyed in [MV98] and [Bro92]. Image registration is generally formulated as an optimization problem over an image metric, maximizing the correspondence of images based on image grid point intensities or extracted features such as edge information, employed in quadratic or correlation-based distance formulations. Desired properties, such as smoothness of the transformation, can be enforced by regularizing the objective function with additional terms evaluating the actual transformation (see [Mod04] for an overview). Registration based on the correspondence of unique points, i.e. landmarks, is difficult since such landmarks are often ambiguous and hard to detect in biomedical images.

The choice of transformations is driven by the application, i.e. by the physical process underlying the distortion of the images, necessitating different numerical schemes. This study employs registration techniques to reconstruct distorted serial section 2-D images into an intact 3-D intensity dataset and to integrate image data from different imaging modalities in a 3-D model. MODERSITZKI [Mod04] extensively discusses numerical schemes and applications. The 2-D / 3-D reassembly of a serial section dataset requires registration of the whole image stack. Since in-core processing of arrays of several gigabytes in size is unfeasible, transformations are optimized within a local range of consecutive stack images as described in [Mod04]. Resolution hierarchies allow fast and robust optimization schemes that combine exhaustive and gradient-descent searches (see [ISNC03] for details). The intact 3-D object is thereby restored, composing a large voxel dataset showing internal histological structures (Figure 8).
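As one concrete instance of such a hierarchical scheme, the sketch below registers two consecutive sections with a rigid transform (translation plus rotation) by minimizing the mean-squared intensity difference coarse-to-fine. It is a generic SciPy illustration of the approach, not the authors' implementation:

import numpy as np
from scipy import ndimage, optimize

def mse_rigid(params, fixed, moving):
    """Mean-squared intensity difference after applying a rigid
    transform (ty, tx, theta) to `moving` (float images, same shape)."""
    ty, tx, theta = params
    c = 0.5 * np.array(moving.shape)                  # rotation center
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    offset = c - rot @ c + np.array([ty, tx])
    warped = ndimage.affine_transform(moving, rot, offset=offset, order=1)
    return np.mean((fixed - warped) ** 2)

def register_rigid(fixed, moving, levels=3):
    """Coarse-to-fine rigid registration of consecutive sections."""
    params = np.zeros(3)
    for lvl in reversed(range(levels)):               # coarsest grid first
        f = ndimage.zoom(fixed, 0.5 ** lvl, order=1)
        m = ndimage.zoom(moving, 0.5 ** lvl, order=1)
        scale = np.array([0.5 ** lvl, 0.5 ** lvl, 1.0])
        res = optimize.minimize(mse_rigid, params * scale, args=(f, m),
                                method="Powell")      # derivative-free
        params = res.x / scale
    return params                                     # (ty, tx, theta)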

Figure 8: Serial section data: The three-dimensional coherence of histological structures is reconstructed by registering the stack of section images. The figure depicts a volume rendering of the histological voxel data and virtual cutting planes.

3.3 Image Segmentation

Segmenting images into homogeneous regions on the basis of application-specific criteria is a crucial step of image analysis. All image segmentation algorithms are intended to decompose an image into segments with a specific meaning for a given application [HS85] or, expressed more technically, to identify regions that uniformly relate to a specific criterion such as texture or image intensity [Hah05]. The literature, specifically [LM01], [Hah05] and [Kan05], presents a variety of approaches to classification. With respect to user interaction, image segmentation methodology is classified here into the following techniques: manual segmentation, semi-automatic segmentation, and automatic segmentation. Figure 9 shows an example of a complete segmentation of a barley grain slice with its tissue legend.

Figure 9: Using automated segmentation algorithms, a complete labeling of multiple biologically relevant tissues in images is obtained.
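As a toy illustration of the intensity criterion above, the following sketch decomposes an image into connected regions below a gray-value threshold (threshold and size parameters are illustrative; the actual tissue segmentation in this work is far more involved):

import numpy as np
from scipy import ndimage

def intensity_regions(img, threshold, min_size=500):
    """Decompose an image into connected regions whose gray value
    falls below `threshold`, discarding tiny speckle regions."""
    mask = img < threshold
    labels, n = ndimage.label(mask)
    sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
    keep = [i + 1 for i, s in enumerate(sizes) if s >= min_size]
    return np.where(np.isin(labels, keep), labels, 0)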

Manual segmentation: This is the simplest image segmentation technique and is commonly applied to structures with widely varying morphologies. The user specifies a set of points that are connected afterwards. The contour can additionally be smoothed by familiar interpolation methods [Far88]. Since numerous segmentation points have to be specified manually, the main disadvantage of this technique is the necessary time and mental effort; accordingly, these segmentation results are less reproducible in terms of inter-rater variability. An example of manual segmentation in the context of the authors' work on spatial plant models is discussed in [GDB+07]. Voxel data is decomposed into subvolumes of specific tissues. The method described in [BS08] enables automatic segmentation of the full stack of approximately 2,000 images from a small set of reference images that a biologist has labeled manually or semiautomatically.
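The contour smoothing can, for example, use a periodic cubic B-spline through the manually placed points; a minimal SciPy sketch, one possible choice among the interpolation methods of [Far88]:

import numpy as np
from scipy import interpolate

def smooth_contour(points, n_samples=200):
    """Close and smooth a manually specified contour with a periodic
    cubic B-spline; s=0 forces interpolation of the given points."""
    pts = np.asarray(points, dtype=float)
    tck, _ = interpolate.splprep([pts[:, 0], pts[:, 1]], s=0, per=1)
    x, y = interpolate.splev(np.linspace(0.0, 1.0, n_samples), tck)
    return np.column_stack([x, y])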

Automatic segmentation: Addressing segmentation algorithmically is generally superior to manual or semiautomatic segmentation in terms of time savings and non-subjectiveness. Although a large number of methods, e.g. graph-based [BSC06] or variational formulations of the segmentation problem [CKS97], [TJW+03], have been proposed, biomedical image segmentation remains difficult. The non-uniqueness of features and structures often necessitates expert knowledge for correct identification and labeling. This particularly holds for histological image data in the plant sciences, since here tissues in images are discriminated by small-scale structures in their structural context, due to cell walls (see Figure 8), rather than by global properties such as intensity. The authors have demonstrated in previous studies that incorporating expert knowledge is indispensable for achieving exact segmentation results (see [BSS06], [BS08]). Segmentation algorithms incorporating expert-generated reference data as described in [BS08, BWSS09] have proven to deliver high segmentation accuracy, especially in non-binary segmentation scenarios. In [BS08] a registration-based segmentation algorithm is described that performs on par with supervised classification schemes such as support vector machines and neural approaches in segmenting serial section data into multiple classes based on a small set of user-generated references.
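Such registration-based segmentation can be sketched as label propagation: an expert-labeled reference section is registered to a neighboring section and its label image is warped with the estimated transform. The sketch below reuses the illustrative rigid transform parameters from Section 3.2 and nearest-neighbor interpolation to keep labels discrete:

import numpy as np
from scipy import ndimage

def propagate_labels(ref_labels, params):
    """Warp an expert-labeled reference section onto a neighboring
    section using an estimated rigid transform (ty, tx, theta), e.g.
    from register_rigid above.  Nearest-neighbor interpolation
    (order=0) keeps the tissue labels discrete."""
    ty, tx, theta = params
    c = 0.5 * np.array(ref_labels.shape)
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    offset = c - rot @ c + np.array([ty, tx])
    return ndimage.affine_transform(ref_labels, rot, offset=offset, order=0)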

Semiautomatic segmentation: These approaches are helpful since automatic segmentation results may contain inaccuracies, or the applied algorithms may require a small set of reference images to solve a given problem. In some cases, it may be impossible to use automatic segmentation at all. User-steered semiautomatic segmentation algorithms can help to eliminate these drawbacks. Although a multitude of other semiautomatic segmentation algorithms exist, the LiveWire method [MB98], [FUS+98] proved the best choice for postprocessing a few misclassifications and preparing a few reference segmentations. A modified LiveWire technique for fast and accurate segmentation of independent serial sections is presented in [SBH+08].
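At its core, LiveWire computes a minimum-cost path between a seed point and the cursor on a cost image that is low on object boundaries (e.g. the inverted gradient magnitude). A compact sketch of that core using Dijkstra's algorithm; the cost design and interaction loop of [MB98] are omitted:

import heapq
import numpy as np

def livewire_path(cost, seed, target):
    """Minimum-cost 8-connected path from `seed` to `target` on a 2-D
    cost image; assumes the target is reachable."""
    h, w = cost.shape
    dist = np.full((h, w), np.inf)
    prev = {}
    dist[seed] = 0.0
    heap = [(0.0, seed)]
    while heap:
        d, (y, x) = heapq.heappop(heap)
        if (y, x) == target:
            break
        if d > dist[y, x]:
            continue                      # stale queue entry
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                ny, nx = y + dy, x + dx
                if (dy or dx) and 0 <= ny < h and 0 <= nx < w:
                    nd = d + cost[ny, nx] * np.hypot(dy, dx)
                    if nd < dist[ny, nx]:
                        dist[ny, nx] = nd
                        prev[ny, nx] = (y, x)
                        heapq.heappush(heap, (nd, (ny, nx)))
    # Backtrack from the target to recover the optimal contour.
    path, node = [], target
    while node != seed:
        path.append(node)
        node = prev[node]
    path.append(seed)
    return path[::-1]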

Semiautomatic segmentation algorithms ideally supplement automatic segmentation. The solution presented here is a hybrid approach consisting of automatic offline segmentation and the option for users to intervene with semiautomatic segmentation.

3.4 Surface Reconstruction

For a level of abstraction higher than intensity stacks of images, the boundaries of the labeled voxel data are converted into an explicit geometric representation. The algorithm applied in Section 3.4, Triangulation, generates triangle-mesh approximations of isosurfaces in one pass, but has the disadvantage of generating a large number of faces without adapting the geometry to the true isosurface. This makes a subsequent surface simplification necessary to reduce the number of faces (see Section 3.4, Surface reduction and remeshing). The literature supplies a variety of reconstruction methods, each with different drawbacks. Methods operating at the voxel level execute subvoxel resampling, use non-binary segmentation masks or apply segmentation with subvoxel accuracy. Other data sources for surface reconstruction include point clouds resulting from the intersection of different models, or contour lines output by the semiautomatic segmentation. A number of existing solutions have been implemented, including the adaptation of their parameters to the specific data characteristics. Commercial applications are also used to reconstruct the 3-D surface. The necessary or optional steps for surface reconstruction are described below.

Triangulation: Cell-based simple triangulation similar to the marching cubes algorithm proposed by LORENSEN AND CLINE [LC87] and variants [LLVT03] have been implemented. The main drawbacks experienced so far are the immense number of triangles generated and problems with sampling in areas where sharp edges are expected. The power crust algorithm (see [ACK01]) calculates a triangulated boundary representation of the input dataset, which is a structured or unstructured point cloud. The premises for good results are a dense point cloud (a criterion that is generally fulfilled) and a smooth object surface (due to the origin of the data, a weak criterion that is not always fulfilled). Moreover, the standard power crust algorithm is unable to handle large quantities of input data. Triangulation may also be based on proximate contour lines in an image stack, e.g. [MBMB06] with an implicit curve fitting approach, or [Kep75], who casts the 3-D triangulation of subsequent contour lines as an isomorphic 2-D directed graph search problem.
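For instance, a per-tissue isosurface can be extracted with the marching cubes implementation in scikit-image; this is an illustrative stand-in, not the implementation used in the paper:

import numpy as np
from skimage import measure

def tissue_surface(labels, tissue_id):
    """Extract a triangle mesh for one tissue label from the
    segmented voxel volume; level=0.5 places the isosurface between
    foreground and background."""
    binary = (labels == tissue_id).astype(np.float32)
    verts, faces, normals, _ = measure.marching_cubes(binary, level=0.5)
    return verts, faces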


Surface reduction and remeshing: The 3-D surface data of plant biological models may consist of a very large number of small triangles (80 million triangles or more per model). Surface reconstruction algorithms (e.g., marching cubes) tend to produce oversampled representations in most regions of the triangulated structure. Eliminating this extraneous data necessitates a surface reduction as in [SZL92] and [GH97]. Figure 10 presents a grain model at different mesh resolutions; Figure 10 (b) contains 16 % of the number of triangles in Figure 10 (a). A remeshing step can be advantageous for a well-distributed surface representation, especially if the surface data is used for simulation purposes (see Section 4.2).
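Quadric-error decimation in the spirit of [GH97] is available off the shelf; the sketch below uses Open3D's simplifier as an illustrative stand-in for the reduction step, with the 16 % target of Figure 10 as the default:

import numpy as np
import open3d as o3d

def reduce_mesh(verts, faces, keep_fraction=0.16):
    """Quadric-error decimation of an extracted isosurface; the
    default keeps 16 % of the faces, as in Figure 10 (b)."""
    mesh = o3d.geometry.TriangleMesh(
        o3d.utility.Vector3dVector(np.asarray(verts, dtype=float)),
        o3d.utility.Vector3iVector(np.asarray(faces, dtype=np.int32)))
    target = int(len(faces) * keep_fraction)
    return mesh.simplify_quadric_decimation(target_number_of_triangles=target)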

Figure 10: Surface models of one grain at different resolutions. (a) Left, a grain with a mesh size of 250,000 triangles; right, detail with highlighted faces. (b) Left, the reduced mesh with approximately 40,000 highlighted faces; right, detail with highlighted faces.

Surface smoothing: [BHP06] presents a survey of mesh smoothing algorithms applied to medical data, covering the efficiency, accuracy, and appropriateness of different smoothing strategies for surface models. No such work has been done on plant biological data, which has its own idiosyncrasies arising from imaging, segmentation and surface extraction. At the moment, research is being done on handling staircases, i.e. small shifts caused by the rigid registration process, and on oversegmented areas. Comparable surveys of surface smoothing can be found in [BO03] and [YOB02]. Figure 11 reproduces an automatically segmented grain model after triangulation, triangle reduction and remeshing, shown from different perspectives.
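A common baseline among such strategies is iterative Laplacian smoothing, where each vertex moves toward the centroid of its neighbors; the paper does not state which algorithm is used here, so the following is purely illustrative:

import numpy as np

def laplacian_smooth(verts, faces, iterations=10, lam=0.5):
    """Iterative Laplacian smoothing: each vertex moves toward the
    centroid of its mesh neighbors.  Plain Laplacian smoothing shrinks
    the model slightly; a Taubin-style lambda/mu alternation would
    counteract that."""
    verts = np.asarray(verts, dtype=float).copy()
    neighbors = [set() for _ in range(len(verts))]
    for a, b, c in faces:
        neighbors[a].update((b, c))
        neighbors[b].update((a, c))
        neighbors[c].update((a, b))
    for _ in range(iterations):
        centroids = np.array([verts[list(n)].mean(axis=0) if n else v
                              for n, v in zip(neighbors, verts)])
        verts += lam * (centroids - verts)
    return verts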

Figure 11: A surface representation of multiple label isovalues is obtained by automatically segmenting histological section datasets into biologically relevant tissues. The extracted isosurfaces are smoothed and remeshed.

4 Application

Figure 12 highlights the major tasks of the application presented here. These include aspects of primary visualization, virtual extraction of a volume with a deformation model, as well as dissection and subsequent analysis of the extracted physical tissue.

4.1 Visualization of Biological Data

The main purpose of the visualization is to represent the acquired data in a way that a user understands intuitively [MJM+06]. The following two examples are representative of the multitude of visualizations of plant biological models. With the aid of VR technologies, these concrete examples produce an obvious added value for biologists.

Visualization of different data domains: One of the most important tasks for biologists is the comparison of data from different data domains. An applicable visualization can in this case provide added value, as shown in the following example. Using 4-D warping algorithms, as described in [PSM+08], virtual intermediate developmental stages can be generated to achieve a better congruence of data resulting from different individual grains. Figure 13 combines an NMR volume representation and a surface model representation of histological data.

Visualization of model differences: Since grain growth and size are subject to inherent biological diversity, models representing the morphological variances at a specific developmental stage are highly desirable, especially for integrating heterogeneous data sampled from different individual grains. [BWSS09] analyzes variances and features common among individual grains at a specific point in time and quantifies them by compiling inter-individual stochastic atlases of grains, thus enabling the quantification and visualization of biodiversity in a specimen.

Figure 13: Visualization of different data modalities: NMR-based voxel intensities are displayed by volume rendering and combined with a surface view of labeled histological data.

4.2 Virtual Extraction

Once the user has visualized the desired dataset, an appropriate interface must be provided to define the volume of interest (VOI). This can be achieved by initially choosing a prototype from a model catalog or by selecting the structure to be modified. The user can then interact with the 3-D model. An efficient and more intuitive way of doing this is in 3-D space with a 3-D interface (for example with the SpaceNavigator, www.3dconnexion.de). Figure 14 presents the 2-D standard case of deformation feature use. Even with new innovative metaphors (see [SSB08]), a 2-D view of the model modified by 3-D widgets with a 2-D mouse interface is in general not as intuitive and accurate as a direct 3-D modification, especially for beginners. The current deformation area is chosen through freehand region selection (OpenGL picking; see the red points in Figure 14). Mass-spring models, as described in [Wit97] and [BC00], are used for deformation purposes. Well-distributed intraconnectivity and uniformly distributed triangles on the surface (see the wireframe model in Figure 14) ensure that the mass-spring model's behavior is stable and visually plausible [LSH07]. Good intraconnectivity can, for example, be achieved by tetrahedization, as described in [AMSW08]. Given the extremely high complexity of these 3-D plant models, the deformation model was also realized with extensive GPU support. The VOI can be saved later (e.g. as a VRML file) and used for the dissection.
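A single integration step of such a mass-spring model can be sketched as follows; this is a generic explicit-Euler illustration in the spirit of [Wit97] and [BC00], not the GPU implementation mentioned above:

import numpy as np

def mass_spring_step(pos, vel, springs, rest_len, k, dt, damping=0.98):
    """One explicit integration step of a mass-spring surface model
    with unit vertex masses; `springs` is an (m, 2) index array of
    connected vertices and `rest_len` their rest lengths."""
    force = np.zeros_like(pos)
    delta = pos[springs[:, 1]] - pos[springs[:, 0]]
    length = np.linalg.norm(delta, axis=1, keepdims=True)
    # Hooke's law along each spring; eps avoids division by zero.
    f = k * (length - rest_len[:, None]) * delta / (length + 1e-9)
    np.add.at(force, springs[:, 0],  f)
    np.add.at(force, springs[:, 1], -f)
    vel = (vel + dt * force) * damping
    return pos + dt * vel, vel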

4.3 Dissection and Analysis

The VOI is extracted from physical objects with a technique patented by an industrial machinery manufacturer, Molecular Machines & Industries (www.molecular-machines.com). This application will not be discussed further here. Microdissection experiments allow quasi-spatially localized assays based on small sample volumes dissected from a specimen, thus providing a "snapshot" of the differently expressed genes and their regulation during development. The authors of [TWS+08] present a study of gene expression patterns in tissue-specific assays of developing barley caryopses utilizing 2-D LMD preparation of fixated serial sections. Being tissue-specific, the results of the LMD assays can be reintegrated into the visualization process (see Figure 12) and combined with the other data modalities.

Figure 12: Application workflow: Model reconstruction and VR-based virtual extraction close the circuit from real-world modeling to the acquisition of new data by physical dissection.

Figure 14: Modification of model data to determine the VOI.

5 Application Scenarios and Results

The following application scenarios, in which VR techniques could be applied, were identified in the different stages of the biologists' work. The list below contains a selection of scenarios for VR use in different stages of the pipeline.

1. Segmentation postprocessing

2. Collaborative data exploration and model validation

3. Definition of the virtual model’s VOI

4. Biological models for marketing


Segmentation Postprocessing: Biological data segmentation is a multistage process with a classification accuracy of at least 90 % when performed with the approach presented here. Optical revision requires incorporating not only the spatial information of the 2-D images, but also the visualization of the reconstructed 3-D model itself. Semi-immersive single-user VR systems, such as passive stereo with Infitec glasses (www.infitec.net), constitute a good option. Even better suited for the technicians' revision processes are autostereoscopic 3-D displays (such as www.spatialview.com). Figure 15 shows an example of VR in use.

Figure 15: Segmentation postprocessing on an eye-tracked single-user SeeFront 21” 3-D display from Spatial View.

Collaborative Data Exploration and Model Validation: A major part of the proposed pipeline is the information exchange between different working groups at their respective working stages. Biological VR contents can be processed quite easily for collaborative work in the large-scale immersive environment of the "Elbe Dom", described in [SMM+08] and in more technical detail in [SMH+07]. It is an excellent platform for combining different data domains in a visualization or linking annotations (text or files) to models. Figure 16 presents an example.

Definition of the Virtual Model's VOI: Computer-aided definition of 3-D volumes of interest is another scenario that was identified for non-conventional biological use. Biologists have the opportunity to define a virtual subset of a plant model which corresponds to the structure of a real-life object of interest to them. Figure 17 shows an example of model deformation on a 3-D display.

Figure 16: Collaborative review of new 3-D grain models and their components in a multi-user immersive environment (see [SMM+08]).

Figure 17: Definition of a 3-D VOI on a SeeReal Technologies 3-D display.

Biological Models for Marketing: Being able to present current work to external audiences is important to most enterprises and organizations. A 3-D demonstration of the outcome of current work is always preferable to a conventional presentation and leaves a far more intense impression on the viewers.

6 Experimental Evaluation

An experiment was conducted to evaluate the performance of selected VR techniques in the field of plant biological research. Subjects were split into two groups and required to perform three different tasks: a Navigation Task to evaluate navigation skills in a 3-D environment (speed and accuracy), an Exploration Task to analyze the speed at which subjects are able to locate regions of interest in a 3-D model and, finally, a Manipulation Task to identify suitable forms of presentation.

6.1 Experimental Setup

Hardware Setup: The new SpacePilot Pro from 3Dconnexion was employed in the experiments to perform 6-DOF interactions (see Figure 18). A standard laser mouse from Microsoft served as the 2-D input device (see Figure 18). A low-cost tracking system with an IR sensor (see www.nintendo.com) and IR LEDs (reflection angle larger than 70 degrees) was used for head tracking. The LEDs are mounted on (stereo) glasses. Figure 18 presents the setup with standard glasses and IR LEDs.

Figure 18: The different interaction devices: (left)SpacePilot Pro, (middle) standard mouse, and (right)head tracking.

Software Setup: In order to obtain comparable results, a virtual trackball navigation metaphor (see [Hul90]) was applied to the head tracking navigation as well as to the 2-D and 3-D mouse navigation. Table 1 provides a brief overview of the implemented navigation mapping.

SpacePilot Pro:
  Spin Left    roll cap left
  Spin Right   roll cap right
  Spin Up      tilt cap forward
  Spin Down    tilt cap backward
  Zoom In      push cap
  Zoom Out     pull cap
  Roll Left    spin cap left
  Roll Right   spin cap right

2-D mouse:
  Spin Left    LeftMouseButton + drag left
  Spin Right   LeftMouseButton + drag right
  Spin Up      LeftMouseButton + drag forward
  Spin Down    LeftMouseButton + drag backward
  Zoom In      RightMouseButton + drag forward
  Zoom Out     RightMouseButton + drag backward
  Roll Left    MiddleMouseButton + drag left
  Roll Right   MiddleMouseButton + drag right

Head tracking:
  Spin Left    move head left
  Spin Right   move head right
  Spin Up      move head up
  Spin Down    move head down
  Zoom In      move head forward
  Zoom Out     move head backward
  Roll Left    tilt head left
  Roll Right   tilt head right

Table 1: Navigation control.

Subjects: The experiments were conducted with two groups. The first group consisted of ten subjects (two female, eight male; researchers and university students) without any special background in biology. They all had moderate or substantial experience with 3-D techniques. The second group consisted of five subjects (two female, three male; all researchers) with backgrounds in biology. They all had little or moderate experience with 3-D techniques. For each subject, the order of the techniques was chosen randomly.
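To make the shared navigation metaphor concrete, the following sketch shows the classic virtual trackball mapping in the spirit of [Hul90]: a 2-D input position is projected onto a virtual sphere, and a drag between two such points induces a rotation axis and angle (all parameter values are illustrative):

import numpy as np

def trackball_vector(x, y, width, height, radius=0.9):
    """Project a 2-D input position onto the virtual trackball
    sphere; positions outside the sphere are mapped to its rim."""
    px = (2.0 * x - width) / width          # normalize to [-1, 1]
    py = (height - 2.0 * y) / height        # y axis pointing up
    pz = np.sqrt(max(radius ** 2 - px ** 2 - py ** 2, 0.0))
    v = np.array([px, py, pz])
    return v / np.linalg.norm(v)

def drag_rotation(p0, p1):
    """Rotation axis and angle induced by dragging from p0 to p1."""
    axis = np.cross(p0, p1)
    angle = np.arccos(np.clip(np.dot(p0, p1), -1.0, 1.0))
    return axis, angle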

6.2 Navigation Task

In the experimental tasks the users employed the different techniques / controllers to approximately adjust their position, orientation and direction (equivalent to a virtual camera) to a set of specified positions (within a predefined radius) and/or orientations (scalar product above a predefined threshold). A common navigation metaphor is applied for this task (see Table 1). The Navigation Task represents a typical and important function in the application.
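A subtask success test of the kind described above can be sketched as a combined distance and orientation check; the threshold values are illustrative, not those used in the experiment:

import numpy as np

def pose_reached(cam_pos, cam_dir, target_pos, target_dir,
                 radius=0.1, min_dot=0.98):
    """Subtask success test: the camera must lie within a sphere
    around the target position, and its (unit) viewing direction must
    align with the target direction, i.e. the scalar product must
    exceed a threshold."""
    close = np.linalg.norm(cam_pos - target_pos) <= radius
    aligned = np.dot(cam_dir, target_dir) >= min_dot
    return close and aligned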

Initial Conditions: Before the complex navigation test could start, spin, zoom, and roll had to be evaluated separately to verify that the subjects would obtain comparable results with the different controllers for each navigation metaphor subset (e.g., spin). For instance, zoom and roll were disabled for the spin evaluation. The sensitivity of the controlling devices was adjusted once before the start of testing. To ensure equal conditions, no acceleration parameters were used (e.g. for the 3-D mouse). Ten users from both groups participated in the initial testing. Since the differences remained within an expected range (see Figure 25 in the Appendix), the subsequent navigation and exploration tests delivered results that are conducive to sound conclusions.

Procedure: Each subject was required to solve a set of eight subtasks twice, employing each of the three techniques (3-D mouse, mouse and head tracking). The sequence of controlling devices was altered randomly, and every task had to be finished before the next controller was used. Before the start of a test, each controller's mode of action was explained and the subjects were given a brief preparation time (approximately two minutes) to familiarize themselves with it. Completion times and accuracy values, i.e. the average of the position accuracy (spin and zoom) and the roll accuracy, were measured for each subtask.

Experimental Results from the First Group: The first group's results for the complex navigation task (a combination of spin, zoom, and roll) confirm the results of the initial navigation experiment. Subjects performed the complex task best with the mouse. The subjects' mean completion time using the mouse technique (derived from the averaged times of the subtasks) is half the mean completion time using the 3-D mouse (SpacePilot Pro) or the head tracking system. The standard deviation of the mean completion time for a subtask with the 2-D mouse was far smaller than with the 3-D mouse, but comparable to that of head tracking. Accuracy was best with the head tracking system but did not differ very much from the accuracy with the 3-D mouse. The accuracy with the 2-D mouse was worst (approximately 8–10 % less accurate than the 3-D mouse and head tracking). The results with the different techniques were as good as the results of the separate navigation analysis. The 2-D mouse is the input device for nearly every 2-D and 3-D task; given their familiarity with this device, the subjects were able to complete the task more quickly, as the results in Figure 19 indicate. The simple virtual trackball metaphor supports this and allows fast and accurate navigation. On the other hand, fine tuning appeared to be more difficult (in relation to accuracy). Alternatively, the subjects may have achieved faster completion times at the expense of accuracy; since not all subjects uniformly changed their strategy toward speed or accuracy, this may not be the whole explanation. The standard deviation of the completion times was significantly smaller with head tracking than with the 3-D mouse. Evidently, head tracking is equally easy (or difficult) to use without prior experience, and the simple head tracking concept is conducive to intuitive use. This hypothesis was corroborated by subjects' responses to an informal questionnaire.

Figure 19: Results of the complex Navigation Task from the first group, which included subjects without any background in biology.

Experimental Results from the Second Group: The results from the second group correspond with those from the first group. The subjects were also able to complete the task fastest with the mouse (twice as fast as with the reference technique), but at the expense of accuracy. The standard deviations of all techniques were far higher than in the reference group. The results presented in Figure 20 point to the same conclusion as the results from group one, taking into account that the pool of subjects is too small for statistical conclusions. The nearly constant navigation times with good accuracy values for head tracking in this group, too, reinforce the preceding assumption.

Figure 20: Results of the complex Navigation Task from the second group, which included subjects with backgrounds in biology.

6.3 Exploration Task

In a second evaluation scenario, concealed areas of interest in a typical 3-D plant biological model were analyzed. Since the regions are represented by glyphs, subjects without a biology background are also able to localize them. A glyph is only visible when the subject's position is located within a predefined range of the glyph, similar to a small detail of a model that is only recognizable up close. The alpha value decreases as the subject's virtual position converges on the glyph. Roll is not relevant for this task.
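Assuming that the decreasing alpha value denotes decreasing transparency with proximity, the fade behavior can be sketched as follows; the linear falloff and the range parameter are illustrative:

def glyph_transparency(distance, visible_range):
    """Glyph visibility: fully transparent (invisible) beyond
    `visible_range`, increasingly opaque as the viewer converges on
    the glyph, i.e. the alpha/transparency value decreases linearly
    with decreasing distance."""
    if distance >= visible_range:
        return 1.0          # not visible at all
    return max(distance / visible_range, 0.0)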

Procedure: Each subject was required to solve an exploration task consisting of a set of twelve subtasks, employing each of the three techniques (3-D mouse, mouse and head tracking). The sequence of control devices was altered randomly, as were the positions of the glyphs in the model. Every task had to be completed before the next controller was used. This experimental task built upon the Navigation Task. Before the start of a test, each controller's mode of action was explained and the subjects were given a brief preparation time (approximately one minute) to familiarize themselves with it. The completion times of each subtask were measured for this task.

Experimental Results from the First Group: 2-D mouse interaction was the fastest technique for completing the exploration tasks. Nine out of ten subjects performed best with the 2-D mouse, and the remaining subject (see Figure 21, user #4) was nearly as fast with the 2-D mouse as with the other techniques. On average, users performed 50 % faster with 2-D mouse interaction than with the 3-D mouse and head tracking. On average, subjects performed similarly with head tracking and the 3-D mouse; the head tracking results were only slightly faster. Seven out of ten participants were faster or nearly as fast with head tracking interaction as with the 3-D mouse. The standard deviations of the averaged completion times of the subtasks were very high for the 3-D mouse as well as for the head tracking approach. Furthermore, timing differed greatly among users. Again, subjects displayed good results with the 2-D mouse for this task. The relative time differences between the tested devices do not vary as greatly as the average time differences of the subtasks obtained during the Navigation Task. The variations in the standard deviation were large because not all glyphs can be found with the same speed.

Figure 21: Experimental results of the first group for the Exploration Task.

Experimental Results from the Second Group: The subjects achieved the best results with the 2-D mouse interaction technique (see Figure 22). On average, users performed approximately 66 % faster with 2-D mouse interaction than with the 3-D mouse, and only 15 % faster than with head tracking. Four out of five subjects were faster or nearly as fast with head tracking interaction as with the 3-D mouse. The standard deviations of the averaged completion times of the subtasks were higher than in the reference group. Furthermore, timing differed greatly among users. The second group also displayed good results using the 2-D mouse for this task. The difference was not much greater than the timing differences of the averaged subtasks performed during the Navigation Task and the first group's timing. The timing for head tracking differed only marginally from the head tracking timing of the group with more 3-D skills.

Figure 22: Results of the Exploration Task from the second group.

6.4 Manipulation Task

The last test scenario is intended to evaluate the influence of the visual representation (monoscopic / stereoscopic / shadow indication / stereoscopic with shadow indication) on the accuracy of a manipulation task. This is an elementary task in the process of defining VOIs. The subjects were required to move a 3-D arrow in a precise direction indicating the main direction of the subsequent deformation (cf. the same principle in Figure 14). The stereo method utilized throughout the experiment was circular polarization (see [Hof05]). Shadow indication was implemented by means of hard shadows (see [Wil78]). The biologists defined an average deviation of 5–10 degrees from the optimal orientation as the accuracy requirement for this task. The goal was to identify the forms of representation that meet this requirement.

Procedure: Each subject was required to solve the manipulation task, consisting of a set of eight randomized subtasks, with each of the representations (monoscopic, stereoscopic, shadow, and stereo with shadow). The sequence of the forms of representation was altered randomly. Before the start of a test, each subject received a short briefing on the task to be performed (approximately one minute). Accuracy was measured for this task as the angular deviation between the target direction and the adjusted direction.

Experimental Results from the First Group: As expected, the accuracy values of the stereoscopic approach were better than those of the monoscopic approach, but the differences between the two approaches are small (two degrees on average). The accuracy results with shadow indication were very good (on average twice as good as the monoscopic and stereoscopic approaches). A combination of stereo vision and shadow indication produced even better accuracy when the manipulation arrow was adjusted. The different users' accuracy values were virtually identical. The results of the Manipulation Task for the first group are presented in Figure 23. The difference between the optimal manipulation direction and the adjusted direction is used to determine accuracy; smaller angle values represent better accuracy. The last column, avg, stands for the average values of all participants of the group.

Figure 23: Evaluation results of the Manipulation Task for the first group.

Experimental Results from the Second Group: The second group's results are comparable to the first group's. The stereoscopic approach was more important to the second group (difference between stereo and mono: 12 degrees) than to the first group (difference between stereo and mono: 2 degrees). The use of shadows to adjust the manipulation angle (7 degrees deviation on average) is twice as effective as the stereoscopic approach and four times as effective as the monoscopic approach. The use of stereo techniques in addition to shadow indication produced higher accuracy among all the subjects in the second group. The average difference of one degree between the two approaches was marginal. On the other hand, the standard deviation with stereo and shadow is smaller. The results of the Manipulation Task from the second group are presented in Figure 24.

Figure 24: Evaluation results of the Manipulation Task for the second group.

6.5 Experimental Discussion

After the user experiments, the subjects were asked fortheir subjective impression. Both groups (biologistsand non-biologists) preferred the interaction techniqueof the standard mouse for the Navigation Task and theExploration Task. Head tracking was rated far bet-ter than the 3-D mouse. All the subjects, includingthose with slow completion times, saw great potentialin this approach. Head tracking was rated as the tech-nique best suited for spin navigation. The mouse wasrated best for the navigation components zoom androll. However, the measured times for a task for onesubject (intra-rater) and the average times among thesubjects (inter-rater) varied greatly. The mapping ofthe navigation metaphor demonstrated that a standard

35

40

45

50

55

60

(deg

ree)

Manipulation Task - Second Group

mono

shadow

stereo

stereo+shadow

5

10

15

20

25

30

aver

age

accu

racy

for a

sub

task

01 2 3 4 5 avg

user id

Figure 24: Evaluation results of the Manipulation Taskfor the second group.

device such as the mouse can perform better than thesix-degree of freedom approaches evaluated. Indeed,this is not always possible because of the inherent lim-itations of two dimensions on the mouse.The influence of stereoscopic viewing on the adjust-ment of the manipulation angle was more importantfor the second group (participants with little or no3-D experience) than for the experienced participants.The first group’s average deviation accuracy was 15degrees with the monoscopic approach. The secondgroup’s was on 27 degrees. Almost all subjects in thefirst group reported that they automatically used theshading and silhouette information (cf. importance ofdepth cues [WFG92]). This might be an indication thatthe results of monoscopic and stereoscopic accuracydid not differ very much in this group.A few participants reported that they did not use thearrow’s shadow as indirect 3-D information. Thismight be an explanation for these subjects’ compara-tively poor accuracy values. The application of stereotechniques and shadow indication was demonstrated


The application of stereo techniques and shadow indication was demonstrated to meet biologists' requirements for this task. None of the subjects in the second group was able to fulfill the predefined accuracy requirement using a stereo representation alone. Finally, while the application of stereo techniques demonstrably improves the adjustment of the manipulation angle, shadow indication combined with stereoscopic support does so considerably better and more reliably.
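As an illustration of how such a shadow cue can be produced, the sketch below casts an arrow's vertices along a directional light onto a ground plane (planar shadow projection; cf. shadow rendering [Wil78]). The plane, light direction, and vertex data are assumptions for illustration, not the study's actual renderer.

    # Minimal sketch of a projected shadow serving as an indirect depth cue.
    import numpy as np

    def project_onto_ground(vertices, light_dir, ground_y=0.0):
        """Project points along light_dir onto the plane y = ground_y."""
        vertices = np.asarray(vertices, dtype=float)
        light_dir = np.asarray(light_dir, dtype=float)
        # Ray v + t * light_dir meets the plane where v_y + t * l_y = ground_y.
        t = (ground_y - vertices[:, 1]) / light_dir[1]
        return vertices + t[:, None] * light_dir

    # Base and tip of a manipulation arrow; the offset between the two
    # projected points makes the adjusted angle visible on the ground plane.
    arrow = np.array([[0.0, 2.0, 0.0],    # base
                      [0.5, 1.0, 0.8]])   # tip
    print(project_onto_ground(arrow, light_dir=[0.3, -1.0, 0.2]))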

7 Conclusion and Future Directions

Now that use scenarios have been identified, VR is being used in process applications. Important steps have been taken toward the interactive analysis, visualization, and exploration of different data domains, especially for polygonal representation forms. Suitable VR techniques that provide biologists with added value exist for the different tasks in the model generation, visualization, exploration, and analysis pipeline. With its 360-degree laser projection system and support for collaborative interaction, the Elbe Dom impressively exemplifies the potential of immersive VR. Nevertheless, desktop VR (e.g., with passive stereo) and semi-immersive VR applications are just as well suited to single-user tasks. The application scenarios presented here will have to be evaluated and expanded in the future. There is still considerable need for research, especially in the field of efficient user input.

Acknowledgements

This research is being supported by the BMBF grants PTJ-BIO/0313821B and PTJ-BIO/0313821A. The authors would like to thank Rainer Pielot (IPK Gatersleben, Plant Bioinformatics Group) for providing NMR data, and Uta Siebert and Birgitt Zeike (IPK Gatersleben, Seed Development Group) for the fruitful discussions and for sample handling and digitizing.

Appendix

[Charts omitted: panels "Spin (exclusively)", "Zoom (exclusively)", "Roll (exclusively)", and "Complex Navigation Task"; each plots average accuracy for a subtask (%) against average completion time for a subtask (sec) for the 3-D mouse, mouse, and head tracking, including per-device averages.]

Figure 25: Initial evaluation of the accuracy and speed for the independent navigation components: (a) spin, (b) zoom, and (c) roll.


References

[ACK01] N. Amenta, S. Choi, and R. K. Kolluri. The power crust. In SMA '01: Proceedings of the Sixth ACM Symposium on Solid Modeling and Applications, pages 249–266. ACM, New York, NY, USA, 2001. ISBN 1-58113-366-9.

[AMSW08] S. Adler, R. Mecke, M. Schenk, and C. Wex. Hashbasierte Zerlegung von Tetraedernetzen. In D. Bratz, S. Bohn, and J. Hoffmann, editors, curac.08 Tagungsband: 7. Jahrestagung der Deutschen Gesellschaft für Computer- und Roboterassistierte Chirurgie e.V., pages 203–204, 2008. ISBN 978-3-00-025798-8.

[Anh03] J. Anhøj. Generic Design of Web-Based Clinical Databases. Journal of Medical Internet Research, 5(4):e27, Nov 2003. ISSN 1438-8871.

[BC00] D. Bourguignon and M.-P. Cani. Controlling Anisotropy in Mass-Spring Systems. In Computer Animation and Simulation 2000, pages 113–123. Springer-Verlag, 2000. ISBN 3-211-83392-7.

[Ber04] M. Bernd. Konzepte für den Einsatz von Virtueller und Erweiterter Realität zur interaktiven Wissensvermittlung. PhD thesis, Technische Universität Darmstadt, Fachbereich Informatik, 2004.

[BHP06] R. Bade, J. Haase, and B. Preim. Comparison of Fundamental Mesh Smoothing Algorithms for Medical Surface Models. In SimVis, pages 289–304, 2006. ISBN 3-936150-46-X.

[BL95] F. Biocca and M. R. Levy. Communication in the Age of Virtual Reality, chapter Communication Applications of Virtual Reality, pages 127–157. Lawrence Erlbaum Associates, Inc., Mahwah, NJ, USA, 1995. ISBN 0-8058-1550-3.

[BMH00] S. Bougourd, J. Marrison, and J. Haseloff. An aniline blue staining procedure for confocal microscopy and 3D imaging of normal and perturbed cellular phenotypes in mature Arabidopsis embryos. The Plant Journal, 24(4):543–550, 2000. ISSN 0960-7412.

[BO03] A. Belyaev and Y. Ohtake. A comparison of mesh smoothing methods. In Proceedings of the Israel-Korea Bi-National Conference on Geometric Modeling and Computer Graphics, pages 83–87, 2003.

[Bra65] R. N. Bracewell. The Fourier Transform and Its Applications. McGraw-Hill Publishing Company, New York, 1965. ISBN 9780070070127.

[Bro92] L. Gottesfeld Brown. A survey of image registration techniques. ACM Computing Surveys, 24(4):325–376, 1992. ISSN 0360-0300.

[BS08] F. Bollenbeck and U. Seiffert. Fast registration-based automatic segmentation of serial section images for high-resolution 3-D plant seed modeling. In ISBI 2008: 5th IEEE International Symposium on Biomedical Imaging: From Nano to Macro, pages 352–355. IEEE, 2008. ISBN 978-1-4244-2002-5.

[BSC06] G. Bilodeau, Y. Shu, and F. Cheriet. Multistage graph-based segmentation of thoracoscopic images. Computerized Medical Imaging and Graphics, 30(8):437–446, 2006. ISSN 0895-6111.

[BSS06] C. Bruß, M. Strickert, and U. Seiffert. Towards Automatic Segmentation of Serial High-Resolution Images. In Proceedings Workshop Bildverarbeitung für die Medizin, pages 126–130, 2006.

[BWSS09] F. Bollenbeck, D. Weier, W. Schoor, and U. Seiffert. From Individual Intensity Voxel Data to Inter-Individual Probabilistic Atlases of Biological Objects by an Interleaved Registration-Segmentation Approach. In A. K. Ranchordas and H. Araujo, editors, Proceedings of the 4th International Conference on Computer Vision Theory and Applications, volume 1, pages 125–129, February 2009. ISBN 978-989-8111-69-2.

[CKS97] V. Caselles, R. Kimmel, and G. Sapiro. Geodesic Active Contours. International Journal of Computer Vision, 22(1):61–79, 1997. ISSN 0920-5691.

[CR06] O. Cakmakci and J. Rolland. Head-worn displays: a review. Journal of Display Technology, 2(3):199–216, 2006. ISSN 1551-319X.

[DHL+98] Oliver Deussen, Pat Hanrahan, Bernd Lintermann, Radomir Mech, Matt Pharr, and Przemyslaw Prusinkiewicz. Realistic Modeling and Rendering of Plant Ecosystems. In Proceedings of the 25th Annual Conference on Computer Graphics and Interactive Techniques, pages 275–286, 1998. ISBN 0-89791-999-8.

[Far88] G. Farin. Curves and Surfaces for Computer Aided Geometric Design: A Practical Guide. Academic Press Professional, Inc., San Diego, CA, USA, 1988. ISBN 0-12-249050-9.

[FUS+98] A. X. Falcao, J. K. Udupa, S. Samarasekera, S. Sharma, B. E. Hirsch, and R. de Alencar Lotufo. User-steered image segmentation paradigms: live wire and live lane. Graphical Models and Image Processing, 60(4):233–260, 1998. ISSN 1077-3169.

[GDB+07] S. Gubatz, V. J. Dercksen, C. Bruß, W. Weschke, and U. Wobus. Analysis of barley (Hordeum vulgare) grain development using three-dimensional digital models. The Plant Journal, 52(4):779–790, 2007. ISSN 0960-7412.

[GH97] M. Garland and P. S. Heckbert. Surface simplification using quadric error metrics. In Proceedings of SIGGRAPH, pages 209–216, Los Angeles, 1997. ISBN 0-89791-896-7.

[Gli06] S. M. Glidewell. NMR imaging of developing barley grains. Journal of Cereal Science, 43(1):70–78, 2006. ISSN 0733-5210.

[Hah05] H. K. Hahn. Morphological Volumetry. PhD thesis, Universität Bremen, 2005.

[HHF07] A. Hopp, S. Havemann, and D. W. Fellner. A Single Chip DLP Projector for Stereoscopic Images of High Color Quality and Resolution. In Virtual Environments 2007 - IPT-EGVE 2007 - Short Papers and Posters, pages 11–26, 2007.

[Hof05] H. Hoffmann. VR Concepts and Technologies. Technical report, Information Society Technologies (IST), Intuition Virtual Reality, 2005.

[HS85] R. M. Haralick and L. G. Shapiro. Image Segmentation Techniques. Computer Vision, Graphics, and Image Processing, 29(1):100–132, 1985. ISSN 0734-189X.

[Hul90] J. Hultquist. Graphics Gems, chapter A Virtual Trackball, pages 462–463. Academic Press Professional, Inc., San Diego, CA, USA, 1990. ISBN 0-12-286169-5.

[ISNC03] L. Ibanez, W. Schroeder, L. Ng, and J. Cates. The ITK Software Guide. Kitware, Inc., 1st edition, 2003.

[JLM+08] K. Jaqaman, D. Loerke, M. Mettlen, H. Kuwata, S. Grinstein, S. L. Schmid, and G. Danuser. Robust single-particle tracking in live-cell time-lapse sequences. Nature Methods, 5(8):695–702, 2008. ISSN 1548-7091.

[Kan05] H. W. Kang. G-wire: A livewire segmentation algorithm based on a generalized graph formulation. Pattern Recognition Letters, 26(13):2042–2051, 2005. ISSN 0167-8655.

[Kep75] E. Keppel. Approximating Complex Surfaces by Triangulation of Contour Lines. IBM Journal of Research and Development, 19(1):2–11, 1975. ISSN 0018-8646.

[KHKB07] A. Kuß, H.-C. Hege, S. Krofczik, and J. Borner. Pipeline for the Creation of Surface-based Averaged Brain Atlases. In Proc. of WSCG 2007 (Full Papers), 15th Int. Conf. in Central Europe on Computer Graphics, Visualization and Computer Vision, Plzen, Czech Republic, volume 1, pages 17–24, 2007. ISBN 978-80-86943-98-5.

[Lan07] E. Lantz. A survey of large-scale immersive displays. In C. Cruz-Neira and D. Reiners, editors, EDT '07: Proceedings of the 2007 Workshop on Emerging Displays Technologies, pages 1–7. ACM, New York, NY, USA, 2007. ISBN 978-1-59593-669-1.

[LC87] W. E. Lorensen and H. E. Cline. Marching cubes: A high resolution 3D surface construction algorithm. In SIGGRAPH '87: Proceedings of the 14th Annual Conference on Computer Graphics and Interactive Techniques, pages 163–169. ACM Press, New York, NY, USA, 1987. ISBN 0-89791-227-6.

[LD98] Bernd Lintermann and Oliver Deussen. A Modelling Method and User Interface for Creating Plants. Computer Graphics Forum, 17(1):73–82, 1998. ISSN 0167-7055.

[LLVT03] T. Lewiner, H. Lopes, A. W. Vieira, and G. Tavares. Efficient Implementation of Marching Cubes' Cases with Topological Guarantees. Journal of Graphics Tools, 8(2):1–15, 2003. ISSN 1086-7651.

[LM01] L. Lucchese and S. K. Mitra. Color image segmentation: a state-of-the-art survey. In Proceedings of the Indian National Science Academy, volume 67, pages 207–222, 2001. ISSN 0370-0046.

[LSH07] B. Lloyd, G. Szekely, and M. Harders. Identification of Spring Parameters for Deformable Object Simulation. IEEE Transactions on Visualization and Computer Graphics, 13(5):1081–1094, 2007. ISSN 1077-2626.

[MB98] E. N. Mortensen and W. A. Barrett. Interactive Segmentation with Intelligent Scissors. Graphical Models and Image Processing, 60(5):349–384, September 1998.

[MBMB06] J. Marker, I. Braude, K. Museth, and D. Breen. Contour-based surface reconstruction using implicit curve fitting, and distance field filtering and interpolation. In Proceedings of the International Workshop on Volume Graphics, pages 95–102, 2006.

[MBST06] R. Mecke, D. Berndt, W. Schoor, and E. Trostmann. Generation of texturized 3-D models with high resolution using optical 3-D metrology. In Proceedings of the 7th Conference on Optical 3D Measurement Techniques: Applications in GIS, mapping, manufacturing, quality control, robotics, navigation, mobile mapping, medical imaging, animation, VR generation, volume II, pages 3–12, 2006. ISBN 3-9501492-2-8.

[MJM+06] T. Munzner, C. Johnson, R. Moorhead, H. Pfister, P. Rheingans, and T. S. Yoo. NIH-NSF Visualization Research Challenges Report Summary. IEEE Computer Graphics and Applications, 26(2):20–24, 2006. ISSN 0272-1716.

[Mod04] J. Modersitzki. Numerical Methods for Image Registration. Oxford University Press, 2004. ISBN 0198528418.

[MV98] J. B. A. Maintz and M. A. Viergever. A survey of medical image registration. Medical Image Analysis, 2(1):1–36, 1998. ISSN 1361-8415.

[PH04] W. Pereanu and V. Hartenstein. Digital three-dimensional models of Drosophila development. Current Opinion in Genetics & Development, 14(4):382–391, 2004. ISSN 0959-437X.

[PSM+08] R. Pielot, U. Seiffert, B. Manz, D. Weier, F. Volke, and W. Weschke. 4D Warping for Analysing Morphological Changes in Seed Development of Barley Grains. In A. K. Ranchordas and H. Araujo, editors, VISAPP (1), pages 335–340. INSTICC - Institute for Systems and Technologies of Information, Control and Communication, 2008. ISBN 978-989-8111-21-0.

[RBB+94] M. Ringwald, R. A. Baldock, J. Bard, M. H. Kaufman, J. T. Eppig, J. E. Richardson, J. H. Nadeau, and D. Davidson. A Database for Mouse Development. Science, 265(5181):2033–2034, 1994. ISSN 0036-8075.

[SBH+08] W. Schoor, F. Bollenbeck, M. Hofmann, R. Mecke, U. Seiffert, and B. Preim. Automatic Zoom and Pseudo Haptics to Support Semiautomatic Segmentation Tasks. In V. Skala, editor, 16th WSCG 2008, WSCG 2008 Full Papers Proceedings, pages 81–88. WSCG, University of West Bohemia, 2008. ISBN 978-80-86943-15-2.

[SHC04] M. S. Su, W. L. Hwang, and K. Y. Cheng. Analysis on multiresolution mosaic images. IEEE Transactions on Image Processing, 13(7):952–959, 2004. ISSN 1057-7149.

[SMH+07] W. Schoor, S. Masik, M. Hofmann, R. Mecke, and G. Muller. eLBEDoM: 360 Degree Full Immersive Laser Projection System. In Virtual Environments 2007 - IPT-EGVE 2007 - Short Papers and Posters, pages 15–20, 2007. ISBN 978-3-905673-64-7. ISSN 1727-530X.

[SMM+08] W. Schoor, S. Masik, R. Mecke, U. Seiffert, and M. Schenk. VR Based Visualization and Exploration of Barley Grain Models with the Immersive Laser Projection System - Elbe Dom. In S. Richir and E. Klinger, editors, 10th Virtual Reality International Conference, Laval, France, pages 217–224, April 2008. ISBN 2951573073.

[SSB08] R. Schmidt, K. Singh, and R. Balakrishnan. Sketching and Composing Widgets for 3D Manipulation. Computer Graphics Forum, 27(2):301–310, 2008. ISSN 0167-7055. Proceedings of Eurographics 2008.

[SZL92] W. J. Schroeder, J. A. Zarge, and W. E. Lorensen. Decimation of triangle meshes. In SIGGRAPH '92: Proceedings of the 19th Annual Conference on Computer Graphics and Interactive Techniques, pages 65–70. ACM, New York, NY, USA, 1992. ISBN 0-89791-479-1.

[TJW+03] A. Tsai, A. Yezzi Jr., W. Wells, C. Tempany, D. Tucker, A. Fan, E. Grimson, and A. Willsky. A shape-based approach to the segmentation of medical imagery using level sets. IEEE Transactions on Medical Imaging, 22(2):137–154, 2003. ISSN 0278-0062.

[TWS+08] J. Thiel, D. Weier, N. Sreenivasulu, M. Strickert, N. Weichert, M. Melzer, T. Czauderna, U. Wobus, H. Weber, and W. Weschke. Different Hormonal Regulation of Cellular Differentiation and Function in Nucellar Projection and Endosperm Transfer Cells: A Microdissection-Based Transcriptome Study of Young Barley Grains. Plant Physiology, 148(3):1436–1452, 2008. ISSN 0032-0889.

[VMdH+03] P. H. B. De Visser, L. F. M. Marcelis, G. W. A. M. Van der Heijden, G. C. Angenent, J. B. Evers, P. C. Struik, and J. Vos. 3D digitization and modeling of flower mutants of Arabidopsis thaliana. In International Symposium on Plant Growth Modeling, Simulation, Visualization and Their Applications, pages 218–226. Tsinghua University Press, 2003. ISBN 7-302-07140-3.

[WFG92] L. C. Wanger, J. A. Ferwerda, and D. P. Greenberg. Perceiving Spatial Relationships in Computer-Generated Images. IEEE Computer Graphics and Applications, 12(3):44–51, 54–58, 1992. ISSN 0272-1716.

[Wil78] L. Williams. Casting curved shadows on curved surfaces. SIGGRAPH Computer Graphics, 12(3):270–274, 1978. ISSN 0097-8930.

[Wit97] A. Witkin. An introduction to physically based modeling: Particle system dynamics. ACM SIGGRAPH Course Notes, 1997.

[XZZ06] F. Xiong, X. Zhao, and Y. Zhang. CIGR Handbook of Agricultural Engineering, volume VI: Information Technology, Chapter 6: Management and Decision Support Systems, chapter 3-D Animation and Virtual Reality, pages 425–434. American Society of Agricultural and Biological Engineers (ASABE), 2006.

[YOB02] H. Yagou, Y. Ohtake, and A. Belyaev. Mesh smoothing via mean and median filtering applied to face normals. In GMP '02: Proceedings of Geometric Modeling and Processing - Theory and Applications, pages 124–131. IEEE Computer Society, Washington, DC, USA, 2002. ISBN 0-7695-1674-2.

Citation
Wolfram Schoor, Felix Bollenbeck, Thomas Seidl, Diana Weier, Winfriede Weschke, Bernhard Preim, Udo Seiffert, and Rüdiger Mecke, VR Based Visualization and Exploration of Plant Biological Data, Journal of Virtual Reality and Broadcasting, 6(2009), no. 8, January 2010, urn:nbn:de:0009-6-22579, ISSN 1860-2037.
