
    International Journal on Computational Sciences & Applications (IJCSA) Vol.3, No.4, August 2013

DOI:10.5121/ijcsa.2013.3404

Automatic 3D View Generation from a Single 2D Image for both Indoor and Outdoor Scenes

Geetha Kiran A¹ and Murali S²

¹Malnad College of Engineering, Hassan, Karnataka, India

    [email protected]

²Maharaja Institute of Technology, Mysore, Karnataka, India

    [email protected]

    ABSTRACT

Image-based video generation has recently emerged as an interesting problem in the field of robotics. This paper focuses on automatic video generation for both indoor and outdoor scenes. For indoor scenes, 3D view generation relies mainly on orthogonal planes; for outdoor scenes, it relies on the vanishing point. The algorithm infers frontier information directly from the images using a geometric context-based segmentation scheme that exploits the natural scene structure. The floor is the major cue for obtaining the termination point when generating video of indoor scenes, while the vanishing point plays that role for outdoor scenes. In both cases, we create the navigation by repeatedly cropping the image to the desired size up to the termination point. Our approach is fully automatic, requiring no human intervention, and finds applications in assisting autonomous cars, virtual walkthroughs of images of ancient and architectural sites, and forensics. Qualitative and quantitative experiments on nearly 250 images in different scenarios show that the proposed algorithms are efficient and accurate.

    KEYWORDS

Floor segmentation, Canny edge detector, Hough transform, vanishing point, video generation

1. INTRODUCTION

Video generation from a single image is inherently challenging. In imaging devices there is a trade-off between snapshots and video because of limited storage capacity: video clips need far more storage than still images. This motivated us to generate video from a single 2D image rather than store video clips. Unlike robots, humans analyze a variety of single-image cues and act accordingly; this work is an attempt to make robots analyze a single 2D image in a similar way. The task of generating video from photographs is receiving increased attention in many applications. We address the key case where neither the dimensions of the real-world object nor their measurements in the 2D plane are known. Generating video under these conditions is difficult because of the perspective view. Alternatively, video can be generated once the ground is known, that is, by floor segmentation in the case of indoor scenes. In the absence of accurate measurements, we exploit geometric characteristics (windows, doors) along with color variations; such relationships are plentiful in man-made structures and often provide sufficient information for our work. In the case of road scenes, video can be generated once the vanishing point is known. We describe a unified framework for navigating through a single 2D image in less time. The input image is easy to acquire, since no calibration target is needed, or images can simply be downloaded from the internet. The work is well suited for navigation on personal digital assistants (PDAs) and personal computers, including cases where buildings have been destroyed and only archive images are available. It can also be applied in forensics and used to assist autonomous cars by generating video from a single 2D image and assessing in advance how far the road ahead is straight. If there is a suspected person or item in the path of the journey, it can be detected beforehand and necessary action taken.

In the next section, related work is reviewed. Section 3 describes floor segmentation from single-view scene constraints along with the computation of the length of the floor. Section 4 describes finding the vanishing point from single-view scene constraints and computing the distance from the ground-truth position to the vanishing point. This is followed by the method of 3D view generation in Section 5. Finally, experimental results are presented in Section 6, followed by the conclusion in Section 7.

2. RELATED WORK

Some methods have been developed for segmentation from a single image; the few that are directly relevant to this work are highlighted here. Erick Delage et al. [18] used a graph-based segmentation algorithm to partition the image, assigning a unique identifier to each partition output by the segmentation algorithm. Erick Delage et al. [20] built a probabilistic model that incorporates a number of local image features and reasons about the chroma of the floor in each column of the image. Ma Ling et al. [22] segmented the floor region automatically by clustering analysis and also proposed a PCA-based improved version of the algorithm to remove the negative effect of shadows on the segmented results. Xue-Nan Cui et al. [23] proposed detecting and segmenting the floor by computing plane normals from motion fields in image sequences. The geometric characteristic that objects stand perpendicular to the ground floor can be used to find the floor in 2D images. Surfaces often have fairly uniform texture and color, so image segmentation algorithms provide another set of useful features applicable to many tasks, including video generation.

Some of the methods developed for detecting the vanishing point in a single image are highlighted here. Techniques for estimating vanishing points can be roughly divided into two categories: those requiring knowledge of the internal camera parameters and those operating in an uncalibrated setting. A large literature on automatic detection of vanishing points followed Barnard [1], who first introduced the Gaussian sphere as an accumulation space, suggesting that the unbounded image space can be mapped onto the bounded surface of the sphere. Tuytelaars et al. [2] mapped points into different bounded subspaces according to their coordinates. Rother [3] pointed out that these methods do not preserve the original distances between lines and points; in his method, the intersections of all pairs of non-collinear lines are used as accumulator cells instead of a parameter space. But such accumulator cells are difficult to index, and searching them for the maximum is slow. Caprile et al. [4] simply compute a weighted mean of the pairwise intersections. Several works [5-7] use the vanishing point as a global constraint for the road: they compute the texture orientation at each pixel, select the effective vote-points, and locate the vanishing point with a voting procedure. Hui Kong et al. [8-11] proposed an adaptive soft voting scheme based on a local voting region using high-confidence voters.

However, there are some redundancies in the voting process and in the accuracy of updating the vanishing point. Murali S et al. [12,13] detect edges with the Canny edge detector and then apply the Hough transform; the first N cells with maximum votes in the Hough space are used to compute the vanishing point. We use a similar framework [12,13] in our work to decide the vanishing point. Very few researchers have proposed methods for navigating through a single 2D image. Shuqiang Jiang et al. [14] proposed a method to automatically transform static images into dynamic video clips on mobile devices. Xian-Sheng Hua et al. [15] developed a system named Photo2Video to convert a photographic series into a video by simulating camera motions: a camera motion pattern (both the key-frame sequencing scheme and the trajectory/speed control strategy) is selected for each photograph to generate a corresponding motion photograph clip. A region-based method to generate a multiview video from a conventional 2-dimensional video, using color information to segment an image, was proposed by Yun-Ki Baek et al. [16]. Na-Eun Yang et al. [17] proposed a method to generate a depth map for 2D-to-3D conversion using a local depth hypothesis and grouped regions. The various methods of converting 2D images to stereoscopic 3D share the fundamental principle of horizontally shifting pixels to create a new image, so that there are horizontal disparities between the original image and the new version. The extent of the horizontal shift depends on the distance from the stereoscopic camera to the object feature that the pixel represents; it also depends on the inter-lens separation, which determines the new image viewpoint.

The floor-segmentation methods proposed by these authors are time-consuming and make assumptions specific to their applications; those concerns are not important to our work, which led us to propose a simple, faster method for floor segmentation. From the segmented image, the length of the floor can be computed by a distance method, which supports the video generation. Likewise, the existing vanishing-point detection methods make application-specific assumptions, so we adopt the method of [12,13], which decides the vanishing point in less time. Using the vanishing point, the distance from the ground-truth position to the vanishing point can be computed, which supports navigating through a single road image.

3. FLOOR SEGMENTATION

The goal is to segment the floor in a given single 2D indoor image. The crucial part of the work is detecting the pixels that belong to the floor. Methods exist for floor segmentation with known camera parameters; the requirement here is to segment the floor without any knowledge of the camera parameters. Geometric relationships can still be found, for example using color. The primary steps are: convert the given color image to gray, convert the gray image to a binary image by computing a global threshold, and finally segment the floor by applying dilation and erosion.
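The paper gives no code for these steps; a minimal NumPy sketch, assuming Otsu's method for the global threshold (as the caption of Figure 1 suggests) and the usual luma weights for the gray conversion, might look like:

```python
import numpy as np

def to_gray(rgb):
    """RGB to gray using the common luma weights (an assumption here)."""
    return (0.299 * rgb[..., 0] + 0.587 * rgb[..., 1]
            + 0.114 * rgb[..., 2]).astype(np.uint8)

def otsu_threshold(gray):
    """Global threshold by Otsu's method: pick the level that maximizes
    the between-class variance of the two resulting pixel groups."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
    total = gray.size
    cum = np.cumsum(hist)                        # pixel count up to each level
    cum_mean = np.cumsum(hist * np.arange(256))  # intensity mass up to each level
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = cum[t - 1], total - cum[t - 1]
        if w0 == 0 or w1 == 0:
            continue
        m0 = cum_mean[t - 1] / w0
        m1 = (cum_mean[255] - cum_mean[t - 1]) / w1
        between_var = w0 * w1 * (m0 - m1) ** 2
        if between_var > best_var:
            best_var, best_t = between_var, t
    return best_t  # pixels >= best_t are treated as foreground
```

The binary image is then simply `gray >= otsu_threshold(gray)`, on which the morphological operations below operate.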

    3.1. Segmentation

The floor path [19] is the major cue for generating video from a single 2D image of an indoor scene. To separate the floor from the remaining parts of the indoor image, dilation and erosion with structuring elements are used.

Assume E to be a Euclidean space or an integer grid, A a binary image in E, and B a structuring element.

The dilation of A by B is defined by:

A ⊕ B = ⋃_{b ∈ B} A_b    (1)

where A_b denotes the translation of A by the vector b.


The erosion of A by B is given by:

A ⊖ B = { z ∈ E | B_z ⊆ A }    (2)

where B_z denotes the translation of B by the vector z.

The structuring element probes and expands the shapes contained in the input image, yielding the floor segmentation (Figure 1).

    (a) (b) (c) (d)

Figure 1. (a) Original image (b) Gray image (c) Binary image using Otsu's method (d) Segmented image
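Equations (1) and (2) can be realized directly on binary arrays. The sketch below is an illustrative NumPy implementation (the paper names no library); the zero-padded `shift` helper and the structuring element given as a list of offsets are our own conventions:

```python
import numpy as np

def shift(A, dy, dx):
    """Translate binary image A by (dy, dx), filling vacated cells with 0."""
    H, W = A.shape
    out = np.zeros_like(A)
    out[max(dy, 0):H + min(dy, 0), max(dx, 0):W + min(dx, 0)] = \
        A[max(-dy, 0):H + min(-dy, 0), max(-dx, 0):W + min(-dx, 0)]
    return out

def dilate(A, B):
    """Eq. (1): A dilated by B is the union of translates of A by each b in B."""
    out = np.zeros_like(A)
    for dy, dx in B:
        out |= shift(A, dy, dx)
    return out

def erode(A, B):
    """Eq. (2): a pixel z survives erosion iff B translated to z fits inside A."""
    out = np.ones_like(A)
    for dy, dx in B:
        out &= shift(A, -dy, -dx)  # A(z + b) for every offset b
    return out
```

For example, with a cross-shaped structuring element `B = [(0, 0), (0, 1), (0, -1), (1, 0), (-1, 0)]`, dilating and then eroding a binary floor mask closes small gaps while preserving the overall floor region.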

The segmented image (Figure 1(d)) is used to find the length of the floor: the Euclidean distance between the start and end of the white pixels (row-wise) in the floor-segmented image. This floor length directly determines the number of frames to generate, generally in a 1:2 ratio to the length, which can be varied as required. These frames are incorporated into the video generation.
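A sketch of this computation follows, under two readings that are ours rather than the paper's: that the row-wise distance is taken between the centroids of the first and last floor rows, and that the 1:2 rule means two frames per unit of floor length:

```python
import numpy as np

def floor_length(mask):
    """Euclidean distance between the first and last rows of the binary
    floor mask that contain any white (floor) pixel."""
    rows = np.where(mask.any(axis=1))[0]
    if rows.size == 0:
        return 0.0
    top, bottom = rows[0], rows[-1]
    # column centroid of the floor pixels in each extreme row (an assumption)
    c_top = mask[top].nonzero()[0].mean()
    c_bot = mask[bottom].nonzero()[0].mean()
    return float(np.hypot(bottom - top, c_bot - c_top))

def frame_count(length, frames_per_unit=2):
    """The paper's 1:2 rule, read here as two frames per unit of length."""
    return int(round(frames_per_unit * length))
```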

4. VANISHING POINT DETECTION

The images considered for modeling are perspective images. In a perspective image, lines that are parallel in world space appear to meet at a point called the vanishing point. Vanishing points provide a strong geometric cue for inferring the 3-dimensional structure of a scene in almost every kind of man-made environment. Methods exist for detecting vanishing points with known camera parameters as well as in an uncalibrated setting. The method described in this section requires no knowledge of the camera parameters and proceeds directly from geometric relationships. The steps are: detect edges with the Canny edge detector, identify the straight lines depending on the threshold fixed for the Hough transform, and compute the vanishing point from the intersection points of the lines. These steps are explained in the following subsections.

    4.1. Line Determination

The given color image is converted to gray. Lines are edges of the objects and environment present in an image; these lines may or may not contribute to the actual vanishing point. The lines are obtained by applying the Canny edge detection algorithm. The versatility of the Canny algorithm lies in its adjustable parameters, such as the size of the Gaussian filter and the thresholds, which allow it to detect edges of differing characteristics.


The input image (Figure 2(a)) is converted to a gray image (Figure 2(b)), and the edges are detected by applying the Canny edge detection algorithm. A set of white pixels containing the edges is obtained and the rest of the image content is removed. The Canny edge image (Figure 2(c)) contains pixels contributing to straight lines as well as other miscellaneous edges. Considering all the edge pixels contributing to straight lines, the Hough transform is applied to the input image and the result (Figure 2(d)) is obtained as desired.

    (a) (b) (c) (d)

    Figure 2. (a) Original Image (b) Gray Image (c) Edge detection (d) Hough transformation

The Hough transform detects a large number of straight lines, depending on the threshold fixed for it. Points belonging to the same straight line in the image plane have corresponding sinusoids that intersect in a single point in the polar space (Figure 2(d)). The number of straight lines must be limited because several straight lines in the image may intersect each other at different points in the image plane, in which case more than one peak value is obtained in the polar space. By selecting a number of peak values (in descending order of their votes) equal to the number N of straight lines present in the image, we discard unwanted lines that do not contribute to the real vanishing point. This reduces the computational complexity of vanishing point detection to only the straight lines contributing to the possible vanishing point.
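The voting and peak-selection just described can be sketched as a small NumPy Hough accumulator; the discretization chosen here (1-degree theta bins, rounded rho) is an assumption for illustration, not the paper's:

```python
import numpy as np

def hough_lines(edge, n_peaks, n_theta=180):
    """Vote every edge pixel into (rho, theta) space and return the n_peaks
    strongest cells, i.e. the parameters of the longest straight lines."""
    H, W = edge.shape
    thetas = np.deg2rad(np.arange(n_theta))   # 0 .. 179 degrees
    diag = int(np.ceil(np.hypot(H, W)))       # largest possible |rho|
    acc = np.zeros((2 * diag + 1, n_theta), dtype=np.int64)
    t_idx = np.arange(n_theta)
    for y, x in zip(*np.nonzero(edge)):
        # normal form of a line: rho = x*cos(theta) + y*sin(theta)
        rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        acc[rhos + diag, t_idx] += 1
    # peaks in descending order of votes
    flat = np.argsort(acc.ravel())[::-1][:n_peaks]
    r_idx, th_idx = np.unravel_index(flat, acc.shape)
    return [(int(r) - diag, float(thetas[t])) for r, t in zip(r_idx, th_idx)]
```

A vertical edge column, for instance, produces its strongest accumulator cell at theta near 0 with rho equal to the column coordinate.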

    4.2. Intersection Point of any Two Lines

The lines drawn by the Hough transform lie on edges of the objects and environment in the image; they may or may not contribute to the actual vanishing point. Depending on the number of lines present in the image, the number of peaks in the Hough space is fixed in descending order of their votes. Each peak in the Hough space signifies the existence of a longer edge in the image than any other point in the Hough space, which is why a peak forms. These voted peaks of the Hough space are used to find the intersections between pairs of lines and hence the vanishing point. Finding the intersection points for all combinations of lines, taken two at a time, yields one (x, y) pair per combination. The number of (x, y) pairs obtained over all combinations is given by the relation

C(N, 2) = N(N − 1) / 2    (3)

where N is the number of peaks selected. These (x, y) pairs are the probable vanishing points, and since all of them lie within the vicinity of the actual vanishing point, we take their mean (Figure 3(a), blue) as the vanishing point. In our work the vanishing point is used to find the distance from the ground-truth position to the detected vanishing point position. This distance is used in the next section to set the termination point for the navigation. The road distance identified directly determines the number of frames to generate, generally in a 1:2 ratio to the length, which can be varied as required.
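The pairwise-intersection-and-mean computation can be sketched as follows, with lines in the (rho, theta) normal form produced by the Hough transform; `intersection` and `vanishing_point` are illustrative names of our own:

```python
import numpy as np
from itertools import combinations

def intersection(l1, l2):
    """Intersection of two lines given in normal form rho = x*cos(t) + y*sin(t)."""
    (r1, t1), (r2, t2) = l1, l2
    A = np.array([[np.cos(t1), np.sin(t1)],
                  [np.cos(t2), np.sin(t2)]])
    if abs(np.linalg.det(A)) < 1e-9:   # (near-)parallel lines: no intersection
        return None
    return np.linalg.solve(A, np.array([r1, r2]))

def vanishing_point(lines):
    """Mean of all C(N, 2) pairwise intersections, as counted by eq. (3)."""
    pts = []
    for a, b in combinations(lines, 2):
        p = intersection(a, b)
        if p is not None:
            pts.append(p)
    return np.mean(pts, axis=0)
```

When the selected peaks all pass near one scene point, every pairwise intersection falls in its vicinity and the mean gives a stable estimate.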

    (a) (b)

Figure 3. (a) Lines detected (green), probable vanishing points (blue) (b) Vanishing point (pink)

5. 3D VIEW GENERATION

Automatic 3D view generation from a single image is inherently challenging. The proposed method generates the 3D view (Algorithm) for both indoor and outdoor scenes.

Algorithm

Step 1: Read the input image

Step 2: Compute the termination point using
    i) floor segmentation for indoor scenes
    ii) vanishing point for outdoor scenes

Step 3: Generate the frames based on the predefined rectangle

Step 4: Navigate through the single 2D image up to the termination point
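The crop-and-resize loop of Steps 3 and 4 can be sketched as below; the centred shrinking rectangle and the nearest-neighbour resize are simplifying assumptions of this sketch (the paper crops toward the computed termination point):

```python
import numpy as np

def resize_nearest(img, out_h, out_w):
    """Nearest-neighbour resize; enough for a sketch with no dependencies."""
    H, W = img.shape[:2]
    ys = np.arange(out_h) * H // out_h
    xs = np.arange(out_w) * W // out_w
    return img[ys][:, xs]

def generate_frames(img, n_frames, final_crop=0.4):
    """Simulate forward motion: crop an ever smaller centred rectangle and
    resize each crop back to the original frame size."""
    H, W = img.shape[:2]
    frames = [img]                          # the input image is the first frame
    for k in range(1, n_frames):
        scale = 1.0 - (1.0 - final_crop) * k / (n_frames - 1)
        h, w = max(1, int(H * scale)), max(1, int(W * scale))
        y0, x0 = (H - h) // 2, (W - w) // 2
        crop = img[y0:y0 + h, x0:x0 + w]
        frames.append(resize_nearest(crop, H, W))
    return frames
```

In the paper's setting, `n_frames` would come from the floor length (indoor) or the vanishing-point distance (outdoor), and the crop rectangle would be anchored on the termination point rather than the image centre.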

    5.1 Indoor Scenes

The information obtained from the floor segmentation is used to generate the 3D view. The inputs for the video generation are the single 2D image, the termination point computed from the distance calculated using floor segmentation, and the size of the rectangle used for cropping. The input image is taken as the first frame, and the image is cropped to the size of the predefined rectangle. The rectangle must be clearly defined, as it is the frame used for further 3D view generation; the floor segmentation plays a vital role in detecting the termination point at which frame generation stops. The cropped image is then resized to the original image size and stored in an array of images. An appropriate set of key-frames (Figure 4) is determined for each image based on the distance computed from the floor segmentation.

Figure 4. (a) 20th frame (b) 40th frame (c) 60th frame (d) 80th frame (e) 100th frame (f) 120th frame

    5.2 Outdoor Scenes (Road Scenes)

The information obtained in Section 4 is used to navigate through a single road image. The inputs for the navigation are the single 2D image and the termination point computed from the distance between the ground-truth position and the detected vanishing point. The frames for navigation are generated by cropping the image up to the computed distance. The input image is taken as the first frame, and the image is cropped to the size of the predefined rectangle. The cropped image is then resized to the original image size and stored in an array of images. An appropriate set of key-frames (Figure 5) is determined for each image based on the distance computed from the vanishing point.

Figure 5. (a) 10th frame (b) 20th frame (c) 40th frame (d) 70th frame (e) 100th frame (f) 180th frame

6. EXPERIMENTAL RESULTS

The algorithm was applied to a test set of 250 images obtained from different buildings, all fairly different from each other in interior decoration. Since the indoor images contained a diverse range of orthogonal geometries (wall posters, doors, windows, boxes, cabinets, etc.), we observe that the results presented are indicative of the algorithm's performance on interiors of new buildings and scenes. We also evaluated the algorithm (Figure 6(a)) by manually marking the floor path in a set of images and comparing it with the floor path generated by our method; the overall accuracy obtained is 91.46%. For outdoor scenes, taken from real-road images in different scenarios that mainly contain a single vanishing point, the results presented are likewise indicative of the algorithm's performance. The images used in the experiments were downloaded from the internet, with a few captured by ourselves. The steps are: detect edges with the Canny edge detector, identify the straight lines, and compute the vanishing point from the intersection points of the lines, all of which lie within the vicinity of the actual vanishing point. Based on the ground-truth position, we compute the distance from the ground-truth position to the computed vanishing point. We also evaluated the algorithm (Figure 6(b)) by manually measuring the distance from the ground-truth position to the vanishing point and comparing it with the distance generated by our method; the overall accuracy obtained is 97%.


    (a) (b)

Figure 6. (a) Comparison of the floor length computed manually and by our method (b) Comparison of vanishing point accuracy computed manually and by our method

The first, intermediate, and final frames generated by the methods for both indoor and road scenes (Figure 7), after deducing the termination point, can be clearly viewed. The intermediate and final frames reveal finer details that can be used in various applications, including virtual walkthroughs of images of ancient sites, forensics, architectural sites, and automated vehicles.

7. CONCLUSION

An algorithm for automatic video generation from a single 2D image has been proposed and tested on both indoor and outdoor images. This paper provides a solution for transforming a static single 2D image into a video clip, which not only lets users enjoy the important details of the image but also provides a vivid viewing experience. The experimental results show that the algorithm performs well on a range of indoor and outdoor scenes; the work was tested on nearly 250 images in difficult scenarios. Future work could extend the method to produce videos including side views, working at the planar level, which requires maintaining the perspective view of the scene; it could also investigate more reliable region-of-interest (ROI) detection techniques. Even finer details can be obtained from the key frames used in video generation. The work is done with a view to assisting automated vehicles and robots at low cost.


REFERENCES

[1] Barnard S. T., (1983) Interpreting perspective images; Artificial Intelligence, 21, pp. 435-462.
[2] Tuytelaars T., Van Gool L., Proesmans M., Moons T., (1998) The cascaded Hough transform as an aid in aerial image interpretation; in: Proc. International Conference on Computer Vision, pp. 67-72.
[3] Rother C., (2002) A new approach for vanishing point detection in architectural environments; Image and Vision Computing, 20, pp. 647-655.
[4] B. Caprile and V. Torre, (1990) Using vanishing points for camera calibration; International Journal of Computer Vision, 4, pp. 127-139.
[5] Nicholas Simond, Patrick Rives, (2003) Homography from a vanishing point in urban scenes; in Proceedings of IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 1005-1010.
[6] Christopher Rasmussen, (2004) Grouping dominant orientations for ill-structured road; in Proceedings of IEEE International Conference on Computer Vision and Pattern Recognition, pp. 470-477.
[7] Christopher Rasmussen, Thommen Korah, (2005) On-vehicle and aerial texture analysis for vision-based desert road; in Proceedings of International Workshop on Computer Vision and Pattern Recognition, pp. 66-71.
[8] Hui Kong, Jean-Yves Audibert, Jean Ponce, (2009) Vanishing point detection for road detection; in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, pp. 96-103.
[9] Hui Kong, Jean-Yves Audibert, Jean Ponce, (2010) General road detection from a single image; IEEE Transactions on Image Processing, Vol. 19, No. 8, pp. 2211-2220.
[10] M. Nieto and L. Salgado, (2007) Real-time vanishing point estimation in road sequences using adaptive steerable filter banks; Lecture Notes in Computer Science.
[11] Christopher Rasmussen, (2004) Texture-based vanishing point voting for road shape estimation; BMVC.
[12] Avinash and Murali S., (2005) A voting scheme for inverse Hough transform based vanishing point determination; in Proceedings of International Conference on Cognition and Recognition, Mysore, India.
[13] Avinash and Murali S., (2007) Multiple vanishing point determination; in Proceedings of IEEE International Conference on Computer Vision and Information Technology, Aurangabad, India.
[14] Shuqiang Jiang, Huiying Liu, Zhao Zhao, Qingming Huang, Wen Gao, (2007) Generating video sequence from photo image for mobile screens by content analysis; ICME, pp. 1475-1478.
[15] Xian-Sheng Hua, Lie Lu, Hong-Jiang Zhang, (2004) Automatically converting photographic series into video; 12th ACM International Conference on Multimedia, pp. 708-715.
[16] Yun-Ki Baek, Young-Ho Seo, Dong-Wook Kim and Ji-Sang Yoo, (2012) Multiview video generation from 2-dimensional video; International Journal of Innovative Computing, Information and Control, Vol. 8, Number 5(A), pp. 3135-3148.
[17] Na-Eun Yang, Ji Won Lee, Rae-Hong Park, (2012) Depth map generation from a single image using local depth hypothesis; 2012 IEEE ICCE, pp. 311-312.
[18] Erick Delage, Honglak Lee, Andrew Y. Ng, (2006) A dynamic Bayesian network model for autonomous 3D reconstruction from a single indoor image; CVPR.
[19] Geetha Kiran A. and Murali S., (2013) Automatic video generation using floor segmentation from a single 2D image; 21st WSCG 2013 Conference on Computer Graphics, Visualization and Computer Vision, ISBN 978-80-86943-76-3.
[20] Erick Delage, Honglak Lee, and Andrew Y. Ng, (2005) Automatic single-image 3D reconstructions of indoor Manhattan world scenes; ISRR.
[21] D. Hoiem, A. A. Efros, and M. Hebert, (2005) Geometric context from a single image; 10th IEEE International Conference on Computer Vision.
[22] Ma Ling, Wang Jianming, Zhang Bo, Wang Shengbei, (2010) Automatic floor segmentation for indoor robot navigation; 2nd International Conference on Signal Processing Systems (ICSPS), pp. 684-689.
[23] Xue-Nan Cui, Young-Geun Kim, and Hakil Kim, (2009) Floor segmentation by computing plane normals from image motion fields for visual navigation; International Journal of Control, Automation, and Systems, pp. 788-798.
[24] Pedro F. Felzenszwalb and Daniel P. Huttenlocher, (2004) Efficient graph-based image segmentation; International Journal of Computer Vision.
[25] Young Geun Kim and Hakil Kim, (2004) Layered ground floor detection for vision-based mobile robot navigation; in IEEE Robotics and Automation (ICRA), Volume 1, pp. 13-18.
[26] Y. J. Jung, A. Baik, J. Kim, and D. Park, (2009) A novel 2D-to-3D conversion technique based on relative height depth cue; in Proc. Stereoscopic Displays and Applications XX, Vol. 7237.
[27] C.-C. Cheng, C.-T. Li, and L.-G. Chen, (2010) A novel 2D-to-3D conversion system using edge information; IEEE Trans. Consumer Electronics, Vol. 56, No. 3, pp. 1739-1745.
[28] R. C. Gonzalez and R. E. Woods, (2010) Digital Image Processing, Third Edition; Upper Saddle River, NJ: Pearson Education Inc.
[29] W.-N. Lie, C.-Y. Chen, and W.-C. Chen, (2011) 2D to 3D video conversion with key-frame depth propagation and trilateral filtering; Electron. Lett., Vol. 47, No. 5, pp. 319-321.
[30] A. Criminisi, I. Reid, and A. Zisserman, (2000) Single view metrology; International Journal of Computer Vision, 40:123-148.
[31] Feng Han, Song-Chun Zhu, (2004) Automatic single view building reconstruction by integrating segmentation; Conference on Computer Vision and Pattern Recognition Workshop.


Figure 7. (a) First frame (b) Intermediate frame (c) Final frame