
Target Position Designation Methods for

Parking Assistant System

Ho Gi Jung

The Graduate School

Yonsei University

School of Electrical and Electronic Engineering


Target Position Designation Methods for

Parking Assistant System

A Dissertation

Submitted to the School of Electrical and Electronic Engineering

and the Graduate School of Yonsei University

in partial fulfillment of the

requirements for the degree of

Doctor of Philosophy

Ho Gi Jung

August 2008


This certifies that the dissertation of

Ho Gi Jung is approved.

_______________________________________

Thesis Supervisor: Jaihie Kim

_______________________________________

Kwanghoon Sohn

_______________________________________

Sangyoun Lee

_______________________________________

Pal Joo Yoon

_______________________________________

Alberto Broggi

The Graduate School

Yonsei University

August 2008


Acknowledgments

I offer my deepest gratitude to Professor Jaihie Kim, Professor Kwanghoon Sohn, Professor Sangyoun Lee, Dr. Pal Joo Yoon, and Professor Alberto Broggi for guiding me so that I could write this dissertation logically and systematically. In particular, to Professor Jaihie Kim, who as my advisor through my undergraduate, master's, and doctoral studies gave me direction in scholarship and in life, helped me choose a successful career, and even introduced me to my spouse, I want to say that I will do my best to be a student he need never be ashamed of. When I had failed to enter the doctoral program and was discouraged, he encouraged me by saying that for one who wants to learn, a way will open someday; I am proud to be able to show him this completed dissertation.

I thank Executive Director Inyong Hwang, who gave me many opportunities despite my shortcomings and listened sincerely to my opinions, awakening my enthusiasm for company life, and Senior Executive Pal Joo Yoon, who guided me as a generous supervisor for eight long years. Because of these two, my working life has been rewarding.

Toward Dongsuk Kim, Yunhee Lee, Youngha Cho, and Junhee Lee, who trusted me and did their best, I feel both gratitude and apology. I hope that Dongsuk Kim, who always breathed life into my negative thoughts with his bright disposition, keeps that brightness to the end. I convey my thanks to Yunhee Lee, who quietly filled my place during my degree program.

I also thank my juniors Jae Kyu Suhr and Sowon Yoon, who made my part-time campus life, which could easily have been lonely, enjoyable and rewarding. To Jae Kyu Suhr, the only one who shared my research topic and stimulated me with his endless desire to explore, I want to say that he was an excellent partner not only as a junior but also as a co-researcher. Sowon Yoon, who overcame our twelve-year age gap to pioneer a new field with me, often reminded me of the original spirit with which I first began research; I wish her success in the great challenge of studying abroad.

Father, Mother, I am sorry that only now can I present you with a doctoral dissertation, but I believe that for that very reason you will be all the more pleased and proud. I thank my parents, who lived out the philosophy of never being greedy and always being diligent, and who believed in me. I also thank my older brother, who majored in the same field of electronic engineering, entered working life at a similar time, and always advised me like a friend and a senior.

Hyerim and Hyejung, watching you I was always grateful for our peaceful daily life and could be happy with a warm heart. Even if I cannot become the "doctor of everything" that you wish for, I will keep working to become a good doctor. To my wife, Eunjung Kang, who shared the weary days and carried us through the hard times: everything was possible because you were at my side. I dedicate this dissertation to you.

August 2008, Ho Gi Jung

Contents

List of Figures ..... v
List of Tables ..... xi
Abstract ..... xii
1 Introduction ..... 1
1.1 Parking Assistant System ..... 1
1.2 Target Position Designation Methods ..... 2
1.3 Structure of Dissertation ..... 4
2 Background ..... 6
2.1 User Interface-Based Methods ..... 6
2.2 Parking Slot Markings-Based Methods ..... 7
2.3 Free Space-Based Methods ..... 8
2.4 Infrastructure-Based Methods ..... 11
3 GUI-Based Method ..... 12
3.1 Introduction ..... 12
3.2 Three Coordinate Systems ..... 13
3.3 Drag and Drop Concept ..... 15
3.4 Mode Selection ..... 16
3.5 Translation and Rotation Operation ..... 18
4 Parking Slot Markings Recognition-Based Method ..... 19
4.1 Introduction ..... 19
4.2 One Touch Type ..... 23
4.2.1 Marking Line-Segment Recognition by Directional Intensity Gradient ..... 23
4.2.2 Recognition of Marking Line-Segment Direction ..... 25
4.2.3 Determination of Guideline ..... 28
4.2.4 Determination of Separating Line-Segments ..... 29
4.3 Two Touch Type ..... 32
4.3.1 System Structure ..... 32
4.3.2 Initial Direction Establishment ..... 33
4.3.3 Target Pattern Detection ..... 34
4.3.4 Target Parking Position Establishment ..... 40
4.4 Full-automatic Markings Recognition-Based Method ..... 42
4.4.1 Bird's Eye View Edge Image ..... 42
4.4.2 Hough Transform of Edge Image ..... 42
4.4.3 Marking Line Recognition ..... 45
4.4.4 Line-segment Recognition ..... 48
4.4.5 Guideline Recognition ..... 49
4.4.6 Dividing Marking Line-Segment ..... 52
5 Free Space-Based Method ..... 54
5.1 Binocular Stereo-Based Method ..... 54
5.1.1 Stereo Vision System ..... 55
5.1.2 Pixel Classification ..... 56
5.1.3 Feature-Based Stereo Matching ..... 57
5.1.4 Road/Object Separation ..... 59
5.1.5 Location of Parking Site ..... 62
5.2 Light Stripe Projection-Based Method ..... 66
5.2.1 System Configuration ..... 66
5.2.2 Light Stripe Detection ..... 68
5.2.3 Light Stripe Projection Method ..... 70
5.2.4 Occlusion Detection ..... 75
5.2.5 Pivot Detection ..... 76
5.2.6 Recognition of Opposite-Side Reference Point ..... 77
5.2.7 Target Parking Position ..... 78
5.3 Motion Stereo-Based Method ..... 80
5.3.1 Point Correspondences ..... 81
5.3.2 3D Reconstruction ..... 82
5.3.3 Degradation of 3D Structure near the Epipole ..... 85
5.3.4 De-rotation-Based Feature Selection and 3D Structure Mosaicking ..... 87
5.3.5 Metric Recovery ..... 91
5.3.6 Free Parking Space Detection ..... 94
5.4 Scanning Laser Radar-Based Method ..... 98
5.4.1 Removal of Invalid and Isolated Data ..... 99
5.4.2 Occlusion Detection ..... 101
5.4.3 Fragmentary Cluster Removal ..... 102
5.4.4 Corner Detection ..... 102
5.4.5 Rectangular Corner Detection ..... 104
5.4.6 Round Corner Detection ..... 110
5.4.7 Main-Reference Corner Recognition ..... 117
5.4.8 Subreference Corner Detection and Target Parking Position Establishment ..... 121
5.4.9 Extension to Parallel Parking and Sensor Price Problem ..... 123
6 Experimental Results ..... 125
6.1 GUI-Based Method ..... 125
6.1.1 Experimental Method ..... 125
6.1.2 Test Results ..... 128
6.2 Parking Slot Markings-Based Methods ..... 131
6.2.1 Experimental Results of One Touch Type ..... 131
6.2.2 Experimental Results of Two Touch Type ..... 132
6.2.3 Experimental Results of Full-automatic Method ..... 136
6.3 Free Space-Based Methods ..... 139
6.3.1 Experimental Results of Binocular Stereo-Based Method ..... 139
6.3.2 Experimental Results of LSP-Based Method ..... 140
6.3.3 Experimental Results of Motion Stereo-Based Method ..... 141
6.3.4 Experimental Results of Scanning Laser Radar-Based Method ..... 149
7 Conclusion ..... 158
7.1 Contributions ..... 158
7.2 Future Works ..... 160
7.3 Prospective and Strategy ..... 161
Bibliography ..... 165
Summary (in Korean) ..... 177

List of Figures

3.1 Typical installation of camera and HMI ..... 13
3.2 Construction procedure of bird's eye view image ..... 14
3.3 Target position rectangle as moving and rotating cursor ..... 16
3.4 Cross-product between a rectangle-side and corner-pointing ..... 17
3.5 Determining whether a point is in a rectangle or not ..... 17
3.6 Transformation calculation ..... 18
4.1 System configuration of semi-automatic parking system and one-touch type concept ..... 20
4.2 Two-touch type concept ..... 21
4.3 Procedure of marking line-segment recognition ..... 24
4.4 Directional intensity gradient of local window around a cross-point ..... 26
4.5 Estimation of line-segment direction by model based fitness estimation ..... 27
4.6 Edge following refines the direction of detected marking line-segments ..... 28
4.7 Guideline recognized by structural relation between cross-points ..... 29
4.8 Measuring I_on(s) and I_off(s) bi-directionally to find both-side 'T'-shape junctions ..... 30
4.9 L_separating(s) can find separating line-segments irrespective of local intensity variation ..... 31
4.10 The flow chart of two-touch type method ..... 32
4.11 The longitudinal direction of target position can be initialized with two seed points ..... 34
4.12 Rectified image for two types of parking slot markings ..... 35
4.13 T-shape target pattern detection ..... 37
4.14 Π-shape target pattern detection ..... 39
4.15 Target pattern detection result and target parking position establishment result ..... 41
4.16 Bird's eye view image and edge image ..... 42
4.17 Hough transform of a parallel line pair ..... 45
4.18 Structure of 3D filter ..... 47
4.19 Recognized marking lines ..... 48
4.20 Recognized marking line-segments ..... 49
4.21 Distance between a point and a line-segment ..... 50
4.22 Guideline is likely to be normal to the gaze ..... 51
4.23 Recognized guideline ..... 52
4.24 Recognized dividing marking line-segments ..... 53
5.1 Stereo camera installed on test vehicle ..... 55
5.2 Stereo image ..... 55
5.3 Pixel classification result ..... 57
5.4 Stereo matching result ..... 58
5.5 Road/object separation result ..... 59
5.6 Bird's eye view of parking side marking ..... 61
5.7 Obstacle depth map ..... 62
5.8 Recognized guideline ..... 63
5.9 Obstacle histogram ..... 64
5.10 Seed point and search range ..... 64
5.11 Detected parking site ..... 65
5.12 System configuration ..... 67
5.13 Rectified difference image ..... 68
5.14 Detected light stripe ..... 70
5.15 Normalization of configuration ..... 70
5.16 Relation between light plane and light stripe image ..... 71
5.17 3D recognition result ..... 74
5.18 Detected occlusion points and occlusion ..... 76
5.19 Pivot detection result ..... 76
5.20 Opposite-side reference point detection result ..... 77
5.21 Established target position ..... 79
5.22 Location of the epipole in a typical parking situation ..... 86
5.23 3D error near the epipole ..... 87
5.24 De-rotation-based feature selection ..... 88
5.25 Epipole locations ..... 90
5.26 Effect of feature selection and 3D mosaicking methods ..... 91
5.27 Configuration of rearview camera ..... 92
5.28 Density of the Y-axis coordinates of the 3D points ..... 93
5.29 Result of metric recovery ..... 94
5.30 Outline point selection ..... 95
5.31 Opposite side vehicle detection ..... 97
5.32 Detected free parking space depicted on the last frame of the image sequence ..... 97
5.33 Situation wherein free parking space is between parked vehicles ..... 100
5.34 Range data after the removal of invalid data and isolated data ..... 100
5.35 Occlusion detection result ..... 101
5.36 Result of fragmentary cluster removal ..... 102
5.37 Flowchart of corner detection ..... 103
5.38 Points before P_n should satisfy l_1 and points after P_n should satisfy l_2 ..... 105
5.39 Rectangular corner filtering error and corner candidate ..... 106
5.40 The vertex of corner is refined to the crosspoint of two recognized lines ..... 107
5.41 Cluster having line shape also has small E_rectangular_corner(C, n) value ..... 108
5.42 Comparison of E_corner(C) between corner and line case ..... 109
5.43 Newly developed rectangular corner detection is unaffected by orientation and robust to noise ..... 110
5.44 Effect of round corner detection ..... 111
5.45 Results of round corner detection when the cluster is curve-line type ..... 117
5.46 Corners that remained only within ROI ..... 118
5.47 Free parking space condition ..... 120
5.48 Corners satisfying free parking space conditions ..... 120
5.49 Recognized main-reference corner ..... 121
5.50 Subreference corner projection and outward border ..... 122
5.51 Established final target parking position ..... 123
5.52 A case when outward border is set by the projection of subreference corner ..... 123
6.1 Garage parking cases in bird's eye view image ..... 126
6.2 Garage parking cases in distorted image ..... 127
6.3 Parallel parking cases in bird's eye view image ..... 127
6.4 Parallel parking cases in distorted image ..... 128
6.5 Case with adjacent vehicles and torn markings ..... 131
6.6 Case with strong sunshine causing locally changing intensity ..... 132
6.7 Established target parking position in the case of 11-shape type ..... 133
6.8 Established target parking position in the case of rectangular type ..... 133
6.9 Result when another marking line is drawn in front of target parking position ..... 134
6.10 Result when dark shadow is cast near the target position ..... 134
6.11 Result when the target position is far ..... 135
6.12 Case study with occlusion by adjacent vehicles ..... 137
6.13 Detected parking site ..... 139
6.14 Target position when neighboring side is a vehicle ..... 140
6.15 Recognized target position between vehicles ..... 140
6.16 Recognized target position when adjacent objects have a large difference in depth ..... 141
6.17 Fisheye camera and laser scanner mounted on the automobile ..... 142
6.18 3D reconstruction accuracy ..... 143
6.19 Successful detection examples ..... 145
6.20 Four types of failures ..... 146
6.21 Accuracy evaluation results ..... 148
6.22 Sensor installation ..... 149
6.23 Cloudy day at outdoor parking lot ..... 150
6.24 Case when range data is disconnected and noisy ..... 151
6.25 Case when the adjacent vehicles have different depth position and there are various objects around the free parking space ..... 152
6.26 Outdoor parking lot in cloudy weather surrounded by various objects ..... 152
6.27 Case against the sun and vehicle with a round front-end ..... 153
6.28 Free parking space is between vehicle and pillar in underground parking lot ..... 154
6.29 Free parking space is between pillar and vehicle in underground parking lot ..... 154
6.30 Case with black vehicle in underground parking lot ..... 155
6.31 Free parking space is between a light truck and a van ..... 156
6.32 Free parking space is between a sedan and a large truck. The background includes a flower garden and apartment with repetitive pattern ..... 156
6.33 One of two failed situations ..... 157
7.1 Prospective diagram of parking assistant system ..... 163
7.2 Combination and evolutionary strategy ..... 164

List of Tables

6.1 Operation time average of garage parking situations ..... 129
6.2 Operation time average of parallel parking situations ..... 129
6.3 Clicking number average of garage parking situations ..... 130
6.4 Clicking number average of parallel parking situations ..... 130

Abstract

Target Position Designation Methods for Parking Assistant System

Ho Gi Jung

School of Electrical and Electronic Engineering

The Graduate School

Yonsei University

This dissertation summarizes newly developed target parking position designation methods and provides a prospect for parking assistant systems. The achievements cover almost every approach to target position designation. Because each achievement is original and shows better performance than its competitors, we expect them to establish a bridgehead for participation in the next-generation market.

The developed methods include drag and drop interface-based, semi-automatic markings recognition-based, full-automatic markings recognition-based, binocular stereo-based, light stripe projection-based, motion stereo-based, and scanning laser radar-based methods. In particular, the drag and drop interface-based and semi-automatic markings recognition-based methods are so efficient and lightweight that they are expected to be integrated into microprocessor-grade ECUs immediately.

The scanning laser radar-based method shows the best performance. Therefore, if it can overcome the pitfalls of higher price and bigger size, scanning laser radar could find a new application area, and the parking assistant system would acquire a reliable method for recognizing the free space between parked vehicles. The developed method is based on novel rectangular corner detection and round corner detection, which are robust to noise and invariant to rotation and scale.

The light stripe projection-based method shows the possibility of a solution for underground parking lots. The developed motion stereo-based method identifies the cause of 3D information degradation and shows that 3D information mosaicking can solve the problem. The full-automatic markings recognition-based and binocular stereo-based methods show that the image analysis of still images can find the proper parking position.

At the end of this dissertation, the contributions and future works are listed, and a technology prospect diagram derived from publications and news is depicted. Furthermore, considering our competitive power and the predicted technology roadmap, this dissertation suggests a four-step evolutionary strategy.

Key words: Target parking position designation, drag and drop interface, parking slot

markings, scanning laser radar, light stripe projection, stereo vision.


Chapter 1

Introduction

1.1 Parking Assistant System

Recently, customers have shown a growing interest in parking aid products.

According to J. D. Power’s ‘2001 Emerging Technologies Study’, 66% of customers

indicated that they were likely to purchase parking aid products [1]. The ‘Top 10

High-Tech Car Safety Technologies’ published by Tori Tellem included the rearview

camera [2]. Such customer interest translated into actual purchases: in 2003, the Toyota Prius adopted IPA (Intelligent Parking Assist), which automates the steering maneuver of backup parking, as an optional feature, and it was chosen by about 80% of Prius buyers [3]. Based on customers' increasing interest and the successful introduction of early products, many car and component manufacturers are competitively preparing the release of more parking aid products [4], [5].

Various types of parking aid products are being developed:

- Displaying predicted path based on steering angle and distance markings

graphically on the rearview image [6], [7].

- Displaying an overhead-view image around the subjective vehicle by mosaicking

images captured from multiple cameras pointed toward various directions, or by

mosaicking consecutive images captured from a rearward camera with obstacle

information detected by ultrasonic sensors [8], [9].

- Informing the driver of the required steering maneuver by visual or audio interface


[10]–[13].

- Automating the steering maneuver to drive the subjective vehicle to an initial

position required for safe and convenient parking operation [14].

- Automating the steering maneuver from an initial position to the target parking

position [3]–[6], [13], [15]–[19].

- Fully automatic parking [20], [21].

The preferred type of parking aid product varies according to the customer’s

regional characteristics. In Europe, parallel parking is dominant, whereas in Asia,

customers are more interested in perpendicular parking. On the other hand, in

America, there is great demand for backward monitoring and private garage solutions.

1.2 Target Position Designation Methods

A parking aid system consists of target position designation, path planning,

and parking guidance by user interface or path tracking by active steering. The

methods of target position designation can be divided into four categories:

(1) User interface-based method: an interactive method using the steering maneuver and an augmented display [7]; locating the target position with arrow buttons on a GUI (graphical user interface) [6]; and moving and rotating the target position by a drag-and-drop operation, like a cursor [22].

(2) Parking slot marking-based method: It automatically recognizes a parking slot by image understanding [23]–[26], and utilizes hints provided by the driver through a touch screen [25].

(3) Method based on free space between parked vehicles: It recognizes free parking


space between parked vehicles using various sensing technologies. According to

the type of sensor and the signal processing technology being used, the methods

can be divided into seven categories: ultrasonic sensor-based method, short

range radar (SRR) sensor-based method, single-image understanding-based

method, motion stereo-based method, binocular stereo vision-based method,

light stripe projection-based method and scanning laser radar-based method.

(4) Infrastructure-based method: It designates the target position utilizing a local global positioning system (GPS), a digital map, and communication with the parking management system [11], [12].

Although the developed methods have achieved partial success, each has its drawbacks. Vision-based methods, including manual designation, cannot be used in dark illumination conditions. In particular, specular reflection and the smooth surface of the typical vehicle degrade the performance of binocular stereo and motion stereo. Moreover, if the color of a vehicle is dark, such as black, no feature extractor can detect useful feature points on the vehicle surface. Light stripe projection-based methods cannot be used outdoors during daytime because sunlight overpowers the entire spectral range of the imaging device [39]. It is reported that the ultrasonic sensor acquires practically useful range data in parallel parking situations because the incident angle between the sensor and the side facet of the objective vehicle is approximately zero. However, in perpendicular parking situations, the incident angle between the ultrasonic sensor and the side facets of adjacent vehicles is so large that range recognition usually fails [29]. Although SRR is expected to robustly detect the existence of vehicles, it is not applicable to detecting the vehicle boundary on the near side of the subjective vehicle. In general, SRR has low angular accuracy and outputs only a limited number of range data. Furthermore, the outputs are not deterministic because response strengths are considerably sensitive to the object's shape, orientation, and reflectance [30]. Scanning laser radar showed ideal performance in garage parking situations, but must overcome the problems of higher cost and bigger size [50]. As infrastructure-based methods require enormous investment in the parking management system, and users must adopt an automated vehicle to enjoy the benefit, the practical application of this technology is expected to be delayed [11], [12].

Therefore, it is impossible to find one solution that works for every situation, now and in the future. Instead, a combination of various methods and their evolutionary application is the practical strategy.

1.3 Structure of Dissertation

This dissertation summarizes newly developed target position designation methods and provides a prospect for parking assistant systems. As the ultrasonic sensor-based method has been dominant in parallel parking situations and has been developed to a practical level, this dissertation focuses on target position designation methods for garage parking, or perpendicular parking, situations.

As mentioned in the previous section, there is no almighty solution for every situation. We had to find an efficient combination of various methods and simultaneously devise creative and original methods in each category. Following the previously mentioned categories, the newly developed methods are divided into three groups: GUI-based, parking slot marking-based, and free space-based. Chapter 2 overviews previously developed target position designation methods in more detail. Chapter 3 explains the developed GUI-based method, which incorporates the drag and drop concept into the parking application. Chapter 4 explains the parking slot markings recognition-based methods: a one-touch type specific to box-type parking slot markings, a two-touch type developed for parking slot markings without a guideline, and a full-automatic method. Chapter 5 explains the free space-based methods: the binocular stereo-based method, the light stripe projection-based method developed for underground parking lots, the motion stereo-based method, and the scanning laser radar-based method. In chapter 6, the developed methods are verified by various experiments. Finally, the conclusion, chapter 7, summarizes the contributions and future works and suggests an evolutionary application strategy.


Chapter 2

Background

The methods of target position designation can be divided into four categories:

user interface-based method, parking slot marking-based method, free space-based

method and infrastructure-based method. This chapter reviews previous achievements

of each method.

2.1 User Interface-Based Methods

Aisin Seiki devised an interactive method using the steering maneuver and an augmented display for the parallel parking situation [7]. The driver backs up until the vertical pole shown on the display reaches the rear end of the forward parked vehicle. The parking frame indicating the target parking position appears on the display as the steering wheel is turned and moves according to the steering wheel angle. Placing this parking frame at the final target position determines both the target parking position and the steering angle for the first turn at the same time.

Toyota devised an arrow button-based GUI for target position designation when the first IPA (Intelligent Parking Assist) was applied to the Prius [6]. The system displays the current target position by drawing the corresponding rectangle on the rear view image. Besides the target position, the system draws arrow buttons on the display: four directional buttons for the parallel parking situation, and eight directional buttons and two rotational buttons for the garage parking situation. The driver moves the

current target position to the desired one by repeatedly clicking the arrow buttons.

Tokyo University, Toshiba, and Yazaki developed an infrastructure-based target position designation method and a GUI-based steering guidance system [11], [12]. Once the system establishes the target position with the infrastructure, it informs the driver of the desired steering angle by showing the desired tire position graphically on its user interface. The driver can align the actual steering angle to the desired one by maneuvering the steering wheel.

2.2 Parking Slot Markings-Based Methods

Jin Xu et al. developed a parking slot markings-based target position designation method assuming that the colors of the parking slot markings are quite uniform and different from the background [23]. The system extracts parking slot markings by a color segmentation method based on an RCE neural network: the color of an object of interest, formulated in the HSI color space, is learned by the training process of the RCE neural network. When the pixels corresponding to the markings have been segmented out, the system uses the outline of these pixels as the geometric feature. By scanning through the processed image column by column from bottom to top, a set of isolated contour points is acquired. Then, by estimating the two lines of this contour using the least squares method, the equations of the two lines in the image plane are calculated.
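The line-estimation step lends itself to a compact illustration. The following is a minimal sketch, not Jin Xu et al.'s implementation: it assumes the contour points have already been split into the two groups belonging to the two marking lines (the split itself is not shown, and the sample coordinates are illustrative), and fits y = a*x + b to one group by least squares.

```python
import numpy as np

def fit_line(points: np.ndarray):
    """Least-squares fit of y = a*x + b to an (N, 2) array of (x, y) points."""
    x, y = points[:, 0], points[:, 1]
    A = np.stack([x, np.ones_like(x)], axis=1)      # design matrix [x, 1]
    (a, b), *_ = np.linalg.lstsq(A, y, rcond=None)  # minimizes ||A [a b]^T - y||
    return a, b

# Illustrative contour points of one marking-line edge (bottom-to-top scan).
left_edge = np.array([[10.0, 200.0], [12.0, 180.0], [14.0, 161.0], [16.0, 139.0]])
a, b = fit_line(left_edge)  # slope and intercept of the marking line in the image plane
```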

Aisin Seiki and Toyota developed a parking slot marking recognition algorithm for straight-line and horseshoe types [24]. The algorithm consists of six steps: ROI setting, filtering (primary differentiation), binarization, feature point transformation (2D to 3D), RANSAC (Random Sample Consensus)-based straight line extraction, and target position establishment. To improve recognition performance and reduce processing time, an ROI is used; the system constantly calculates it based on the vehicle position and the vehicle deflection angle just before backing up. As the virtual target position is calculated via the vehicle's swept path and deflection angle, its positional accuracy is low. However, it is possible to eliminate outliers well through an appropriate ROI size. As a result, successful location of the target parking spot could be obtained in 85% of sample images. With this function, the time for parking target setting is reduced from 14 seconds to 5 seconds, which demonstrates the improvement in user-friendliness of the parking assist system.

2.3 Free Space-Based Methods

The method based on free space between parked vehicles can be divided into

seven categories according to the type of sensor and the signal processing technology

being used:

(1) Ultrasonic sensor-based method: This is the most common target position designation method for parallel parking. The system collects range data as the vehicle passes by a free parking space and then registers the range data using odometry to construct a depth map of the region beside the subjective vehicle (see the sketch after this list). The major difference between these sensors and the ones employed in previous parking systems is the opening angle: a narrow opening angle is required to avoid cross-talk effects, in which the sensors interfere with each other and thus produce unreliable measurements. To precisely measure the edges of the free parking space, Siemens developed a new sensor with a modified sensing area, that is, horizontally narrow and vertically wide [16]. Linköping University and Toyota both utilized the correlation between multiple range data sets, using multi-lateration [15], [28] and rotational correction [27], respectively.

(2) Short range radar (SRR) sensor-based method: For parallel parking, the SRR sensor was tested instead of the ultrasonic sensor [10]. If a wide-angle SRR, which is usually applied in applications like blind spot detection or side crash detection, is used, the azimuth measurement accuracy and resolution are quite low and do not meet the requirements, due to the small antenna size and a beam width of 60 degrees. A method improving angular accuracy with a synthetic aperture radar (SAR) algorithm was proposed [30].

(3) Single-image understanding-based method: This uses pattern recognition-based free space detection; horizontal edge-based vehicle position detection was tried experimentally [31]. However, it assumed that a vehicle edge had adequate length depending on the distance and that parked vehicles were passenger vehicles with a fixed aspect ratio.

(4) Motion stereo-based method: IMRA Europe developed a system that provides the driver with a virtual image from the optimal viewpoint by intermediate view reconstruction (IVR). The system reconstructs three-dimensional (3D) information via odometry, using features tracked through consecutive images captured while the subjective vehicle is moving [32]–[35]. The input signals of the system are a CCD camera, ABS sensors, and the reverse gear. From these signals, the external parameters of the camera must be computed by odometry as fast as possible. The relative displacement is effectively sufficient to determine the external camera calibration under the assumptions that the ground is planar and that the height, tilt angle, and roll angle of the camera are known and constant. The position estimation is based on measured signals from both the rear-left and rear-right ABS sensors.

(5) Binocular stereo vision-based method: The system of [37] reconstructs 3D information by feature-based stereo matching and an iterative closest point (ICP) algorithm with respect to a vehicle model, and then designates the target position. Pose estimation of objects is performed on this data in two steps: first, a template matching algorithm extracts potential vehicles from a depth map; in the second step, the 3D data is segmented and planar surfaces modeling vehicles are fitted to it. Four different models corresponding to the most common vehicle shapes have been tested. The fitting of these surface models is based on an ICP algorithm, which estimates the pose of the parked cars, and a secondary probability estimate allows the actual classification of vehicles.

(6) Light stripe projection-based method: The system recognizes 3D information by analyzing the light stripe produced by a light plane projector and reflected back from objects. We were the first to apply this approach to parking assistance, and there were no previous related works.

(7) Scanning laser radar-based method: Alexander Schanz et al. installed a scanning laser radar vertically on the side of the subjective vehicle; the radar then collected range data while passing by the free parking space. They proposed a system that constructed the depth map by registering range data with odometry and then recognizing the free parking space. In this case, the scanning laser radar was used as a precise ultrasonic sensor with a narrow field of view (FOV) [20], [40], [41]. The CyCab project installed a scanning laser radar horizontally on the front end of the subjective vehicle; the radar recognized the locations of parked vehicles, which were utilized for path planning and simultaneous localization and mapping (SLAM) [42]. It is noteworthy that CyCab used the impractical assumption that all vehicles in the car park belonged to the same vehicle class.
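The odometry-based registration shared by the ultrasonic method of category (1) and the scanning laser radar method of category (7) can be sketched briefly. The snippet below is a minimal illustration under simplifying assumptions, not any of the cited systems: the sensor looks exactly 90 degrees to the left of a noiselessly known vehicle pose, and all numbers are illustrative.

```python
import math
from typing import List, Tuple

def register(poses: List[Tuple[float, float, float]],
             ranges: List[float]) -> List[Tuple[float, float]]:
    """Convert per-pose side-looking range readings into world-frame points.

    Each reading r taken at vehicle pose (x, y, heading) is projected along
    the sensor's viewing direction and accumulated into a 2D depth map.
    """
    points = []
    for (x, y, heading), r in zip(poses, ranges):
        side = heading + math.pi / 2.0  # sensor fixed 90 deg to the left (assumed)
        points.append((x + r * math.cos(side), y + r * math.sin(side)))
    return points

# Vehicle driving along +x past parked vehicles; the jump in lateral depth
# (1.2 m -> 4.0 m) marks the free parking space between them.
poses = [(s * 0.5, 0.0, 0.0) for s in range(8)]
ranges = [1.2, 1.2, 1.2, 4.0, 4.0, 1.2, 1.2, 1.2]
depth_map = register(poses, ranges)
```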

2.4 Infrastructure-Based Methods

Tokyo University, Toshiba, and Yazaki developed iCAN (Intelligent Car Navigation System) and selected the parking assist system as its first application topic [11], [12]. The parking assistance system consists of parking administration, sensing, and driver assistance systems. The parking administration system handles 1) parking lot management, 2) parking spot assignment, and 3) transmission of parking data for each vehicle. The sensing system provides real-time vehicle state estimates (position, velocity, and so on) to the driver assistance system, which generates and displays the commands necessary for undertaking the parking process. An audiovisual interface was developed to achieve the desired system performance. It comprises four modes: the navigation mode, which performs rough guidance, and the handle operation, target position, and gear change modes, which handle fine guidance.


Chapter 3

GUI-Based Method

3.1 Introduction

In spite of the rapid progress of automatic target position designation methods, manual designation is expected to play two important roles. First, manual designation can be used to refine the target position established by automatic designation methods. In general, the parking system provides a rear view image to help the driver understand the ongoing parking operation. Fig. 3.1 shows the typically installed rear view camera and user interface. Furthermore, the system needs to receive the driver's confirmation of the automatically established target position, and at that moment the driver can naturally refine the target position with the manual designation method. Second, manual designation is necessary as a backup for the automatic designation method. Because the sensors used in automatic designation have their own weaknesses, the recognition result cannot always be perfect. If the system provides the driver a chance to modify the target position by the manual designation method, faults of the automatic designation method can be corrected without serious inconvenience.


Fig. 3.1: Typical installation of camera and HMI; (a) rear view camera, (b) touch

screen based HMI.

This chapter explains a novel manual designation method that enhances driver comfort by shortening the operation time and eliminating repetitive operation [22]. The basic idea is the drag and drop operation, which is familiar to PC users. The target position is depicted as a rectangle on the touch screen based HMI. The driver can move the rectangle by dragging its inside and rotate the rectangle by dragging its outside.

3.2 Three Coordinate Systems

The proposed system compensates for the fisheye lens distortion of the input image and constructs the bird's eye view image using a homography. The installed rear view camera uses a fisheye lens, or wide-angle lens, to cover a wide field of view (FOV) during the parking procedure. As shown in Fig. 3.2, the input image through the fisheye lens captures a wide range of the rear scene but inevitably includes severe distortion. It is well known that the major component of fisheye lens distortion is radial distortion, which is defined in terms of the distance from the image centre [52]. Modeling the radial distortion as a 5th-order polynomial using the Caltech calibration toolbox and approximating its inverse mapping by another 5th-order polynomial, the proposed system acquires an undistorted image as shown in Fig. 3.2 [53], [54]. The homography, which defines a one-to-one correspondence between coordinates in the undistorted image and coordinates in the bird's eye view image, can be calculated from the height and angle of the camera with respect to the ground surface [38]. A bird's eye view is a virtual image taken from the sky, assuming all objects are attached to the ground surface. The general pinhole camera model causes a perspective distortion, by which the size of an object's image changes with its distance from the camera. In contrast, because a bird's eye view image eliminates the perspective distortion of objects attached to the ground surface, it is suitable for recognizing objects painted on the ground surface. The final image of Fig. 3.2 is the bird's eye view image of the undistorted image.

Fig. 3.2: Construction procedure of bird’s eye view image.
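The warp just described can be condensed into a short sketch. The code below is illustrative, not the dissertation's implementation: it uses backward mapping (bird's eye view pixel back to raw pixel) with nearest-neighbor sampling, a simple two-coefficient radial model in place of the 5th-order Caltech polynomial, and assumed values for the homography H and the intrinsics (K1, K2, CX, CY, F).

```python
import numpy as np

K1, K2 = -0.30, 0.08                    # radial distortion coefficients (assumed)
H = np.array([[1.0, 0.4,  -80.0],       # undistorted image -> bird's eye view (assumed)
              [0.0, 2.0, -120.0],
              [0.0, 0.004,  1.0]])
CX, CY, F = 320.0, 240.0, 300.0         # principal point and focal length (assumed)

def birdseye_view(src: np.ndarray, out_h: int, out_w: int) -> np.ndarray:
    """Backward-map every bird's eye view pixel into the raw fisheye image."""
    out = np.zeros((out_h, out_w), dtype=src.dtype)
    H_inv = np.linalg.inv(H)
    for v in range(out_h):
        for u in range(out_w):
            # 1) bird's eye view pixel -> undistorted image point (inverse homography)
            p = H_inv @ np.array([u, v, 1.0])
            xu, yu = (p[0] / p[2] - CX) / F, (p[1] / p[2] - CY) / F
            # 2) undistorted point -> distorted (raw) point via the radial model
            r2 = xu * xu + yu * yu
            gain = 1.0 + K1 * r2 + K2 * r2 * r2
            xd, yd = int(round(F * xu * gain + CX)), int(round(F * yu * gain + CY))
            if 0 <= yd < src.shape[0] and 0 <= xd < src.shape[1]:
                out[v, u] = src[yd, xd]
    return out

frame = np.zeros((480, 640), dtype=np.uint8)  # stand-in for a rear view frame
top = birdseye_view(frame, 400, 400)
```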


3.3 Drag and Drop Concept

The target position is a rectangle in the world coordinate system, or equivalently the bird's eye view image coordinate system. It is managed by its 2D location (Xw, Zw) and its angle φ with respect to the Xw-axis. The width and length of the target position rectangle are determined from the ego-vehicle's width and length. Through the radial distortion model and the homography, a point in the bird's eye view image coordinate system corresponds to a point in the distorted image coordinate system, or input image coordinate system. Therefore, by converting all coordinates into the bird's eye view image coordinate system, we can implement every operation uniformly in one coordinate system. The target position rectangle and user input are handled in the bird's eye view image coordinate system and then converted to the proper coordinate system according to the display mode.
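Under these conventions the target position reduces to a small state record. A minimal sketch follows, with illustrative names; the corner order matches the rotating order used for mode selection in Section 3.4.

```python
import math
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class TargetPosition:
    xw: float       # location along Xw in the bird's eye view coordinate system
    zw: float       # location along Zw
    phi: float      # angle in radians with respect to the Xw-axis
    width: float    # from the ego-vehicle width (plus margin)
    length: float   # from the ego-vehicle length (plus margin)

    def corners(self) -> List[Tuple[float, float]]:
        """Four rectangle corners, in rotating order, in bird's eye view coordinates."""
        c, s = math.cos(self.phi), math.sin(self.phi)
        half = [(-self.width / 2, -self.length / 2), (self.width / 2, -self.length / 2),
                (self.width / 2, self.length / 2), (-self.width / 2, self.length / 2)]
        return [(self.xw + u * c - v * s, self.zw + u * s + v * c) for u, v in half]
```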

The target position rectangle displayed on the touch screen based HMI acts as a cursor while the driver is establishing the target position. The inside region of the rectangular target position is used as a moving cursor: the driver can move the target position by dragging the inside, as shown in Fig. 3.3(a). The outside region is used as a rotating cursor: the driver can rotate the target position, i.e., change its angle, by dragging the outside, as shown in Fig. 3.3(b). Three kinds of operations are needed: 1) a method for determining whether a driver's input, i.e., pointing point, is inside the target position rectangle; 2) calculation of the translation transformation from the driver's two consecutive inputs; and 3) calculation of the rotation transformation from the driver's two consecutive inputs.


Fig. 3.3: Target position rectangle as moving and rotating cursor; (a) moving by

dragging the inside of rectangle, (b) rotating by dragging the outside of rectangle.

3.4 Mode Selection

Whether a point is inside a rectangle can be determined by checking whether the point lies on the same side of all four rectangle sides taken in a rotating direction. In this application, the relative locations of the four corner points cannot be fixed in advance because the rectangle can be rotated; only the order of the four corner points is fixed. Let C1, C2, C3, C4 be the four corner points of a rectangle in rotating order and T be the user's pointing point. We can define a cross-product between two vectors, e.g. C1C2 and C1T, as depicted in Fig. 3.4. If the z-components of all four cross-products have the same sign, the point T is located inside the rectangle, as shown in Fig. 3.5(a). Conversely, if any z-component of the four cross-products has a different sign, the point T is located outside the rectangle, as shown in Fig. 3.5(b).

Fig. 3.4: Cross-product between a rectangle-side and corner-pointing.

Fig. 3.5: Determining whether a point is in a rectangle or not; (a) four cross-products have the same direction, (b) one cross-product has a different direction.
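The same-sign test maps directly to a few lines of code. Below is a minimal sketch (function names are illustrative, not from the thesis) that decides between the moving and rotating cursor modes.

```python
import numpy as np

def point_in_rectangle(corners, t):
    """corners: the four corner points C1..C4 in rotating order; t: the
    touch point T. T is inside iff the z-components of the four
    cross-products CiC(i+1) x CiT all share one sign."""
    signs = []
    for i in range(4):
        c1 = np.asarray(corners[i], dtype=float)
        c2 = np.asarray(corners[(i + 1) % 4], dtype=float)
        edge, to_t = c2 - c1, np.asarray(t, dtype=float) - c1
        # z-component of the cross product of two in-plane vectors
        signs.append(edge[0] * to_t[1] - edge[1] * to_t[0])
    return all(s >= 0 for s in signs) or all(s <= 0 for s in signs)
```

A True result selects the moving (translation) cursor; False selects the rotating cursor.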

3.5 Translation and Rotation Operation

A translation transformation is applied equally to all points of the target rectangle. Therefore, a new target position can be determined by adding the difference vector between two consecutive user input points, P1 and P2, to the current target position, as shown in Fig. 3.6(a).

A rotation transformation with respect to the centre point C is applied equally to all points of the target rectangle. Therefore, a new target position can be determined by rotating the current target position about C by the between-angle θ of two consecutive user input points, as shown in Fig. 3.6(b).

Fig. 3.6: Transformation calculation; (a) translation vector by difference vector, (b) rotation angle by between-angle.
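Both updates are straightforward; the sketch below (illustrative names, not from the thesis) applies the difference vector and the between-angle to the rectangle corners.

```python
import numpy as np

def translate(corners, p1, p2):
    """Shift every corner by the difference vector of two touch points."""
    d = np.asarray(p2, float) - np.asarray(p1, float)
    return [np.asarray(c, float) + d for c in corners]

def rotate(corners, center, p1, p2):
    """Rotate every corner about the rectangle centre C by the
    between-angle of the two touch points, measured at C."""
    c = np.asarray(center, float)
    v1, v2 = np.asarray(p1, float) - c, np.asarray(p2, float) - c
    theta = np.arctan2(v2[1], v2[0]) - np.arctan2(v1[1], v1[0])
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    return [c + R @ (np.asarray(p, float) - c) for p in corners]
```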

Chapter 4

Parking Slot Markings Recognition-Based Method

4.1 Introduction

As a parking assistant system is expected to be used in urban areas, and typical parking lots, such as those of apartment complexes and department stores, are structured with parking slot markings, the recognition of parking slot markings will be one of the most frequently used target position designation methods. Jin Xu developed colour vision-based localization of parking site markings, which uses colour segmentation based on an RCE neural network, contour extraction based on the least squares method, and inverse perspective transformation [23]. Because that system depends only on the parking slot markings, it can be degraded by poor visual conditions such as stains on the markings, shadows, and occlusion by adjacent vehicles.

This chapter explains two semi-automatic methods (one-touch type [25], two-touch type [91]) and one full-automatic method [26]. The semi-automatic methods utilize the user's input, i.e. seed points, to simplify the recognition of parking slot markings. The one-touch type is designed for rectangular parking slot markings, and the two-touch type is developed to cope with various parking slot markings. In particular, the two-touch type can handle parking slot markings without a guideline, the line that separates parking slots from the roadway. The developed full-automatic method recognizes parking slot boundaries without the driver's help; however, the driver should confirm the recognition result by clicking one of the recognized parking slots.

Fig. 4.1 shows the system configuration of a general parking assistant system and the one-touch type concept. When the driver designates a seed-point inside the target parking-slot with the touch screen, the proposed method recognizes the corresponding parking-slot marking as the target parking position. The proposed method is designed not only to resolve the inconvenience of the Prius's fully manual designation method, but also to eliminate the heavy requirements of the stereo vision-based method, i.e. high-performance hardware and enormous computing power. After the compensation of fisheye lens distortion and the construction of the bird's eye view image, marking line-segments crossed by the gaze from the camera to the seed point are detected. The guideline, distinguishing parking-slots from the roadway, can be easily detected by simply finding the nearest of the detected line-segments. Subsequently, separating line-segments are detected based on the detected guideline. Experimental results show that the proposed method is simple and robust to noise and illumination change.

Fig. 4.1: System configuration of semi-automatic parking system and one-touch type concept.

Fig. 4.2 shows the two-touch type concept. The proposed method utilizes two seed points, which are pointed out by the driver on the touch screen; because it is based on two seed points, it is referred to as the two-touch type. The two-touch type makes two contributions. 1) It does not need to assume that parking slots and the roadway are separated by a line; therefore, it can be applied to various kinds of parking slot markings. 2) As the target of recognition is pointed out directly, the ROI can be established narrowly; therefore, the computational load and memory requirement of fisheye lens-related rectification and bird's eye view image construction can be reduced drastically.

Fig. 4.2: Two-touch type concept.

The proposed full-automatic method consists of six phases: construction of the bird's eye view edge image of the input image captured with a wide-angle lens, Hough transform of the edge image, marking line recognition by peak pair detection, marking line-segment recognition, guideline recognition using a modified distance between a point and a line-segment, and dividing marking line-segment recognition. The peak pair detection uses the assumption that one marking line-segment appears as a parallel line-segment pair separated by a fixed width in the edge image, forming a characteristic pattern in Hough space. A one-dimensional filter incorporating this a priori knowledge successfully detects marking line-segments. It has much in common with newly developed approaches that focus on the geometrical structure of peaks in Hough space [56]-[58]. The modified distance between a point and a line-segment is designed to reflect the geometrical structure of parking slot markings. Experiments show that the proposed method can successfully recognize parking slot markings in spite of severe occlusion by adjacent vehicles.

4.2 One Touch Type

Parking slot markings consist of one guideline and separating line-segments, as shown in Fig. 4.3(a). To recognize the parking-slot markings, the marking line-segment distinguishing parking slots from the roadway should be recognized first; because this line-segment is the reference for the remaining recognition procedures, it is called the guideline. Each parking slot is delimited by two line-segments perpendicular to the guideline, which are called separating line-segments.

4.2.1 Marking Line-Segment Recognition by Directional Intensity Gradient

The proposed system recognizes marking line-segments using the directional intensity-gradient on a line lying from the seed-point to the camera. As shown in Fig. 4.3(a), the vector from the seed-point to the camera is denoted by vseed point-camera and its unit vector by useed point-camera. Fig. 4.3(b) shows the intensity profile of the pixels on this line in units of pixel length s. If the start point ps and unit vector u are fixed, the intensity of the pixel displaced by s in the direction of u from ps, i.e. I(ps+s·u), is written with the simple notation I(s). Because the line crosses two line-segments, two intensity peaks with the width of a line-segment can be observed.

Equation (4.1) defines the directional intensity-gradient of a point p (x,y) with respect to a vector u, dI(p,u). Because the camera maintains a fixed height and angle with respect to the ground surface, a marking line-segment painted on the ground surface appears with a fixed width W. Therefore, the directional intensity-gradient, which uses the average intensity over W/2 intervals, is robust to noise while detecting the edges of interest.

Fig. 4.3: Procedure of marking line-segment recognition; (a) vseed point-camera, (b) intensity profile, (c) dI(s) and recognized edges, (d) recognized marking line-segments.

$dI(\mathbf{p},\mathbf{u}) = \frac{1}{W/2}\sum_{i=1}^{W/2} I(\mathbf{p}-i\cdot\mathbf{u}) - \frac{1}{W/2}\sum_{i=1}^{W/2} I(\mathbf{p}+i\cdot\mathbf{u})$   (4.1)

Fig. 4.3(c) shows the profile of the directional intensity-gradient of the line with respect to useed point-camera, i.e. dI(pseed point+s·useed point-camera, useed point-camera), which is denoted by the simple notation dI(s). Positive peaks correspond to the camera-side edges of marking line-segments and negative peaks correspond to the seed-point-side edges. Because the camera-side edge is easy to follow, it is taken as the position of the marking line-segment. The threshold for positive peak detection, θpositive peak, is defined adaptively as in equation (4.2). Fig. 4.3(c) shows the established threshold and the recognized positive peaks, and Fig. 4.3(d) shows the recognized marking line-segments in the bird's eye view image.

$\theta_{\text{positive peak}} = \frac{1}{3}\left(\max_{s} I(s) - \operatorname{avg}_{s} I(s)\right)$   (4.2)
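Equations (4.1) and (4.2) translate directly into a 1D scan along the gaze line. The sketch below is a minimal illustration on a sampled intensity profile I(s) held in a NumPy array; the function names are illustrative, and the threshold follows the reconstruction of equation (4.2) above.

```python
import numpy as np

def directional_gradient(profile, half_w):
    """dI(s) of equation (4.1) on a 1D profile I(s): the mean of the
    half_w samples behind s minus the mean of the half_w samples ahead."""
    dI = np.zeros(len(profile))
    for s in range(half_w, len(profile) - half_w):
        behind = profile[s - half_w:s].mean()
        ahead = profile[s + 1:s + 1 + half_w].mean()
        dI[s] = behind - ahead
    return dI

def positive_peak_positions(profile, dI):
    """Adaptive threshold of equation (4.2) applied to dI(s)."""
    theta = (profile.max() - profile.mean()) / 3.0
    return np.flatnonzero(dI > theta)   # camera-side edge candidates
```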

4.2.2 Recognition of Marking Line-segment Direction

The proposed system detects the direction of marking line-segments using the directional intensity-gradient of a local window and edge following-based refinement. The edge following results can also eliminate falsely detected marking line-segments.

The directional intensity-gradient of a point displaced by (dx,dy) from a centre point pc (xc,yc) can be calculated as dI(pc+(dx,dy),u), which is denoted by the simple notation dI(dx,dy) when pc and u are fixed. The proposed system calculates the directional intensity-gradient, dI(pcross+(dx,dy),useed point-camera), of a (W+1)×(W+1) local window around each detected cross-point pcross. Here, dx and dy range over −W/2 ~ W/2. Fig. 4.4 shows the calculated dI(dx,dy) of a local window around a cross-point. It can be observed that the dI(dx,dy) array forms a ridge whose direction is the same as the edge direction.

Fig. 4.4: Directional intensity-gradient of local window around a cross-point.

To detect the direction of the ridge, the proposed system introduces fitnessridge(φ), which measures how similar a line rotated by φ around the cross-point is to the ridge direction, as in equation (4.3). As shown in Fig. 4.5(a), fitnessridge(φ) is the difference between two line-sums in dI(dx,dy), taken along two lines orthogonal to each other.

$fitness_{ridge}(\varphi) = \sum_{i=-W/2}^{W/2} dI(i\cos\varphi,\, i\sin\varphi) - \sum_{i=-W/2}^{W/2} dI\!\left(i\cos(\varphi+\tfrac{\pi}{2}),\, i\sin(\varphi+\tfrac{\pi}{2})\right)$   (4.3)

Fig. 4.5(b) shows the calculated fitnessridge(φ) over the range 0-180°; it can be approximated by a cosine function whose frequency f0 is 1/180°, as in equation (4.4). To eliminate the effect of noise, the estimated phase parameter is used to estimate the ridge direction, as in equation (4.5). In general, the amplitude and phase parameters can be estimated by MLE (Maximum Likelihood Estimation) [55]. The estimated cosine function in Fig. 4.5(b) shows that the maximum of fitnessridge(φ) can be detected robustly.

$fitness_{ridge}[n] = A\cos(2\pi f_0 n + \psi) + w[n]$, where $n$: integer index of $\varphi$, $w[n]$: white Gaussian noise   (4.4)

$\hat{\varphi} = -\dfrac{\hat{\psi}}{2\pi f_0}$, where $\hat{\psi} = \tan^{-1}\!\left(\dfrac{-\sum_{n=0}^{179} fitness_{ridge}[n]\,\sin(2\pi f_0 n)}{\sum_{n=0}^{179} fitness_{ridge}[n]\,\cos(2\pi f_0 n)}\right)$   (4.5)

Fig. 4.5: Estimation of line-segment direction by model-based fitness estimation; (a) dI(dx,dy) in 3D display, (b) estimated maximum value of fitnessridge(φ).
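The single-tone model of equations (4.4)-(4.5) makes the direction estimate a closed-form computation. The following sketch implements the estimator as reconstructed above, assuming fitnessridge is sampled at 1° steps over 0-179°.

```python
import numpy as np

def ridge_direction(fitness):
    """Fit fitness_ridge[n] = A*cos(2*pi*f0*n + psi) + w[n] (eq. (4.4))
    with f0 = 1/180 per degree and return the angle maximizing it."""
    n = np.arange(len(fitness), dtype=float)
    f0 = 1.0 / len(fitness)                 # one period over 0..179 deg
    s = np.sum(fitness * np.sin(2 * np.pi * f0 * n))
    c = np.sum(fitness * np.cos(2 * np.pi * f0 * n))
    psi_hat = np.arctan2(-s, c)             # MLE phase estimate, eq. (4.5)
    phi_hat = -psi_hat / (2 * np.pi * f0)   # argmax of the fitted cosine
    return phi_hat % len(fitness)           # ridge direction in degrees
```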

Edge following, starting from the detected cross-point in the estimated edge direction, refines the edge direction and eliminates falsely detected cross-points. The edge position estimate at step n+1 can be calculated from the cross-point pedge[0] and the edge direction at step n, uedge[n], as in equation (4.6). Finding the maximum of the local directional intensity-gradient dI(t), defined in equation (4.7), updates the edge position at step n+1 as in equation (4.8). nedge[n] is the unit vector normal to uedge[n], and tmax[n] is the relative position maximizing dI(t) in the nedge[n] direction, as shown in Fig. 4.6(a). The edge following iteration terminates when the new edge strength dI(tmax[n+1]) is definitely smaller than the edge strength of the cross-point, dI(tmax[0]), e.g. below 70% of it. The proposed system rejects detected cross-points whose number of successful edge following iterations is smaller than a threshold θedge following, to eliminate falsely detected marking line-segments. Finally, the refined edge direction uedge[n+1] is set to the unit vector from pedge[0] to pedge[n+1]. Fig. 4.6(b) shows the edge following results and the refined direction.

$\hat{\mathbf{p}}_{edge}[n+1] = \mathbf{p}_{edge}[0] + (n+1)\cdot ds\cdot\mathbf{u}_{edge}[n]$   (4.6)

$dI(t) = dI\!\left(\hat{\mathbf{p}}_{edge}[n+1] + t\cdot\mathbf{n}_{edge}[n],\; \mathbf{n}_{edge}[n]\right)$, where $t: -W/2 \sim W/2$   (4.7)

$\mathbf{p}_{edge}[n+1] = \hat{\mathbf{p}}_{edge}[n+1] + t_{max}[n+1]\cdot\mathbf{n}_{edge}[n]$   (4.8)

Fig. 4.6: Edge following refines the direction of detected marking line-segments; (a) edge following method, (b) edge following results.
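Equations (4.6)-(4.8) amount to a predict-search-update loop. The sketch below is one way to realize it; dI_local(p, n) stands for the directional intensity-gradient of equation (4.1) evaluated at point p with respect to normal n, and the step size ds and the 70% termination ratio are illustrative.

```python
import numpy as np

def follow_edge(dI_local, p_cross, u_init, ds=2.0, half_w=5, max_steps=40):
    """Edge following of equations (4.6)-(4.8). Returns the last accepted
    edge point and the number of successful iterations."""
    p0 = np.asarray(p_cross, float)
    u = np.asarray(u_init, float)
    n = np.array([-u[1], u[0]])                 # unit normal to the edge
    strength0 = dI_local(p0, n)                 # edge strength at the cross-point
    p = p0
    for step in range(1, max_steps + 1):
        p_pred = p0 + step * ds * u             # prediction, eq. (4.6)
        ts = np.arange(-half_w, half_w + 1)     # normal search range, eq. (4.7)
        vals = [dI_local(p_pred + t * n, n) for t in ts]
        if max(vals) < 0.7 * strength0:         # termination test (70%)
            return p, step - 1
        p = p_pred + ts[int(np.argmax(vals))] * n   # update, eq. (4.8)
        u = (p - p0) / np.linalg.norm(p - p0)   # refined edge direction
        n = np.array([-u[1], u[0]])
    return p, max_steps
```

Cross-points whose iteration count falls below θedge following are rejected as false detections.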

4.2.3 Determination of Guideline

If the seed-point designated by the driver is located in a valid parking-slot, the gaze-line from the seed-point to the camera should cross marking line-segments at least once, producing the corresponding cross-points. Among the marking line-segments validated by edge following, the guideline is the one whose cross-point is nearest to the camera position; in other words, the guideline minimizes the distance between its cross-point and the camera, |pcamera − pedge[0]|. Fig. 4.7 shows a recognized guideline.

Fig. 4.7: Guideline recognized by structural relation between cross-points.

4.2.4 Determination of Separating Line-segments

By searching for separating line-segments bi-directionally from the selection point pselection, obtained by projecting the seed-point onto the guideline, the proposed system recognizes the exact location of the target parking slot. pselection is calculated from the cross-point pcross and the guideline unit vector uguideline as in equation (4.9).

$\mathbf{p}_{selection} = \mathbf{p}_{cross} + \left(\left(\mathbf{p}_{seed\,point} - \mathbf{p}_{cross}\right)\cdot\mathbf{u}_{guideline}\right)\mathbf{u}_{guideline}$   (4.9)

$I_{on}(s) = \frac{1}{W/2}\sum_{t=0}^{W/2} I\!\left(\mathbf{p}_{selection} + s\cdot\mathbf{u}_{searching} + t\cdot\mathbf{n}_{guideline}\right)$   (4.10)

$I_{off}(s) = \frac{1}{W}\sum_{t=W/2+1}^{3W/2} I\!\left(\mathbf{p}_{selection} + s\cdot\mathbf{u}_{searching} + t\cdot\mathbf{n}_{guideline}\right)$   (4.11)

Fig. 4.8: Measuring Ion(s) and Ioff(s) bi-directionally to find both-side 'T'-shape junctions; (a) mask for Ion(s) and Ioff(s), (b) in uguideline direction, (c) in −uguideline direction.

Searching for the 'T'-shape junction between the guideline and a separating line-segment detects the position of the separating line-segment. The average intensity on the guideline marking, Ion(s), is measured as in equation (4.10), and the average intensity of the neighbouring region outward from the camera, Ioff(s), is measured as in equation (4.11). Here, usearching is either uguideline or −uguideline according to the search direction. Fig. 4.8(a) depicts the procedure of measuring Ion(s) and Ioff(s), and Fig. 4.8(b) and (c) show the measured Ion(s) and Ioff(s) in both directions. It can be observed that Ioff(s) is similar to Ion(s) only around the 'T'-shape junction. Therefore, the location of the junction can be detected by thresholding the ratio of Ioff(s) to Ion(s), named Lseparating(s). Fig. 4.9(a) and (b) show the detected junctions and Fig. 4.9(c) shows the recognized target parking slot.

Fig. 4.9: Lseparating(s) can find separating line-segments irrespective of local intensity variation; (a) in uguideline direction, (b) in −uguideline direction, (c) recognized target.
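The bi-directional junction search reduces to scanning s and comparing the two window averages. The sketch below is illustrative; sample(p) stands for a bilinear intensity lookup in the bird's eye view image, the window limits follow the reconstruction of equations (4.10)-(4.11) above, and the ratio threshold is a placeholder.

```python
import numpy as np

def find_t_junction(sample, p_sel, u_search, n_guide, W,
                    max_s=200, ratio_th=0.7):
    """Scan along u_search from p_selection; return the first s where
    L_separating(s) = I_off(s)/I_on(s) exceeds ratio_th (the junction)."""
    for s in range(max_s):
        base = p_sel + s * u_search
        on = np.mean([sample(base + t * n_guide)
                      for t in range(0, W // 2 + 1)])        # eq. (4.10)
        off = np.mean([sample(base + t * n_guide)
                       for t in range(W // 2 + 1, 3 * W // 2 + 1)])  # eq. (4.11)
        if off / max(on, 1e-6) > ratio_th:   # L_separating(s) near 1
            return s
    return None   # no junction found within range
```

Running the scan once with u_search = uguideline and once with −uguideline yields the two separating line-segments of the target slot.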

4.3 Two Touch Type

4.3.1 System Structure

The two-touch type method follows the sequence shown in Fig. 4.10. 'Mode Selection' denotes the communication between the system and the driver: through the touch screen-based HMI, the driver notifies the system which kind of parking operation is required. In this phase, the driver sets the parallel/perpendicular selection and the rectangular/11-shape selection. This section deals with two types of parking slot marking to show the feasibility of the proposed method: the rectangular type and the 11-shape type.

Fig. 4.10: The flow chart of two-touch type method.

The system captures a rearview image with a rearward camera installed at the back end of the ego-vehicle and displays the image on the touch screen-based HMI. The driver designates the target parking position by pointing out two seed-points on the touch screen.

After applying fisheye lens-related rectification and bird's eye view image transformation to the region around each seed-point, the target pattern is searched for. The target pattern denotes the particular pattern of parking slot marking that is supposed to exist around a seed-point. If the parking slot marking is of the rectangular type, the target pattern is the T-shape pattern at the intersection between parking slot marking line-segments. If the parking slot marking is of the 11-shape type, the target pattern is the Π-shape pattern at the end of the line-segment contour. In other words, target pattern detection is implemented by template matching of the skeleton and contour of the parking slot marking. Assuming that a target pattern certainly exists, target pattern searching is equivalent to finding the placement of the target pattern that minimizes the matching error.

Once the two target patterns are detected, the coordinates of the entrance of the target parking position are fixed. Therefore, the coordinates of the target parking position can be established and, based on them, the path can be planned.

4.3.2 Initial Direction Establishment

The two seed-points, with which the driver designates the end-points of the separating marking line-segments, are expected to lie close to the entrance of the target parking position. Therefore, the longitudinal direction of the target parking position can be initialized as the direction perpendicular to the line-segment connecting the two seed-points. Fig. 4.11 shows an example of the line-segment connecting two seed-points and the initialized longitudinal direction of the target parking position.

Fig. 4.11: The longitudinal direction of target parking position can be initialized with two seed points.

4.3.3 Target Pattern Detection

1) Target Pattern for Each Marking Type

Once the longitudinal direction of the target parking position is initialized, the target pattern is searched for around each seed-point. By applying fisheye lens-related rectification and the bird's eye view transformation to a certain region around each seed-point, a rectified image is constructed [38]. The seed-points are assumed to be located on the ground surface; therefore, the bird's eye view transformation can be performed with the homography between the camera installed at the back end of the ego-vehicle and the ground surface. As only a small region around each seed-point, 1m×1m in our case, is transformed into the bird's eye view image and no interpolation is used, memory consumption and computational load remain small.

Fig. 4.12 shows the rectified images around the end-point of the parking slot marking line-segment for the two types of pattern. Fig. 4.12(a) gives a T-shape example: as the line separating the parking area from the roadway and a line-segment separating the parking slots meet perpendicularly, the region around the cross-point can be modeled as a T-shape pattern. Fig. 4.12(b) shows the rectified image around the end-point of a parking slot marking line-segment of the 11-shape type, which has no line separating the parking slots from the roadway. In this case, if the central line of the marking alone is used, the position of the target pattern in the longitudinal direction cannot be determined with certainty. Therefore, in the case of 11-shape type parking slot markings, the Π-shape of the contour of the line-segment is selected as the target pattern.

Fig. 4.12: Rectified image for two types of parking slot markings; (a) rectified image in the case of rectangular type, (b) rectified image in the case of 11-shape type.

2) Distance Transform-Based Target Pattern Detection

If it is assumed that there is one target pattern in a rectified image, target pattern detection is equivalent to finding the optimal coordinates and orientation of the target pattern. Target pattern detection first constructs the intensity histogram of the rectified image and then finds clusters in the histogram. If the number of clusters is fixed to a sufficiently large value, e.g. four in our case, the pixel values are over-segmented and the cluster with the largest intensity value corresponds to the marking region. Although this marking segmentation is robust to external disturbances such as asphalt texture, it has one drawback: the contour of the marking region contains a lot of noise. Therefore, as target pattern detection cannot be implemented by syntactic approaches, the proposed method uses genetic algorithm (GA)-based optimization, which minimizes the error between the distance transform of the rectified image and the target pattern template.

3) T-Shape Target Pattern for Rectangular Type

In general, as parking slot markings are drawn in a light colour on a dark-coloured ground surface, the rectified image can be segmented into two kinds of region. However, considering the effect of shadows beneath vehicles and asphalt texture, the intensity histogram is over-clustered into four clusters and the brightest cluster is regarded as the pixels belonging to the parking slot marking. Although over-clustering produces a noisy contour of the parking slot marking, it guarantees that the extracted pixels surely belong to the marking. In the case of the T-shape target pattern, a binary morphological operation extracts the skeleton from the rectified image. Where Fig. 4.13(a) is a rectified image, (b) shows the segmentation result, (c) shows the extracted marking region, and Fig. 4.13(d) shows the extracted skeleton of (c).

Fig. 4.13: T-shape target pattern detection.

After the skeleton is extracted, its distance transform is constructed as shown in Fig. 4.13(e). In the case of the rectangular type, the target pattern template consists of three line-segments of length L, as shown in Fig. 4.13(f). The error of a specific hypothesis, i.e. central coordinates and orientation, is defined as the summation of the distance values sampled along the template line-segments at a specified interval when the target pattern template is overlaid onto the distance map according to the hypothesis.

Target pattern detection is thus an optimization problem minimizing the error of the placement of the target pattern template, (x, y, θ), where (x, y) denotes the central coordinates and θ the orientation of the template. The proposed method solves this problem with a genetic algorithm (GA). An individual consists of three genes corresponding to the placement parameters of the target pattern template, and the fitness function is defined as the error of the placement with respect to the skeleton's distance transform. To enhance the efficiency of the GA, θ is restricted to a range around the initial longitudinal direction of the target parking position, and the central coordinates of the template are initialized with the cross-points of the skeleton, depicted by '*' markings in Fig. 4.13(f). The GA is performed with a population size of 200 and a maximum of 100 generations. Fig. 4.13(g) and (h) show the detected T-shape target pattern on the distance map and on the rectified image, respectively.
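The fitness the GA evaluates is simply the template-over-distance-map error. The sketch below computes this error for a T-shape hypothesis; the exact template geometry, sampling interval, and out-of-image penalty are illustrative, and any GA library (with the 200×100 configuration above) can drive the minimization.

```python
import numpy as np

def placement_error(dist_map, x, y, theta, L, step=2):
    """Sum of distance-transform values sampled along a T-shape template
    of three length-L line-segments placed at (x, y) with orientation
    theta. Smaller is better; a GA searches over (x, y, theta)."""
    h, w = dist_map.shape
    c, s = np.cos(theta), np.sin(theta)
    # one stem along (c, s) plus the two halves of the crossbar
    arms = [(c, s), (-s, c), (s, -c)]
    err = 0.0
    for dx, dy in arms:
        for t in range(0, int(L), step):
            px, py = int(round(x + t * dx)), int(round(y + t * dy))
            if 0 <= px < w and 0 <= py < h:
                err += dist_map[py, px]
            else:
                err += dist_map.max()   # penalize leaving the image
    return err
```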

4) Π-Shape Target Pattern for 11-Shape Type

As in T-shape target pattern detection, four clusters are detected in the intensity histogram of the rectified image, as shown in Fig. 4.14(b), and the pixels belonging to the brightest cluster are regarded as parking slot marking, as shown in Fig. 4.14(c). In the case of Π-shape target pattern detection, the contour of the detected parking slot marking is extracted by a morphological operation, as shown in Fig. 4.14(d). In this case, the target pattern is located at the end of the contour.

Fig. 4.14: Π-shape target pattern detection.

Detecting the Π-shape target pattern is difficult because the width of the parking slot marking is variable; in particular, the position in the longitudinal direction is hard to determine. To overcome this difficulty, a modified distance map is introduced, which considers two aspects of the line-segment: its contour and its end-point. The modified distance map is defined as the product of the contour's distance map and the end-point's distance map, spatially limited to the detected parking slot marking. Fig. 4.14(e) is the distance map of the contour and (f) is the distance map with respect to the end-point of the parking slot marking. Fig. 4.14(g) is the spatially limited distance map with respect to the end-point and (h) is the modified distance map. It is noteworthy that in the modified distance map the end-point of the marking region has a smaller value than the rest of the parking slot marking.

As the width of the parking slot marking is not fixed, the target pattern consists of three line-segments: two parallel line-segments of length L and one line-segment of length w connecting the end-points of the two parallel line-segments perpendicularly. A hypothesis therefore consists of central coordinates, orientation, and width. The error corresponding to a hypothesis is the summation of the modified distance values when the target pattern template is overlaid on the modified distance map according to the hypothesis, as shown in Fig. 4.14(i). Like the T-shape target pattern, the Π-shape target pattern is detected by a GA with the same configuration. Fig. 4.14(j) and (k) show the detection result. Although the orientation of the detected target pattern is slightly wrong, this is compensated during the target parking position establishment phase.
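The modified distance map combines the two cues multiplicatively. A minimal sketch with OpenCV follows, assuming the marking, contour, and end-point pixels are available as binary masks; the masking value used outside the marking region is a placeholder.

```python
import cv2
import numpy as np

def modified_distance_map(marking_mask, contour_mask, endpoint_mask):
    """Product of the contour's distance map and the end-point's distance
    map, spatially limited to the detected marking region.
    All masks are uint8 images with feature pixels set to 255."""
    # distanceTransform measures the distance to the nearest zero pixel,
    # so the feature masks are inverted first.
    d_contour = cv2.distanceTransform(cv2.bitwise_not(contour_mask),
                                      cv2.DIST_L2, 3)
    d_end = cv2.distanceTransform(cv2.bitwise_not(endpoint_mask),
                                  cv2.DIST_L2, 3)
    modified = d_contour * d_end
    modified[marking_mask == 0] = modified.max()  # limit to marking region
    return modified
```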

4.3.4 Target Parking Position Establishment

If the target patterns on the two separating line-segments are recognized, the target parking position can be established. Fig. 4.15(a) shows the two seed-points input by the driver on the touch screen and the detection results of the target patterns. Since the touch screen is small and the driver's posture is constrained, it is impossible to designate seed-points exactly on the target patterns. However, as the proposed method searches for the target patterns in the region around the seed-points, the target patterns can be designated in spite of the noisy driver input. In Fig. 4.15(a), it is noteworthy that the target patterns are recognized exactly even though the driver's seed-points are distant from them. Therefore, the proposed method helps the driver designate the target patterns without excessive effort; this is the main difference between the proposed method and fully manual designation methods.

The target parking position is established such that one longitudinal side of the rectangular target position lies on the line connecting the central points of the two recognized target patterns, and the centre of the target position is equally distant from the two target patterns. The rectangle of the target parking position has the same length and width as the ego-vehicle. Even when the detected orientation of each target pattern contains some error, establishing the target parking position from the two recognized target patterns, located at the two ends of the parking slot entrance, makes the position more accurate and robust. Fig. 4.15(b) shows the resulting established target parking position.

Fig. 4.15: Target pattern detection result and target parking position establishment result.

4.4 Full-automatic Markings Recognition-Based Method

4.4.1 Bird’s Eye View Edge Image

The edge image is generated from the bird's eye view image using the Sobel edge detector. Each pixel of the edge image is the summation of the Sobel horizontal mask response and the Sobel vertical mask response, as in (4.12), where E(x,y) denotes a pixel value of the edge image and B(x,y) a pixel value of the bird's eye view image. The resulting edge image is a binary image acquired by applying a threshold to the calculated E(x,y). Fig. 4.16 shows a bird's eye view image and the resulting edge image.

$E(x,y) = \left|\sum_{i=-1}^{1}\sum_{j=-1}^{1} Sobel_{vertical}(i,j)\,B(x+i,y+j)\right| + \left|\sum_{i=-1}^{1}\sum_{j=-1}^{1} Sobel_{horizontal}(i,j)\,B(x+i,y+j)\right|$   (4.12)

Fig. 4.16: Bird’s eye view image and edge image.

4.4.2 Hough Transform of Edge Image

Parking slot markings consist of several marking line-segments, each of which appears in the edge image as a parallel line-segment pair separated by a fixed distance, corresponding to the two side borders of the marking line-segment. It is noticeable that a parallel line-segment pair with fixed distance forms a characteristic pattern in Hough space: the two peaks corresponding to the line-segment pair have almost the same coordinate on the orientation axis and are separated by the fixed distance on the distance axis. Furthermore, the two peaks have nearly the same height.

The Hough transform is one of the most popular methods for detecting lines in a binary image. A pixel (x,y) of the binary image is transformed into a set of parameters (θ,d), each of which corresponds to a line passing through that pixel. If all pixels of the binary image are transformed into their corresponding parameter sets and the contributions of the parameter sets are accumulated in Hough space, a line in the binary image forms a peak in Hough space. Therefore, peak detection in Hough space recognizes lines in the binary image [59].

In general, the Hough transform uses the normal vector direction φ and distance ρ as the Hough space axes. In this research, for easier interfacing with the other components of the vision system, somewhat different axes are used: the orientation angle θ instead of the normal vector direction φ, and the signed distance d instead of the positive distance ρ. Thanks to the new axes, two lines to be paired that intersect the d-axis with different signs still have the same orientation. This contrasts with the general axes, with which paired lines passing above and below the origin have normal vector directions displaced by π. With the new axes, the distance between two lines can be calculated simply by subtracting their signed distances d, whereas with the general axes the calculation must consider the range of normal vector directions. Equation (4.13) shows the definition of θ and d.

$y = a\cdot x + b = \tan\theta\cdot x + b, \qquad d = b\cdot\cos\theta$   (4.13)
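The conversion behind equation (4.13) is a two-liner; the sketch below shows it, together with the property that motivates the axes: parallel lines share θ, so pairing reduces to a 1D search along d.

```python
import numpy as np

def line_to_theta_d(a, b):
    """Map a line y = a*x + b to the (theta, d) axes of equation (4.13):
    theta is the orientation angle (a = tan(theta)) and d = b*cos(theta)
    is the signed distance of the line from the origin."""
    theta = np.arctan(a)          # orientation in (-pi/2, pi/2)
    d = b * np.cos(theta)         # signed: parallel lines differ only in d
    return theta, d

# Two parallel marking borders separated by width W map to the same theta
# and to signed distances differing by exactly W.
```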

If there are many lines in the edge image, there are correspondingly many peaks in Hough space. Because the height of a peak in Hough space depends on the length of the corresponding line-segment, it is impossible to detect valid peaks with a fixed threshold. Furthermore, because the contributions of several lines overlap in Hough space, interference between peaks is unavoidable [60]. Various methods have been developed to overcome these difficulties; among them, methods using the 'butterfly pattern' provide a consistent mathematical understanding of the Hough transform and show robust detection performance [59]. The expression 'butterfly pattern' comes from the fact that the distribution around a peak looks like a butterfly. Using the butterfly pattern, it is also possible to detect the start point, end point, and thickness of a line-segment. However, because analyzing the distributions around all peaks requires an enormous computational load and time, such methods are hard to apply to real-time applications.

To overcome the drawback of the butterfly pattern methods, a distribution pattern specialized for parallel line-segment pairs is devised. Grasping the characteristics of this distribution pattern reduces the search range, which eliminates the effect of noise and improves the computational speed. Fig. 4.17 shows that a parallel line-segment pair with fixed distance W is transformed into a characteristic distribution pattern in Hough space: the two peaks corresponding to the respective line-segments have the same θ value and are separated by W along the d axis. Furthermore, the heights of the peaks are equal and there is a deep valley between them.

Fig. 4.17: Hough transform of a parallel line pair; (a) edge image, (b) Hough space.

4.4.3 Marking Line Recognition

1D (one-dimensional) filtering and clustering in Hough space are developed to detect peak pairs whose two peaks have the same θ value and a fixed between-distance W along the d axis. There is no general characteristic of the peak pair along the θ axis, because the wing-width and starting point of the butterfly pattern vary with the location and length of the line-segment. Therefore, 1D filtering along the d axis, which incorporates the a priori knowledge about the peak pair in Hough space, detects peak pair candidates. Subsequently, candidate clusters are detected by morphological dilation and connected component search. It is verified that the centroid of each cluster is a good estimate of the marking line parameters corresponding to the peak pair.

Equation (4.14) and Fig. 4.18 show the structure of the designed 1D filter. HS(θ,d) is the value of the Hough space element at (θ,d). The designed filter is based on the fact that if the investigated coordinates (θ,d) in Hough space correspond to a valid marking line-segment, HS(θ,d), HS(θ,d−W), and HS(θ,d+W) should be valleys while HS(θ,d−W/2) and HS(θ,d+W/2) should be peaks. PA and PB denote the values at the two peakness-testing coordinates. To reduce the effect of the assumed width W and improve detection robustness, the peakness test takes the maximum value within a predefined range along the d axis. S(θ,d) is the result of applying the designed filter. Only when S(θ,d) is positive is L(θ,d) defined, as a likelihood ranging between 0 and 1. When the valley values are all zero, S(θ,d) attains its maximum value PA+PB; therefore, dividing S(θ,d) by PA+PB normalizes L(θ,d) to lie between 0 and 1. In addition, multiplying by the ratio of the minimum peak value to the maximum peak value favours the case where the two peaks have the same height. If L(θ,d) is greater than the threshold θdual peak, the coordinates (θ,d) in Hough space become a marking line-segment candidate.

$S(\theta,d) = P_A + P_B - HS(\theta,d-W) - HS(\theta,d) - HS(\theta,d+W)$

$P_A = \max_{i=-(W/2-1)}^{W/2-1} HS\!\left(\theta,\, d-\tfrac{W}{2}+i\right), \qquad P_B = \max_{i=-(W/2-1)}^{W/2-1} HS\!\left(\theta,\, d+\tfrac{W}{2}+i\right)$

$L(\theta,d) = \begin{cases} \dfrac{S(\theta,d)}{P_A+P_B}\cdot\dfrac{\min(P_A,P_B)}{\max(P_A,P_B)} & ,\; S(\theta,d) > 0 \\ 0 & ,\; S(\theta,d) \le 0 \end{cases}$

$C(\theta,d) = \begin{cases} 1 & ,\; L(\theta,d) > \theta_{dual\,peak} \\ 0 & ,\; L(\theta,d) \le \theta_{dual\,peak} \end{cases}$   (4.14)

Fig. 4.18: Structure of 1D filter.
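One column of Hough space (fixed θ) suffices to evaluate the filter. The sketch below follows the reconstruction of equation (4.14) above; the peakness search half-range defaults to W/2−1, and the valley indices d−W and d+W are assumed to lie inside the column.

```python
import numpy as np

def dual_peak_likelihood(hs_col, d, W, r=None):
    """L(theta, d) of equation (4.14) on one Hough-space column hs_col
    (fixed theta, indexed by signed distance d). Valleys are expected at
    d-W, d, d+W and peaks near d-W/2 and d+W/2."""
    if r is None:
        r = W // 2 - 1                       # peakness search half-range
    def peak(center):
        lo, hi = max(center - r, 0), min(center + r + 1, len(hs_col))
        return float(hs_col[lo:hi].max())
    pa, pb = peak(d - W // 2), peak(d + W // 2)
    s = pa + pb - hs_col[d - W] - hs_col[d] - hs_col[d + W]
    if s <= 0 or max(pa, pb) == 0:
        return 0.0
    # normalize by PA+PB and favour equal-height peak pairs
    return (s / (pa + pb)) * (min(pa, pb) / max(pa, pb))
```

Thresholding the returned likelihood with θdual peak yields C(θ,d), the marking line-segment candidates.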

Although most candidates belonging to one line-segment form a cluster in Hough space, some candidates remain disconnected from the cluster, depending on the thickness and length of the line-segment. To compensate for this drawback of binarization, dilation with a 5×5 rectangular kernel is used so that candidate clusters can be detected robustly. Fig. 4.19(a) shows the detected candidate clusters and their centroids, and Fig. 4.19(b) and (c) show the lines designated by the detected centroids overlaid on the edge image and the undistorted image. This confirms that the designed 1D filter successfully detects peak pairs and that the centroids of the candidate clusters are good estimates of the line parameters.

Fig. 4.19: Recognized marking lines; (a) detected peak pairs, (b) in edge image, (c) in undistorted image.

4.4.4 Line-segment Recognition

A marking line-segment is recognized as a section of a detected marking line that satisfies the condition of a marking line-segment. Here, the a priori knowledge used in peak pair detection is used again: a marking line-segment consists of two parallel line-segments separated by the fixed distance W in the edge image. Introducing hysteresis into the procedure improves the system's robustness against noise. To determine the start-point, the likelihood should be greater than the starting-check threshold continuously for W/2; conversely, to determine the stop-point, the likelihood should be less than the stopping-check threshold for W/2. Thresholding with hysteresis thus prevents short-term variations of the likelihood from disturbing the recognition of the start-point and stop-point.

Fig. 4.20: Recognized marking line-segments.
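The hysteresis rule is easy to state as a small state machine over the likelihood samples taken along the detected line; the sketch below is illustrative, with run = W/2 in the text.

```python
def segments_with_hysteresis(likelihood, start_th, stop_th, run):
    """Extract (start, stop) index pairs along a marking line: a segment
    starts after `run` consecutive samples above start_th and stops after
    `run` consecutive samples below stop_th."""
    segments, start, above, below = [], None, 0, 0
    for s, v in enumerate(likelihood):
        if start is None:
            above = above + 1 if v > start_th else 0
            if above == run:
                start, below = s - run + 1, 0
        else:
            below = below + 1 if v < stop_th else 0
            if below == run:
                segments.append((start, s - run + 1))
                start, above = None, 0
    if start is not None:                  # segment runs to the end
        segments.append((start, len(likelihood) - 1))
    return segments
```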

4.4.5 Guideline Recognition

Among the recognized marking line-segments of the parking slot markings, the guideline is selected using the assumption that it is near the camera and likely to be normal to the gaze direction. The guideline is the most important line-segment, serving as the reference of parking slot recognition, because it is the border between the parking slots and the roadway.

Fig. 4.21: Distance between a point and a line-segment; (a) when PC is on the line-segment, (b) when PC is outside the line-segment.

Depending on the foot of the perpendicular, the distance between a point and a line-segment is defined as one of two values: the distance between the point and the line extending the line-segment, or the minimum of the two distances between the point and the two endpoints of the line-segment. In other words, if the foot of the perpendicular lies on the line-segment, as shown in Fig. 4.21(a), the distance is defined as the distance between the point and the foot of the perpendicular; in the opposite case, shown in Fig. 4.21(b), the minimum of the two distances between the point and the endpoints of the line-segment is selected. The foot of the perpendicular PC (XC,YC) is the cross-point of two lines: the line normal to the given line-segment S passing through the given point P (XP,YP), and the line (y=ax+b) extending the given line-segment S. If the two endpoints of the line-segment are E1 and E2, whether the foot of the perpendicular lies on the line-segment can be determined by the sign of the inner product of the two vectors PCE1 and PCE2. Equation (4.15) shows the equations of the extending line and the normal line, equation (4.16) gives the cross-point PC as the solution of the two line equations, and equation (4.17) defines the distance between a point P and a line-segment S. In this application, the given point P is always the camera position, and the two endpoints of the given line-segment S are the previously defined Pstart and Pstop, with Pstart nearer to the camera.

$y = a\cdot x + b, \qquad y = -\frac{1}{a}(x - x_P) + y_P \;\Leftarrow\; y - y_P = -\frac{1}{a}(x - x_P)$   (4.15)

$P_C(x_C, y_C) = \left(\frac{x_P + a(y_P - b)}{a^2+1},\; \frac{a\,x_P + a^2 y_P + b}{a^2+1}\right)$   (4.16)

$\operatorname{distance}(P,S) = \begin{cases} \operatorname{distance}(P,P_C) & ,\; \overrightarrow{P_C P_{start}}\cdot\overrightarrow{P_C P_{stop}} < 0 \\ \min\!\left(\operatorname{distance}(P,P_{start}),\, \operatorname{distance}(P,P_{stop})\right) & ,\; \overrightarrow{P_C P_{start}}\cdot\overrightarrow{P_C P_{stop}} > 0 \end{cases}$   (4.17)

$D(P,S) = \operatorname{distance}(P,S) \times \left(\mathbf{u}(P,P_{start})\cdot\mathbf{u}(P_{start},P_{stop})\right)$   (4.18)

Fig. 4.22: Guideline is likely to be normal to the gaze.

The modified distance D(P,S) is defined to reflect the assumption that a guideline tends to be normal to the gaze direction; consequently, it makes guideline detection reliable irrespective of the unstable location of the start-point Pstart. How nearly normal a marking line-segment is to the gaze direction can be measured by the inner product of two unit vectors: the unit vector from the camera position P to the start-point Pstart, u(P,Pstart), and the unit vector from the start-point Pstart to the stop-point Pstop, u(Pstart,Pstop). The modified distance from a point P to a line-segment S, D(P,S), is defined as in equation (4.18), and the line-segment with the minimum modified distance is selected as the guideline. Fig. 4.22 shows an example: although the start-point of line-segment S2 is nearer than the start-point of guideline S1, the direction of S2 is similar to the gaze direction, so S2 has a greater modified distance than S1. Fig. 4.23 shows the detected guideline.

Fig. 4.23: Recognized guideline.
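Guideline selection thus needs only the segment distance and the gaze-alignment weight. A minimal sketch follows the reconstruction of equations (4.15)-(4.18) above; the foot-of-perpendicular test is written parametrically, which is equivalent to the inner-product sign test of (4.17).

```python
import numpy as np

def segment_distance(p, e1, e2):
    """Point-to-line-segment distance, equations (4.15)-(4.17)."""
    p, e1, e2 = (np.asarray(v, float) for v in (p, e1, e2))
    seg = e2 - e1
    t = np.dot(p - e1, seg) / np.dot(seg, seg)
    if 0.0 <= t <= 1.0:                 # foot of perpendicular on segment
        return float(np.linalg.norm(p - (e1 + t * seg)))
    return float(min(np.linalg.norm(p - e1), np.linalg.norm(p - e2)))

def modified_distance(cam, p_start, p_stop):
    """D(P, S) of equation (4.18): the segment distance weighted by the
    gaze alignment, which is near zero for segments normal to the gaze."""
    cam, p_start, p_stop = (np.asarray(v, float)
                            for v in (cam, p_start, p_stop))
    u_gaze = (p_start - cam) / np.linalg.norm(p_start - cam)
    u_seg = (p_stop - p_start) / np.linalg.norm(p_stop - p_start)
    return segment_distance(cam, p_start, p_stop) * np.dot(u_gaze, u_seg)
```

The candidate with the minimum D(P,S) is selected as the guideline.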

4.4.6 Dividing Marking Line-Segment

Using the difference between the intensity of pixels on the marking and the intensity of pixels off the marking along the guideline, the marking line-segments dividing the parking slots can be detected. Parking slot markings consist of one guideline separating the parking slots from the roadway and dividing marking line-segments normal to the guideline. By detecting the positions where the intensity difference becomes definitely small, the 'T'-shape junctions between the guideline and the dividing marking line-segments can be recognized successfully. Furthermore, because the 'T'-shape junctions are searched along the whole line extending the guideline, additional dividing line-segments that were missed during peak pair detection because they are distant and blurred can still be detected.

Fig. 4.24: Recognized dividing marking line-segments.

Chapter 5

Free Space-Based Method

5.1 Binocular Stereo-Based Method

Nico Kaempchen developed stereo vision-based pose estimation of parking lots, which uses a feature-based stereo algorithm, a template matching algorithm on the depth map, and 3D fitting to a planar surface model of the vehicle by the ICP (Iterative Closest Point) algorithm [37]. That vision system uses the disparity of vehicles but ignores all the information of the parking site markings.

This section explains the developed binocular stereo vision-based method of localizing a free parking site for an automatic parking system [38]. The proposed method is based on feature-based stereo matching and separates the parking site markings by a plane surface constraint. The location of the parking site is determined by template matching on the bird's eye view of the parking site markings, which is generated by inverse perspective transformation of the separated parking site markings. The obstacle depth map, generated from the disparity information of adjacent vehicles, can be used to narrow the search range of the parking site centre and orientation. Because template matching is performed within a limited range, the search speed increases effectively and the search result is robust to noise, including the poor visual conditions mentioned previously. Using both the obstacle depth map and the parking site markings is justified because typical parking sites in urban areas are constructed to nation-wide standards.

5.1.1 Stereo Vision System

Fig. 5.1 shows the stereo camera installed on the test vehicle. Fig. 5.2 is a stereo image of a typical parking site, acquired with Point Grey Research's Bumblebee camera installed at the back end of the test vehicle. Each image has 640×480 resolution and 24-bit colour information. The images are rectified with Point Grey Research's Triclops rectification library [61]. In Fig. 5.2, it can be observed that part of the parking site marking is occluded by an adjacent vehicle and trash, and another part is invisible because of shadow.

Fig. 5.1: Stereo camera installed on test vehicle.

Fig. 5.2: Stereo image; (a) left image, (b) right image.

5.1.2 Pixel Classification

In the case of automotive vision, it is known that vertical edges are sufficient to detect noticeable objects [62]; consequently, stereo matching using only the vertical edges drastically reduces the computational load [63], [64]. Pixel classification investigates the intensity differences between a pixel and its four directly connected neighbours so as to assign the pixel a class reflecting the intensity configuration. It is known that feature-based stereo matching with pixel classes is fast and robust to noise [64]. Equation (5.1) shows that a pixel of a smooth surface is classified as the zero class and a pixel of an edge as a non-zero class. To reduce the effect of the threshold T, histogram equalization or an adaptive threshold can be used.

$d(i) = \begin{cases} 1 & ,\; g(i) - g(x) > T \\ 2 & ,\; g(i) - g(x) < -T \\ 0 & ,\; \text{else} \end{cases}$, where $g$: a grey value and $i$ ranges over the 4 neighbours of pixel $x$; the pixel class is the combination of the four values $d(i)$.   (5.1)

Fig. 5.3 shows the result of the pixel classification: 13.7% of all pixels are classified as horizontal edge and 7.8% as vertical edge.
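A vectorized form of the classification is straightforward. The sketch below packs the four ternary digits d(i) into a single class code; the packing order and threshold value are assumptions for illustration.

```python
import numpy as np

def pixel_classes(gray, T=12):
    """Classify each pixel by the signed intensity differences to its four
    directly connected neighbours (equation (5.1)). Returns a base-3 code
    per pixel: 0 = smooth surface, non-zero = edge."""
    g = gray.astype(np.int32)
    cls = np.zeros_like(g)
    shifts = [(-1, 0), (1, 0), (0, -1), (0, 1)]   # up, down, left, right
    for k, (dy, dx) in enumerate(shifts):
        # np.roll wraps at the borders; a real implementation would pad
        nb = np.roll(np.roll(g, dy, axis=0), dx, axis=1)
        d = np.zeros_like(g)
        d[nb - g > T] = 1
        d[nb - g < -T] = 2
        cls += d * (3 ** k)   # pack the four ternary digits into one class
    return cls
```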

Fig. 5.3: Pixel classification result; (a) horizontal edge, (b) vertical edge.

5.1.3 Feature Based Stereo Matching

Stereo matching is performed only on pixels classified as vertical edge. Furthermore, it is composed of a step-by-step test sequence: class comparison, class similarity, colour similarity, and maximum similarity detection. Only correspondence candidates passing one test step are investigated in the next.

Assuming that the vertical alignment of the Bumblebee is correct, the search range for a pixel is limited to a horizontal line with a displacement of −35 ~ 35. First, the correspondence test is performed on pixels with the same class as the investigated pixel. The class similarity, defined by equation (5.2), measures how similar the candidate pixel is to the investigated pixel in the sense of a 3×3 class window. The colour similarity, defined by equation (5.3), measures how similar the candidate pixel is to the investigated pixel in the sense of a 5×5 colour window. The total similarity, defined by equation (5.4), is the product of the class similarity and the colour similarity. If the highest total similarity is lower than a certain threshold, the investigated pixel fails to find a corresponding point and is ignored.

$ClassSimilarity(x,y,s) = \frac{1}{3\times 3}\sum_{u=-1}^{1}\sum_{v=-1}^{1} f\!\left(Class_{left}(x+u,\,y+v),\; Class_{right}(x+u+s,\,y+v)\right)$   (5.2)

where $f(Class_{left}, Class_{right}) = \begin{cases} 0 & ,\; Class_{left} \ne Class_{right} \\ 1 & ,\; Class_{left} = Class_{right} \end{cases}$

$ColorSimilarity(x,y,s) = 1 - \frac{ColorSSD(x,y,s)}{256 \times 5 \times 5}$   (5.3)

where $ColorSSD(x,y,s) = \sum_{u=-2}^{2}\sum_{v=-2}^{2}\Big[\left(R_{left}(x+u,y+v) - R_{right}(x+u+s,y+v)\right)^2 + \left(G_{left}(x+u,y+v) - G_{right}(x+u+s,y+v)\right)^2 + \left(B_{left}(x+u,y+v) - B_{right}(x+u+s,y+v)\right)^2\Big]$

$Similarity(x,y,s) = ClassSimilarity(x,y,s) \times ColorSimilarity(x,y,s)$   (5.4)

Fig. 5.4 shows the stereo matching result for one pixel. The graph over the right image is the total similarity of the pixels within the search range; the pixel with the highest total similarity is the corresponding point.

Fig. 5.4: Stereo matching result; (a) left image, (b) right image.

5.1.4 Road/Object Separation

Generally, pixels on the road surface satisfy the plane surface constraint, i.e. the y coordinate of a pixel is in a linear relationship with the disparity of the pixel, d(x,y), as in equation (5.5) [64]. Consequently, the pixels of obstacles, e.g. adjacent vehicles, do not follow the constraint. Therefore, the disparity map resulting from stereo matching can be separated into two disparity maps: the disparity map of the parking site markings and the disparity map of obstacles.

$d(x,y) = \frac{B f_x}{H f_y}\left(y\cos\alpha + f_y\sin\alpha\right), \quad y > f_y\tan\alpha$   (5.5)

where $B$: baseline, $H$: height, $f_x, f_y$: focal lengths, $\alpha$: tilt angle

Fig. 5.5: Road/object separation result; (a) obstacle disparity map, (b) parking site marking disparity map.

The distance between the camera and an object, Zworld, is inversely proportional to the disparity, as in equation (5.6). The plane surface constraint mentioned previously can be simplified as in equation (5.7), where P1 and P2 are constant parameters of the camera configuration. Consequently, the relationship between the y coordinate of a pixel on the road surface and Zworld can be summarized as in equations (5.8) and (5.9), and the relationship between Xworld and the x coordinate of a pixel is given by triangulation as in (5.10).

$z_{world} = \frac{B\cdot f_x}{d(x,y)}$   (5.6)

$d(x,y) = P_1\,y + P_2$, where $P_1 = \frac{B f_x}{H f_y}\cos\alpha$, $P_2 = \frac{B f_x}{H}\sin\alpha$   (5.7)

$z_{world} = \frac{B\cdot f_x}{P_1\,y + P_2}$   (5.8)

$y = \frac{1}{P_1}\left(\frac{B\cdot f_x}{z_{world}} - P_2\right)$   (5.9)

$X_{world} : Z_{world} = x : f_x \;\Rightarrow\; x = \frac{f_x\cdot X_{world}}{Z_{world}}$   (5.10)

Using these relationships, the disparity map of the parking site markings is transformed into the bird's eye view of the parking site markings. The bird's eye view is constructed by copying values from the disparity map to the ROI (Region Of Interest) in Xworld and Zworld. Pixels whose colour differs from the parking site marking are ignored, which removes the noise of textures such as asphalt and grass.


Fig. 5.6: Bird’s eye view of parking site marking.
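A minimal sketch of this projection, assuming a disparity map and a binary marking mask as NumPy arrays and a principal point at the image center (units, ROI parameters, and names are illustrative):

```python
import numpy as np

def birds_eye_view(disparity, marking_mask, B, f, roi_x, roi_z, cell_cm=1.0):
    """Project marking pixels onto the (X_world, Z_world) grid via (5.6), (5.10).
    roi_x, roi_z: (min, max) extents of the ROI in the same unit as B."""
    h, w = disparity.shape
    nx = int((roi_x[1] - roi_x[0]) / cell_cm)
    nz = int((roi_z[1] - roi_z[0]) / cell_cm)
    bev = np.zeros((nz, nx), dtype=np.uint16)
    cx = w / 2.0                               # assume principal point at image center
    ys, xs = np.nonzero(marking_mask & (disparity > 0))
    z = B * f / disparity[ys, xs]              # (5.6)
    x_world = (xs - cx) * z / f                # (5.10), rearranged
    ix = ((x_world - roi_x[0]) / cell_cm).astype(int)
    iz = ((z - roi_z[0]) / cell_cm).astype(int)
    ok = (ix >= 0) & (ix < nx) & (iz >= 0) & (iz < nz)
    np.add.at(bev, (iz[ok], ix[ok]), 1)        # accumulate hits per grid cell
    return bev
```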

The obstacle depth map is constructed by projecting the disparity information of the pixels that do not satisfy the plane surface constraint. The world coordinate point (X_world, Z_world) corresponding to a pixel in the obstacle disparity map can be determined by equations (5.6) and (5.10) [63]. Because the stereo matching does not implement sub-pixel resolution, for the sake of real-time performance, a pixel in the disparity map contributes to a vertical array in the depth map. Each element of the depth map accumulates the contributions of the corresponding disparity map pixels. By eliminating the elements of the depth map under a certain threshold, noise in the disparity map can be removed; in general, the noise of the disparity map does not make a peak on the depth map.


Fig. 5.7: Obstacle depth map.
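Under the same assumptions, the obstacle depth map can be sketched by reusing the projection helper above and suppressing weakly supported cells; min_hits is an illustrative threshold:

```python
def obstacle_depth_map(disparity, obstacle_mask, B, f, roi_x, roi_z,
                       cell_cm=1.0, min_hits=3):
    """Accumulate obstacle pixels into an (X_world, Z_world) grid, then drop
    cells supported by too few pixels: disparity noise rarely builds a peak."""
    depth = birds_eye_view(disparity, obstacle_mask, B, f, roi_x, roi_z, cell_cm)
    depth[depth < min_hits] = 0
    return depth
```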

5.1.5 Location of Parking Site

A free parking site is localized using both the obstacle depth map and the bird's eye view of the parking site marking. The localization algorithm consists of three steps: guideline finding, obstacle histogram construction, and template matching.

The guideline, which is the front line of the parking area, is found by the Hough transform of the bird's eye view of the parking site marking. The pose of the ego-vehicle is limited to -40~40 degrees with respect to the longitudinal direction of the parking area. Therefore, the peak of the Hough transform in this angular range is the guideline, as depicted in Fig. 5.8.

Fig. 5.8: Recognized guideline.

The obstacle histogram determines the free range of the guideline. Adjacent vehicles are expected to be located in the direction orthogonal to the guideline, because the parking area is divided in such a way. Therefore, accumulating the occurrence of meaningful depth map points in that direction can separate occupied parking sites from free parking sites. The obstacle histogram is implemented as an integer array whose size equals the length of the guideline. The inner product between the unit guideline vector and the vector to a meaningful depth map point produces a scalar value, which is used as the index into the histogram array; the array element designated by the index is then incremented. Fig. 5.9 shows the resultant obstacle histogram. It can be observed that the free parking site has very low values in the obstacle histogram.
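A minimal sketch of this accumulation, assuming the meaningful depth-map points and the recognized guideline are already available (names are illustrative):

```python
import numpy as np

def obstacle_histogram(depth_points, line_point, line_dir, length_px):
    """Accumulate depth-map points along the guideline direction.
    depth_points: (N, 2) array of (x, z) obstacle cells; line_point and the
    unit vector line_dir define the recognized guideline."""
    hist = np.zeros(int(length_px), dtype=int)
    # Inner product with the unit guideline vector = position along the guideline.
    idx = np.dot(depth_points - line_point, line_dir).astype(int)
    ok = (idx >= 0) & (idx < len(hist))
    np.add.at(hist, idx[ok], 1)
    return hist
```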

Free space is a continuous portion of the obstacle histogram under a certain threshold and is determined by a bidirectional search from the seed point. The search range of the parking site center in the guideline direction is the central 20% of the free space. The initial guess of the parking site center in the other direction, i.e. orthogonal to the guideline, is the position away from the guideline by half of the template length. The search range in the orthogonal direction is 10 pixels and the angular search range is 10 degrees.

If one of the adjacent parking sites is not occupied, the free space will be too long compared with the width of the parking site template. In this case, the free space is reduced to a range whose length equals the width of the parking site template, measured from the detected obstacle.

Fig. 5.9: Obstacle histogram.

Fig. 5.10: Seed point and search range; (a) seed point designated by driver, (b) search range of parking site center.


The final template matching uses a template consisting of two rectangles derived from the standards for parking site drawing. The template matching measures how many pixels of the parking site marking exist between the two rectangles, i.e. between the inner and outer rectangle. Fig. 5.11(a) shows the result on the bird's eye view of the parking site marking, and Fig. 5.11(b) projects the result onto the bird's eye view of the input image. Because the search range is narrowed by the obstacle depth map, the template matching successfully detects the correct position in spite of stains, blurring, and shadows. Furthermore, template matching, which is the bottleneck of the localization process, consumes little time. The total computational time on a 1 GHz PC is about 400~500 msec. Once the initial position is detected successfully, the next scene needs only template matching with little variation around the previous result.

Fig. 5.11: Detected parking site; (a) result on parking site markings, (b) result on input image.
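The scoring core of this template matching can be sketched as follows, assuming the bird's eye view is a binary NumPy image; the rotation handling and parameter names are illustrative, not the exact implementation:

```python
import numpy as np

def template_score(bev, center, angle, inner_wh, outer_wh):
    """Count marking pixels lying between the inner and outer template
    rectangles placed at `center` with orientation `angle` (radians)."""
    ys, xs = np.nonzero(bev)
    pts = np.stack([xs, ys], axis=1) - np.asarray(center, dtype=float)
    c, s = np.cos(-angle), np.sin(-angle)
    u = pts[:, 0] * c - pts[:, 1] * s      # rotate pixels into the template frame
    v = pts[:, 0] * s + pts[:, 1] * c

    def inside(wh):
        return (np.abs(u) <= wh[0] / 2.0) & (np.abs(v) <= wh[1] / 2.0)

    return int(np.sum(inside(outer_wh) & ~inside(inner_wh)))
```

The detected pose would then be the one maximizing this score over the narrowed search range (±10 pixels, ±10 degrees around the seed).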


5.2 Light Stripe Projection-Based Method

Although some vision systems can find the target position in outdoor situations with bright sunlight, they suffer from other kinds of problems in dark underground parking sites. Dark underground parking sites are very common in urban life, e.g. the parking sites of apartments, department stores, and general downtown buildings. In a dark parking site, feature extraction, which is the basic initial operation of binocular stereo and motion stereo, performs poorly. Furthermore, nearby light bulbs make big moving glints on vehicle surfaces, which disturb stereo matching. Parking slot marking recognition also suffers from the dark illumination conditions and the reflective ground surfaces that are common in indoor parking sites.

This section explains the developed light stripe projection (LSP)-based method, which provides target position designation in dark underground parking sites [39]. By adding a low-cost light plane projector, the system can recognize the 3D information of the parking site. Although the acquired 3D information is limited, we conclude that it is sufficient to find a free parking space. Various experiments show that the proposed method is a good solution.

5.2.1 System Configuration

The proposed system can be implemented simply by installing a light plane projector at the back end of the vehicle, as shown in Fig. 5.12. An NIR (Near Infra-Red) line laser is used as the light plane projector so as not to bother neighboring people. Because the parking assist system should provide a backward image to the driver, the band-pass filter commonly used with an NIR light source cannot be used. Instead, in order to acquire images in the NIR range, the infrared cut filter generally attached to the camera lens is removed. To capture the backward image with a large FOV (Field Of View), a fisheye lens is used. With the radial distortion parameters measured through a calibration procedure, the input image is rectified into an undistorted image.

Fig. 5.12: System configuration.

In order to robustly detect the light stripe without a band-pass filter, the difference image between an image with the light projector on and an image with the light projector off is used. Turning the light plane projector on for a short period, the system acquires a backward image including the light stripe, as shown in Fig. 5.13(a). Turning the light plane projector off, the system acquires a visible-band image, as shown in Fig. 5.13(b). The difference image between them extracts the light stripe irrespective of the surrounding environment, as shown in Fig. 5.13(c). Furthermore, because the light plane projector is turned on only for a short time, eye-safe operation can be guaranteed with a comparatively high-power light source [65].


Fig. 5.13: Rectified difference image; (a) input image with light stripe, (b) input image without light stripe, (c) difference image, (d) rectified result of the difference image.

5.2.2 Light Stripe Detection

Light stripe detection consists of two phases. The first phase sets a threshold adaptively from the intensity histogram of the difference image and detects light stripe pixels having definitely large values. The second phase detects additional light stripe pixels with a line detector whose width changes with respect to the corresponding distance. In the light stripe projection method, the distance of a pixel has a one-to-one relation with its y-coordinate value [66].


Existing light stripe detection applications deal with continuous object surfaces with little distance variance at near distances. Furthermore, because existing applications use an infrared band-pass filter to detect the light stripe, they can ignore the effect of the surrounding environment and easily extract the light stripe image. It can be assumed that light stripe pixels on a continuous surface are neighboring in consecutive image columns. Because our application handles a searching environment consisting of discontinuous surfaces with large distance differences, we had to devise a new stripe detection algorithm.

According to the intensity histogram of the difference image, the Gaussian noise generated by the CMOS image sensor's characteristics and the pixels belonging to the light stripe exist in different ranges. It is found that if the threshold is set to six sigma of the histogram, pixels with explicitly large values in the difference image can easily be separated from the Gaussian noise. Because the light stripe is generated to be orthogonal to the camera X-axis, there is only one position corresponding to the light stripe in a column. If only one peak is detected in a column, the center of the region above the threshold is recognized as the position of the light stripe.
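A minimal sketch of this first phase, where "six sigma of the histogram" is interpreted here as the mean plus six standard deviations of the difference image (an assumption), follows:

```python
import numpy as np

def detect_stripe_columns(diff_img, n_sigma=6.0):
    """Threshold the difference image at n-sigma of its (noise-dominated)
    statistics, then take the center of the above-threshold run in each
    column as the stripe position for that column."""
    thresh = diff_img.mean() + n_sigma * diff_img.std()
    stripe_y = {}
    for x in range(diff_img.shape[1]):
        ys = np.nonzero(diff_img[:, x] > thresh)[0]
        if ys.size:                      # one stripe position per column
            stripe_y[x] = int(ys.mean())
    return stripe_y
```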

The second phase uses a peak detector with an upstanding rectangular shape. By analyzing the relation between the width and the y-coordinate of the detected light stripe, it is found that a light stripe at a near distance generally has a large width and a light stripe at a far distance has a small width. For each y-coordinate, the width of the peak detector is changed accordingly. The filter sums the pixel values inside the rectangle and subtracts the pixel values outside the rectangle. Fig. 5.14 shows an example of a detected light stripe.


Fig. 5.14: Detected light stripe.

5.2.3 Light Stripe Projection Method

Fig. 5.15: Normalization of configuration.

It is possible to assume that the light stripe projector at location L is located at the virtual position L', because the light plane projector makes a light plane, as shown in Fig. 5.15. Normalization of the configuration calculates the baseline b between the camera and the light plane projector L' located on the camera Y-axis, as well as the between-angle α. The height of the camera from the ground surface is c. The distance between the camera and the light plane projector in the direction of the vehicle axis is d. The height of the light plane projector from the ground surface is h. The angle between the camera axis and the ground surface is γ. The angle between the light plane projector and the ground surface is β. Therefore, α = 90° + β - γ.

According to the sine law, b is calculated as below:

b : (c - h - d \tan\beta) = \sin(90° - \beta) : \sin(90° + \beta - \gamma)

\Rightarrow \; b = \frac{(c - h - d \tan\beta) \, \sin(90° - \beta)}{\sin(90° + \beta - \gamma)}

Finding the intersection point between the plane Π and the line Op yields the coordinates of the stripe pixel point P, as shown in Fig. 5.16. Π denotes the plane generated by the light plane projector. The line laser projected onto the object surface forms a light stripe. p (x, y) denotes the point on the image plane corresponding to a point P (X, Y, Z) on the light stripe.

Fig. 5.16: Relation between light plane and light stripe image.

1) The equation of light plane Π


The light plane Π meets the Y-axis at the point P_0 (0, -b, 0). The angle between the plane and the Y-axis is α, and the angle between the plane and the X-axis is ρ. The distance between the camera and P_0, i.e. the baseline b, and the between-angle α are calculated by the configuration normalization.

The normal vector n of the light plane Π is calculated by rotating the normal vector of the XZ plane, (0, 1, 0), by π/2 - α with respect to the X-axis and by ρ with respect to the Z-axis. The equation of the plane can be obtained with the normal vector n in (5.11) and one point on the plane, P_0, as in (5.12).

\mathbf{n} = \begin{pmatrix} \sin\alpha \sin\rho \\ \sin\alpha \cos\rho \\ -\cos\alpha \end{pmatrix}   (5.11)

\mathbf{n} \cdot (\mathbf{X} - \mathbf{P}_0) = 0   (5.12)

2) The equation of Op

The optical center O, a point P on the light stripe, and the corresponding point p on the image plane must lie on one common line in three-dimensional space. With the perspective camera model denoted by (5.13), all points Q on the line Op can be denoted by a parameter k, as in (5.14). Here, f denotes the focal length of the optical system and (x, y) are the coordinates of p.

\frac{X}{x} = \frac{Y}{y} = \frac{Z}{f}   (5.13)

\mathbf{Q} = (k \cdot x, \; k \cdot y, \; k \cdot f)   (5.14)


3) 3D recognition from the intersecting-point

Because a point P on the light stripe is the intersection point of the light plane Π and the line Op, it must satisfy (5.12) and (5.14) simultaneously. By substituting (5.14) and (5.11) into (5.12), the parameter k can be solved as in (5.15).

k = \frac{b \cdot \tan\alpha \cdot \cos\rho}{f - \tan\alpha (x \cdot \sin\rho + y \cdot \cos\rho)}   (5.15)

By substituting (5.15) into (5.14), the coordinates of P can be calculated as in (5.16), (5.17), and (5.18) [66].

X = \frac{x \cdot b \cdot \tan\alpha \cdot \cos\rho}{f - \tan\alpha (x \cdot \sin\rho + y \cdot \cos\rho)}   (5.16)

Y = \frac{y \cdot b \cdot \tan\alpha \cdot \cos\rho}{f - \tan\alpha (x \cdot \sin\rho + y \cdot \cos\rho)}   (5.17)

Z = \frac{f \cdot b \cdot \tan\alpha \cdot \cos\rho}{f - \tan\alpha (x \cdot \sin\rho + y \cdot \cos\rho)}   (5.18)

4) Transform to vehicle coordinate system

The vehicle coordinate system is defined such that the foot of the perpendicular from the optical center O to the ground surface is the origin and the XZ plane is parallel to the ground surface. By rotating a point P in the camera coordinate system by -γ with respect to the X-axis and translating it by the camera height c in the direction of the Y-axis, the corresponding point P' in the vehicle coordinate system can be calculated as in (5.19).


\mathbf{P}' = \begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos\gamma & \sin\gamma \\ 0 & -\sin\gamma & \cos\gamma \end{pmatrix} \begin{pmatrix} X \\ Y \\ Z \end{pmatrix} + \begin{pmatrix} 0 \\ c \\ 0 \end{pmatrix}   (5.19)

The 3D information of the detected light stripe is calculated by (5.16)~(5.19). Fig. 5.17 shows the recognized 3D information projected onto the XZ plane of the vehicle coordinate system.

Fig. 5.17: 3D recognition result.
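A minimal sketch of the whole per-pixel recovery, equations (5.15)-(5.19), under the reconstructions above (angles in radians; all names illustrative):

```python
import numpy as np

def stripe_point_to_vehicle(x, y, f, b, alpha, rho, gamma, c):
    """Recover the 3D point for stripe pixel (x, y) via (5.15)-(5.19).
    b and c are in the same length unit as the returned coordinates."""
    k = (b * np.tan(alpha) * np.cos(rho)) / (
        f - np.tan(alpha) * (x * np.sin(rho) + y * np.cos(rho)))   # (5.15)
    P = k * np.array([x, y, f])                                    # (5.16)-(5.18)
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(gamma), np.sin(gamma)],
                   [0, -np.sin(gamma), np.cos(gamma)]])            # rotate by -gamma
    return Rx @ P + np.array([0.0, c, 0.0])                        # (5.19)
```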

5.2.4 Occlusion Detection

An occlusion is defined as a pair of 3D points which are located in adjacent columns of the stripe image but are farther than a certain threshold, e.g. the ego-vehicle's length, from each other in the XZ plane. Occlusions are detected by checking these conditions on the detected light stripe. Each of the two 3D points belonging to an occlusion carries a property indicating whether it is the left end or the right end of a run of consecutive stripe pixels. Fig. 5.18 shows the detected occlusion points. An occlusion point recognized as a left end is denoted by an 'x' marking, and an occlusion point recognized as a right end is denoted by an 'o' marking. Here, a left-end occlusion point and a right-end occlusion point make one occlusion. If the left-end occlusion point of an occlusion is nearer to the camera than the right-end occlusion point, the pair is supposed to be at the left end of an interesting object. In this application, an interesting object means an object on the left or right side of the free space. If the left-end occlusion point is farther from the camera than the right-end occlusion point, the occlusion is supposed to be at the right end of an interesting object. This characteristic is attached to the occlusion as directional information. In Fig. 5.18, the occlusion at the left end of an interesting object is drawn as a red line and the occlusion at the right end of an interesting object is drawn as a green line.
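A minimal sketch of this occlusion test, assuming the stripe points have already been converted to XZ coordinates and ordered by image column (the None convention and names are illustrative):

```python
import numpy as np

def detect_occlusions(stripe_xz, max_jump):
    """Flag adjacent-column stripe point pairs that are farther apart than
    max_jump (e.g. the ego-vehicle's length) in the XZ plane.
    stripe_xz: list of (x, z) points ordered by column; None = no stripe."""
    occlusions = []
    for i in range(len(stripe_xz) - 1):
        p, q = stripe_xz[i], stripe_xz[i + 1]
        if p is None or q is None:
            continue
        if np.hypot(q[0] - p[0], q[1] - p[1]) > max_jump:
            # p is the right end of one stripe segment, q the left end of the next.
            occlusions.append((p, q))
    return occlusions
```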


Fig. 5.18: Detected occlusion points and occlusion.

Fig. 5.19: Pivot detection result.

5.2.5 Pivot Detection

A pivot occlusion is defined as an occlusion that satisfies the following conditions: 1) there is free space in the direction of the occlusion, where the region checked for free space is a semicircle whose straight side is the line between the occlusion points, whose radius is the ego-vehicle's width, and whose center is the nearer occlusion point; 2) it is farther from the FOV border than a sufficiently large distance, e.g. the ego-vehicle's width; 3) it is nearer to the optical axis than any other candidate. The pivot, the center of rotation, is the nearer occlusion point of the pivot occlusion. Fig. 5.19 shows the recognized pivot with a '+' marking.

5.2.6 Recognition of Opposite-Side Reference Point


It is assumed that the recognized 3D points whose distance from the pivot, in the direction of the pivot occlusion, is smaller than a certain threshold, e.g. the ego-vehicle's length, belong to the opposite-side object. Fig. 5.20 shows the detected 3D points belonging to the opposite-side object. Among these 3D points, the point nearest to the pivot becomes the initial opposite-side reference point.

Fig. 5.20: Opposite-side reference point detection result.

Using the points whose distance from the opposite-side reference point is smaller than a certain threshold, e.g. 2/3 of the ego-vehicle's length, in the direction going away from the camera and perpendicular to the pivot, the direction of the opposite-side object's side can be estimated. Fig. 5.20 shows the estimated direction of the opposite-side object's side based on the 3D points marked in yellow; the estimated side of the opposite-side object is marked by a blue line.

The opposite-side reference point is compensated by projecting the pivot onto the estimated side of the opposite-side object. In Fig. 5.20, the compensated opposite-side reference point is denoted by a rectangular marking. When the left-side object and the right-side object of the free parking space have different distances from the border between the parking area and the roadway, for example when the nearer-side object is a pillar and the opposite-side object is a deeply parked vehicle, the initial opposite-side reference point will be located at a position more inward than the pivot. Consequently, if the system establishes the target position with the pivot and the initial opposite-side reference point, there is a great risk that the attitude of the target position will be misaligned.

Generally, the stripe on the corner and side of the opposite-side object is expected to be robustly detected. Therefore, the target position can be established more correctly with the compensated opposite-side reference point, which is estimated from the 3D points belonging to the opposite-side object's side.

5.2.7 Target Parking Position

A rectangular target position is established at the center between the pivot and the opposite-side reference point. The target position's short side is aligned to the line between the pivot and the opposite-side reference point. The width of the target position is the width of the ego-vehicle and the length of the target position is the length of the ego-vehicle. Fig. 5.21(a) shows the finally established target position, and Fig. 5.21(b) shows the established target position projected onto the input image.


Fig. 5.21: Established target position; (a) on the XZ plane, (b) on the rectified image.


5.3 Motion Stereo-Based Method

AISIN SEIKI's collaborator IMRA EUROPE S. A. proposed a system which provides a rendered image from a virtual viewpoint for better understanding of parking situations and procedures [32]-[35]. The system obtains the camera external parameters and metric information from odometry and reconstructs the 3D structure of the parking space by using point correspondences. This approach can easily reconstruct the Euclidean 3D structure by using odometry, and a general rearview camera configuration can be used. However, odometry information can be erroneous when road conditions are slippery due to rain or snow, and a free parking space detection method was not presented.

This section explains the developed motion stereo-based method, which three-dimensionally reconstructs the rearview structures by using a single rearview fisheye camera and finds free parking spaces in the 3D point clouds [36], [51]. This system consists of six stages: image sequence acquisition, feature point tracking, 3D reconstruction, 3D structure mosaicking, metric recovery, and free parking space detection.

Compared to IMRA EUROPE S. A. [32]-[35], the proposed system makes

three contributions. First, the degradation of the 3D structure near the epipole is

solved by using de-rotation-based feature selection and 3D structure mosaicking. This

is a serious problem when reconstructing 3D structures with an automobile rearview

camera because the epipole is usually located on the image of an adjacent vehicle

which must be precisely reconstructed. Although this problem was mentioned in [34],


[67], a solution was not presented. Second, an efficient method for detecting free

parking spaces in 3D point clouds is proposed. For this task, the structure dimensions

are reduced from 3D to 2D and the positions of adjacent vehicles are estimated. Third,

odometry is not used because its accuracy largely depends on road conditions. The

camera external parameters are estimated by using point correspondences and the

metric information is recovered from the camera height ratio.

In particular, there has been some research into reconstructing 3D structures with similar configurations in the field of SLAM [67]-[69]. These studies used a single forward-looking wide-angle camera for building maps and locating vehicles. Odometry was not used in these studies, but the 3D structures were reconstructed only up to an unknown scale factor. The degradation near the epipole was mentioned but not addressed.

5.3.1 Point Correspondences

Point correspondences in two different images have to be found in order to estimate the motion parameters and 3D structures. For this task, a tracking-based approach was selected in our application. For tracking, we chose the Lucas-Kanade method [70], [71] because it produces accurate results, requires affordable computational power [72], [73], and has existing examples of real-time hardware implementations [74]-[76].

This method uses the least-squares solution of optical flow. If I and J are two consecutive images, and x and Ω denote the feature position and the small spatial neighborhood of x, respectively, then the goal is to find the optical flow vector v which minimizes (5.20). The solution of (5.20), v_opt, is given by (5.21).

\min_{\mathbf{v}} \sum_{\mathbf{x} \in \Omega} \big( I(\mathbf{x}) - J(\mathbf{x} + \mathbf{v}) \big)^2   (5.20)

\mathbf{v}_{opt} = \mathbf{G}^{-1} \mathbf{b}, \quad \text{where } \mathbf{G} = \sum_{\mathbf{x} \in \Omega} \begin{pmatrix} I_x^2 & I_x I_y \\ I_x I_y & I_y^2 \end{pmatrix}, \;\; \mathbf{b} = \sum_{\mathbf{x} \in \Omega} \begin{pmatrix} \delta I \cdot I_x \\ \delta I \cdot I_y \end{pmatrix}   (5.21)

I_x and I_y are the image gradients in the horizontal and vertical directions, respectively, and δI is the image pixel difference. Since the matrix G is required to be non-singular, an image location where the minimum eigenvalue of G is larger than a threshold is selected as a feature point and tracked through the image sequence.
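A minimal single-iteration sketch of (5.20)-(5.21), including the minimum-eigenvalue feature check; real trackers iterate this step over image pyramids, and the window size and eigenvalue threshold here are illustrative:

```python
import numpy as np

def lk_flow_at(I, J, x, y, half=7):
    """One Lucas-Kanade step for the feature at (x, y): solve v = G^-1 b
    over a (2*half+1)^2 window. Returns None if G is near-singular."""
    Iy, Ix = np.gradient(I.astype(np.float64))   # gradients along rows, columns
    w = (slice(y - half, y + half + 1), slice(x - half, x + half + 1))
    gx, gy = Ix[w].ravel(), Iy[w].ravel()
    dI = I[w].astype(np.float64).ravel() - J[w].astype(np.float64).ravel()
    G = np.array([[np.sum(gx * gx), np.sum(gx * gy)],
                  [np.sum(gx * gy), np.sum(gy * gy)]])
    if np.linalg.eigvalsh(G)[0] < 1e-3:          # feature quality: min eigenvalue
        return None
    b = np.array([np.sum(dI * gx), np.sum(dI * gy)])
    return np.linalg.solve(G, b)                 # v_opt of (5.21)
```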

5.3.2 3D Reconstruction

Once the point correspondences are obtained, the structure of the parking space is three-dimensionally reconstructed by using the following three steps: key frame selection, motion parameter estimation, and triangulation. First of all, the key frames, which determine the 3D reconstruction interval, should be appropriately selected. If there is not enough camera motion between two frames, the motion parameters are inaccurately estimated; in the opposite case, the number of point correspondences is decreased.

Some algorithms have been proposed to select key frames [77]-[79]. We used a simple but less general method which uses the average length of the optical flow.

method works well because rotational motion is always induced by translational

motion in our application. Since parking spaces should be reconstructed at the

driver’s request, the last frame is selected as the first key frame. The second key

frame is selected when the average length of optical flow from the first key frame

exceeds the threshold. The next key frame is selected in the same way. The threshold

value was set to 50 pixels and this made the baseline length approximately 100~150

cm.

Once the key frames are selected, the fundamental matrix is estimated in order to extract the motion parameters. For this task, we used RANSAC followed by the M-Estimator; Torr et al. [80] found this to be an empirically optimal combination. RANSAC randomly selects sets of points to compute candidate fundamental matrices by using a linear method, calculates the number of inliers for each candidate, and chooses the one which maximizes it. Once the fundamental matrix is determined, it is refined by using all the inliers. The M-Estimator reduces the effect of the outliers by weighting the residual of each point correspondence. If x_i' and x_i are the coordinates of the point correspondences in the two images and F is the fundamental matrix, then the M-Estimator is based on solving Eq. (5.22). w_i is a weight function, and we used Huber's function [81].

\min_{\mathbf{F}} \sum_i w_i \, (\mathbf{x}_i'^{T} \cdot \mathbf{F} \cdot \mathbf{x}_i)^2   (5.22)

After estimating the fundamental matrix, we follow the method presented in [82]. The essential matrix is calculated by using the fundamental matrix and the camera intrinsic parameter matrix. The camera intrinsic parameters were pre-calibrated because they do not change in our application. The four combinations of the rotation matrix and the translation vector are extracted from the essential matrix. Since only the correct combination locates the 3D points in front of both cameras, several randomly selected points are reconstructed to determine the correct combination. After that, the 3D points are calculated by using a linear triangulation method. If P and P' represent the projection matrices of the two cameras and X represents the 3D point of the point correspondence, x and x', the relations between x and X and between x' and X are expressed in (5.23).

\mathbf{x} \times (\mathbf{P} \cdot \mathbf{X}) = \mathbf{0}
\mathbf{x}' \times (\mathbf{P}' \cdot \mathbf{X}) = \mathbf{0}   (5.23)

By combining the above two equations into the form AX = 0, the 3D point X is simply calculated by finding the unit singular vector corresponding to the smallest singular value of A. This is solved by using an SVD. The matrix A is expressed as in (5.24).

\mathbf{A} = \begin{pmatrix} x \, \mathbf{p}^{3T} - \mathbf{p}^{1T} \\ y \, \mathbf{p}^{3T} - \mathbf{p}^{2T} \\ x' \, \mathbf{p}'^{3T} - \mathbf{p}'^{1T} \\ y' \, \mathbf{p}'^{3T} - \mathbf{p}'^{2T} \end{pmatrix}   (5.24)

p^{iT} and p'^{iT} represent the i-th rows of P and P', respectively, and [x, y]^T and [x', y']^T represent the image coordinates of the point correspondences. For the 3D reconstruction, we did not use a complex optimization algorithm such as bundle adjustment [83] because its computational cost is too high for our application.
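A minimal sketch of this linear triangulation, building the matrix A of (5.24) and solving by SVD (names illustrative):

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear triangulation of one correspondence. P1, P2: 3x4 projection
    matrices; x1, x2: image coordinates [x, y] in the two views."""
    A = np.array([x1[0] * P1[2] - P1[0],
                  x1[1] * P1[2] - P1[1],
                  x2[0] * P2[2] - P2[0],
                  x2[1] * P2[2] - P2[1]])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                   # singular vector of the smallest singular value
    return X[:3] / X[3]          # dehomogenize
```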

Page 103: Target Position Designation Methods for Parking Assistant Systemweb.yonsei.ac.kr/hgjung/Ho Gi Jung Homepage/Publications... · 2014-12-29 · Target Position Designation Methods for

85

5.3.3 Degradation of 3D Structure near the Epipole

When reconstructing 3D structures in our application, heavy degradation appears near the epipole. This is because triangulation has to be performed at a small angle in that area. With a small angle, the accuracy of the 3D points is degraded because of the relatively high portions of the point detection error and the image quantization error. This can be seen as a rank deficiency of the matrix A of (5.24). The projection matrices of the two cameras, P and P', can be written as in (5.25).

\mathbf{P} = \mathbf{K} [\mathbf{I} \,|\, \mathbf{0}] = [\mathbf{K} \,|\, \mathbf{0}]
\mathbf{P}' = \mathbf{K} [\mathbf{R} \,|\, \mathbf{t}] = [\mathbf{K}\mathbf{R} \,|\, \mathbf{e}]   (5.25)

K and I represent the 3×3 camera intrinsic parameter matrix and the 3×3 identity matrix, respectively, and R and t represent a 3×3 rotation matrix and a 3×1 translation vector. e is the epipole. Since the last column of P' represents the epipole, the last column of A becomes closer to a zero vector as the feature point ([x', y']^T) nears the epipole. This causes unreliable estimation of the 3D points.

Even though this problem is very serious in 3D reconstruction, as mentioned in [78], [40], it has not been dealt with in previous works for two reasons. First, in many applications the epipole is not located inside the image because of the camera configuration. This happens when the 3D structures are reconstructed by using a stereo camera or a single moving camera whose translation along the optical axis is not dominant over the translations along the other axes [84], [37]. Second, the epipole is located inside the image but not on the target objects. This happens when a single forward (or backward) looking camera moves along a road or corridor. In this case, the epipole is located inside the image but usually on objects far from the camera, so the region around the epipole is not interesting [68], [69], [78].

Fig. 5.22: Location of the epipole in a typical parking situation.

In our application, the translation along the optical axis is quite dominant, so the epipole is always located inside the image. Also, the epipole is usually located on the image of an adjacent vehicle, which is exactly the target object used for locating free parking spaces. Fig. 5.22 shows the epipole location in a typical parking situation. As shown in this figure, the epipole usually falls on the image of an adjacent vehicle due to the motion characteristics of the automobile rearview camera.

For this reason, the 3D structure of the adjacent vehicle is erroneously reconstructed in our application. Fig. 5.23 shows the location of the epipole in the last frame of the image sequence and its reconstructed 3D structure. We depict the structure as seen from the top after removing the points near the ground plane. In this figure, the 3D points near the epipole on the adjacent vehicle appear quite erroneous, so the free parking space detection results would be degraded by those points.

Fig. 5.23: 3D error near the epipole; (a) a typical location of the epipole, (b) obstacles' reconstructed 3D structure as seen from the top.

5.3.4 De-rotation-Based Feature Selection and 3D Structure Mosaicking

To solve the problem of the epipole and obtain a precise 3D rearview structure, we propose a two-step method. First, the unreliable point correspondences are removed by using a de-rotation-based method; then, the removed part of the structure is substituted by mosaicking several 3D structures. In the first step, we eliminate the rotational effect from the optical flow. Since the optical flow length is proportional to the 3D point accuracy under pure translation [85], we simply discard the point correspondences whose optical flow lengths are shorter than a threshold. This prevents the 3D structure from including erroneously reconstructed points. For eliminating the rotational effect, a conjugate rotation is used [82]. If x and x' are the images of a 3D point X before and after a pure rotation, their relation can be expressed by H as in (5.26).

Page 106: Target Position Designation Methods for Parking Assistant Systemweb.yonsei.ac.kr/hgjung/Ho Gi Jung Homepage/Publications... · 2014-12-29 · Target Position Designation Methods for

88

\mathbf{x} = \mathbf{K} [\mathbf{I} \,|\, \mathbf{0}] \mathbf{X}
\mathbf{x}' = \mathbf{K} [\mathbf{R} \,|\, \mathbf{0}] \mathbf{X} = \mathbf{K}\mathbf{R}\mathbf{K}^{-1} \mathbf{x}   (5.26)

so that x' = Hx with H = KRK^{-1}.

Fig. 5.24 describes the de-rotation-based feature selection procedure. The optical flows found in the fisheye images are undistorted, as shown in Fig. 5.24(a). After that, the undistorted optical flows are de-rotated by using a conjugate rotation, as shown in Fig. 5.24(b). All the optical flows in Fig. 5.24(b) point toward the epipole because the rotational effect is totally eliminated; in this case, the epipole is known as the focus of expansion. In Fig. 5.24(b), the red lines indicate the unreliable optical flows classified by the de-rotation-based method. The unreliable optical flows include the features near the epipole and those far from the camera. The threshold for the optical flow length was set to 10 pixels.

Fig. 5.24: De-rotation-based feature selection; (a) undistorted optical flows, (b) de-rotated optical flows.
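A minimal sketch of the de-rotation-based selection, assuming undistorted correspondences and pre-estimated K and R; the direction convention of the conjugate rotation and the 10-pixel threshold are assumptions of this sketch:

```python
import numpy as np

def select_reliable_flows(pts1, pts2, K, R, min_len=10.0):
    """De-rotate the second endpoints with K R^-1 K^-1 so only the
    translational flow component remains, then keep flows longer than
    min_len pixels. pts1, pts2: (N, 2) undistorted point correspondences."""
    H = K @ np.linalg.inv(R) @ np.linalg.inv(K)
    p2h = np.hstack([pts2, np.ones((len(pts2), 1))]) @ H.T
    p2 = p2h[:, :2] / p2h[:, 2:3]        # de-rotated second endpoints
    lengths = np.linalg.norm(p2 - pts1, axis=1)
    return lengths > min_len             # boolean mask of reliable flows
```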

In the second step, we reconstruct several 3D structures by using the reliable point correspondences and mosaick them into one structure by estimating the similarity transformation. This process substitutes the removed part of the rearview structure. The similarity transformation parameters consist of R (3×3 rotation matrix), t (3×1 translation vector), and c (scaling), and we use the least-square fitting method [86] with the 3D point correspondences known from the tracking results. Since the reconstructed 3D points may be erroneous and include outliers, the RANSAC approach [87] is used for parameter estimation. The least-square fitting method can be explained as follows. We are given two sets of 3D point correspondences X_i and Y_i, i = 1, 2, ···, n, in 3D space. X_i and Y_i are considered as 3×1 column vectors, and n is equal to or larger than 3. The relationship between X_i and Y_i is described by (5.27), and the mean squared error of the two sets of points is defined by (5.28).

\mathbf{Y}_i = c \mathbf{R} \mathbf{X}_i + \mathbf{t}   (5.27)

e^2(\mathbf{R}, \mathbf{t}, c) = \frac{1}{n} \sum_{i=1}^{n} \big\| \mathbf{Y}_i - (c \mathbf{R} \mathbf{X}_i + \mathbf{t}) \big\|^2   (5.28)

If A and B are the 3×n matrices of X_1, X_2, …, X_n and Y_1, Y_2, …, Y_n, respectively, and UDV^T is an SVD of AB^T (UU^T = VV^T = I, D = diag(d_i), d_1 ≥ d_2 ≥ … ≥ 0), the transformation parameters which minimize the mean squared error can be calculated by (5.29). µ_Y, µ_X, and σ_X² are defined by (5.30).

\mathbf{R} = \mathbf{U}\mathbf{V}^T, \quad \mathbf{t} = \boldsymbol{\mu}_Y - c \mathbf{R} \boldsymbol{\mu}_X, \quad c = \frac{1}{\sigma_X^2} \operatorname{trace}(\mathbf{D})   (5.29)

\boldsymbol{\mu}_Y = \frac{1}{n} \sum_{i=1}^{n} \mathbf{Y}_i, \quad \boldsymbol{\mu}_X = \frac{1}{n} \sum_{i=1}^{n} \mathbf{X}_i, \quad \sigma_X^2 = \frac{1}{n} \sum_{i=1}^{n} \| \mathbf{X}_i - \boldsymbol{\mu}_X \|^2   (5.30)
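A minimal sketch of this least-square fitting, following (5.27)-(5.30); the determinant-sign guard against reflections is an added practical safeguard not spelled out in (5.29):

```python
import numpy as np

def similarity_transform(X, Y):
    """Fit R, t, c with Y_i ~ c R X_i + t in the least-squares sense.
    X, Y: (n, 3) arrays of corresponding 3D points, n >= 3."""
    mu_x, mu_y = X.mean(axis=0), Y.mean(axis=0)
    var_x = np.sum((X - mu_x) ** 2) / len(X)          # sigma_X^2 of (5.30)
    S = (Y - mu_y).T @ (X - mu_x) / len(X)            # plays the role of AB^T
    U, D, Vt = np.linalg.svd(S)
    flip = np.array([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])
    R = U @ np.diag(flip) @ Vt                        # guard against reflections
    c = np.sum(D * flip) / var_x                      # trace(D) / sigma_X^2
    t = mu_y - c * R @ mu_x                           # (5.29)
    return R, t, c
```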

Fig. 5.25 shows the key frame images and the 3D structures reconstructed with and without the de-rotation-based feature selection. The 3D structures are shown as seen from the top after removing the points near the ground plane. Fig. 5.25(a) shows the key frame images and their epipole locations; we can see that the epipoles are located on different positions of the adjacent vehicle. Fig. 5.25(b) shows the reconstructed 3D structures of each key frame without the de-rotation-based feature selection. The structures near the epipoles are badly reconstructed. However, the part erroneously reconstructed in one structure is correctly reconstructed in another. Fig. 5.25(c) shows the reconstructed 3D structures of each key frame with the de-rotation-based feature selection: most of the erroneous 3D points in Fig. 5.25(b) are deleted.

Fig. 5.25: Epipole locations; (a) key frame images, (b) 3D structures without the de-rotation-based feature selection, (c) 3D structures with the de-rotation-based feature selection.

Page 109: Target Position Designation Methods for Parking Assistant Systemweb.yonsei.ac.kr/hgjung/Ho Gi Jung Homepage/Publications... · 2014-12-29 · Target Position Designation Methods for

91

Fig. 5.26 shows the structures reconstructed with and without the proposed feature selection and 3D mosaicking methods. The red point indicates the camera center. By using the proposed two-step method, we obtain a more precise structure near the epipole. In the experimental results, the advantages of this method are presented in detail by comparing the reconstructed structures with laser scanner data.

Fig. 5.26: Effect of the feature selection and 3D mosaicking methods; (a) with the proposed two-step method, (b) without the proposed two-step method.

5.3.5 Metric Recovery

For locating free parking spaces in terms of centimeters, the metric information of the 3D structure has to be recovered. This is usually achieved by using a known baseline length or prior knowledge of the 3D structure. Since the camera height in the real world is known in our application, we estimate the camera height in the reconstructed world and use the ratio between the two for metric recovery. The camera height in the real world is assumed to be fixed; a height sensor could be used to cope with camera height variations caused by changing cargo or passengers.

Page 110: Target Position Designation Methods for Parking Assistant Systemweb.yonsei.ac.kr/hgjung/Ho Gi Jung Homepage/Publications... · 2014-12-29 · Target Position Designation Methods for

92

To calculate the camera height ratio, the ground plane in the reconstructed world has to be estimated, because the camera location is set to the origin. The estimation procedure consists of three steps: tilting angle compensation, density estimation-based ground plane detection, and 3D plane estimation-based ground plane refinement. First, the tilting angle is calculated and the 3D structure is rotated according to the calculated angle. This procedure forces the ground plane to be parallel to the XZ-plane. In our camera configuration (shown in Fig. 5.27), the tilting angle θ can be calculated by (5.31) [88].

\theta = \arctan\left( \frac{e_x - y_0}{f} \right)   (5.31)

e_x and y_0 are the y-axis coordinates of the epipole and the principal point, respectively, and f is the focal length of the camera.

Fig. 5.27: Configuration of rearview camera.

Since there is usually only one plane (the ground plane) parallel to the XZ-plane after compensating the tilting angle, the density of the Y-axis coordinates of the 3D points has its maximum peak at the location of the ground plane. Fig. 5.28 shows the density of the Y-axis coordinates of the 3D points. In this figure, the peak location is recognized as the location of the ground plane, and the distance from the peak location to the origin is recognized as the camera height in the 3D structure.

After that, the location and the orientation of the ground plane are refined by 3D plane estimation. The 3D points near the initially detected ground plane are selected and the RANSAC approach is used for estimating the 3D plane. The camera height is refined by calculating the perpendicular distance between the camera center and the estimated 3D plane. The 3D structure is then scaled into centimeters by using the ratio between the estimated camera height and the known camera height. After that, the 3D points far from the camera center are deleted and the remaining points are rotated according to the 3D plane orientation so that the ground plane is parallel to the XZ-plane. Fig. 5.29(a) and (b) show the final result of the metric recovery in the camera-view and the top-view, respectively.

Fig. 5.28: Density of the Y-axis coordinates of the 3D points.
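A minimal sketch of the density-based camera height estimation and the resulting metric scale, assuming tilt-compensated points with the camera at the origin (the bin count and names are illustrative):

```python
import numpy as np

def metric_scale(points, real_camera_height_cm, bins=200):
    """Camera height in the reconstructed world = distance from the origin
    (camera center) to the peak of the Y-coordinate density; the ratio to
    the known real-world height gives the metric scale."""
    hist, edges = np.histogram(points[:, 1], bins=bins)
    peak = np.argmax(hist)
    ground_y = 0.5 * (edges[peak] + edges[peak + 1])
    return real_camera_height_cm / abs(ground_y)
```

For example, points_cm = points * metric_scale(points, 120.0) would scale the structure into centimeters, where 120 cm is only an illustrative camera height.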

Page 112: Target Position Designation Methods for Parking Assistant Systemweb.yonsei.ac.kr/hgjung/Ho Gi Jung Homepage/Publications... · 2014-12-29 · Target Position Designation Methods for

94

Fig. 5.29: Result of metric recovery; (a) camera-view, (b) top-view; (0, 0, 0) indicates the camera center.

5.3.6 Free Parking Space Detection

Once the Euclidean 3D structure is reconstructed, free parking spaces are detected in the 3D point clouds. For this task, we estimate the positions of the adjacent vehicles and locate the free parking spaces accordingly. Because position estimation in 3D space can be complicated and time-consuming, we reduce the dimensions of the structure from 3D to 2D. The 3D points whose heights from the ground plane are between 30~160 cm are selected, and the height information is removed to reduce the dimensions. After that, we delete the isolated points by counting the number of neighbors. Fig. 5.30(a) shows the dimension reduction result.

Since not all points in Fig. 5.30(a) belong to the outermost surface of the automobile, we select the outline points by using the relationship between the incoming angle and the distance from the camera center. This procedure is performed for a better estimation of the position of the adjacent vehicle. The incoming angle is the angle between the horizontal axis and the line joining the camera center and a 2D point. Fig. 5.30(a) is re-depicted in Fig. 5.30(b) by using the incoming angle and the distance from the camera center. Since the points on the same vertical line in Fig. 5.30(b) come from the same incoming angle, the point nearest to the camera center among the points on the same vertical line is recognized as the outline point. Fig. 5.30(c) shows the result of the outline point selection.

Fig. 5.30: Outline point selection; (a) dimension reduction result, (b) re-depicted 2D points with the incoming angle and the distance from the camera center, (c) result of outline point selection; (0, 0) indicates the camera center.
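A minimal sketch of the outline point selection, assuming the 2D points are expressed relative to the camera center (the bin count is illustrative):

```python
import numpy as np

def select_outline_points(pts, angle_bins=360):
    """Keep, per incoming-angle bin, the point nearest to the camera center
    (the origin): only the outermost automobile surface survives."""
    ang = np.arctan2(pts[:, 1], pts[:, 0])                 # incoming angle
    dist = np.hypot(pts[:, 0], pts[:, 1])
    bins = ((ang + np.pi) / (2 * np.pi) * angle_bins).astype(int) % angle_bins
    nearest = {}
    for i, (b, d) in enumerate(zip(bins, dist)):
        if b not in nearest or d < dist[nearest[b]]:
            nearest[b] = i
    return pts[sorted(nearest.values())]
```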

If the automobile shape is assumed to be a rectangle as seen from the top, the position of the adjacent vehicle can be represented by a corner point and an orientation. Therefore, we estimate the corner point and orientation of the adjacent vehicle and use these values to locate free parking spaces. Since the reconstructed structure is noisy and includes not only adjacent vehicles but also other obstacles, we use a projection-based method. This method rotates the 2D points and projects them onto the X-axis and Z-axis. It searches for the rotation angle which maximizes the sum of the maximum peak values of the two projection results. The rotation angle and the locations of the two maximum peak values are recognized as the orientation and the corner point, respectively. This method estimates the corner point and orientation at the same time, and it is robust to noisy data.
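A minimal sketch of this projection-based search, using histogram peaks as the projection maxima (the angle step and bin count are illustrative):

```python
import numpy as np

def estimate_corner(pts, angle_step_deg=1.0, bins=100):
    """Find the rotation maximizing the sum of the two maximum projection
    peaks; the peak locations give the corner point, the angle the orientation."""
    best_score, best_corner, best_angle = -np.inf, None, None
    for deg in np.arange(0.0, 90.0, angle_step_deg):
        a = np.deg2rad(deg)
        R = np.array([[np.cos(a), -np.sin(a)],
                      [np.sin(a),  np.cos(a)]])
        q = pts @ R.T                       # rotated 2D points
        hx, ex = np.histogram(q[:, 0], bins=bins)
        hz, ez = np.histogram(q[:, 1], bins=bins)
        score = hx.max() + hz.max()
        if score > best_score:
            cx = 0.5 * (ex[hx.argmax()] + ex[hx.argmax() + 1])
            cz = 0.5 * (ez[hz.argmax()] + ez[hz.argmax() + 1])
            best_score = score
            best_corner = R.T @ np.array([cx, cz])   # back to the original frame
            best_angle = a
    return best_corner, best_angle
```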

However, when using this method, we cannot know whether the estimated orientation is longitudinal or lateral. To determine this, it is assumed that a driver turns right when a free parking space is located on the left, and vice versa. This assumption lets us determine the orientation by using the turning direction of the automobile, estimated from the rotation matrix.

After estimating the corner point and orientation, the points on the longitudinal side of the adjacent vehicle are selected and used for refining the orientation by RANSAC-based line estimation. This procedure is needed because the lateral side of an automobile is usually curved, so the longitudinal side gives more precise orientation information. The corner point is also refined according to the refined orientation.

To locate the most appropriate free parking space, other adjacent vehicles located opposite the estimated vehicle are also searched for. The search range is set as in Fig. 5.31(a) by using the estimated corner point and orientation: a circle with a radius of 150 cm whose center is located 300 cm away from the corner point in the lateral direction. If there are point clouds inside the search range, the other vehicle is considered to be found and the free parking space is located in the middle of the two vehicles in the lateral direction. The corner points of the two adjacent vehicles are projected in the longitudinal direction and the outer one is used to locate the free parking space; this is described in Fig. 5.31(b), where corner point 1 is selected because it is the outer one. If the other vehicle is not found, the free parking space is located beside the detected adjacent vehicle with a 50 cm interval in the lateral direction. Fig. 5.32 shows the final result of the detection process. The width and length of the free parking space were set to 180 cm and 480 cm, respectively.

Fig. 5.31: Opposite side vehicle detection; (a) search range of other adjacent vehicles, (b) free parking space localization.

Fig. 5.32: Detected free parking space depicted on the last frame of the image sequence.

Page 116: Target Position Designation Methods for Parking Assistant Systemweb.yonsei.ac.kr/hgjung/Ho Gi Jung Homepage/Publications... · 2014-12-29 · Target Position Designation Methods for

98

5.4 Scanning Laser Radar-Based Method

The work of Alexander Schanz et al. and the CyCab project are good examples of exploiting the advantages of scanning laser radar, which can precisely recognize the boundary of a parked vehicle both in daytime and at nighttime. Although Alexander Schanz et al. used scanning laser radar as a precise side range sensor, their approach still had drawbacks related to odometry-based registration, such as potential inaccuracy, a restricted driving path during free space sensing, and memory/computation-intensive grid operations [20], [40], [41]. The CyCab project used almost the same configuration as our proposed method, but its vehicle boundary recognition relied on the impractical assumption that all parked vehicles belong to only one class of vehicle [42]. Our previous work showed that 'L'-shaped template matching can robustly recognize the boundary information of parked vehicles [43]. Such an approach differs from adaptive cruise control (ACC)-oriented scanning laser radar applications, which focus on simple invariant description and the detection/tracking of vehicles ahead and at far distances [44].

This section explains the developed scanning laser radar-based target position designation method for perpendicular parking situations [50]. The proposed method uses only one range data set without odometry, and assumes that a vehicle appears as an 'L'-shaped cluster in the range data. The target parking position is expected to be the nearest free parking space between parked vehicles. The proposed algorithm consists of three phases: preprocessing of range data, corner detection, and target parking position designation. The preprocessing of range data consists of noise elimination, occlusion detection, and cluster recognition. Corner detection consists of two phases: rectangular corner detection and round corner detection. Target parking position designation consists of main-reference vehicle recognition, subreference vehicle recognition, and target position establishment. Compared with the earlier algorithm [43], round corner detection is added to precisely recognize the round front ends of vehicles. Furthermore, because the newly developed corner detection methods use minimal assumptions and a normalized fitness measure, they prove robust to various noises and situations. Target parking position designation is also improved to consider the surrounding environment when detecting a free parking slot. The algorithm was tested in various situations and overcame almost all conditions that were critical for other methods.

5.4.1 Removal of Invalid and Isolated Data

Figure 5.33 shows a typical situation wherein a driver is trying to park the subjective vehicle between parked vehicles; this is the main situation considered in this article. Fig. 5.33(a) shows an image captured by the rearview camera of the subjective vehicle, and Fig. 5.33(b) depicts the range data captured by the scanning laser radar installed on the left side of the subjective vehicle's rear end. In Fig. 5.33(b), the x-axis represents the scanning angle and the y-axis represents the range value. Because scanning laser radar outputs only one range value for each scanning angle, or pan angle, the range data can be stored and processed in a one-dimensional (1D) array with a fixed length.

Page 118: Target Position Designation Methods for Parking Assistant Systemweb.yonsei.ac.kr/hgjung/Ho Gi Jung Homepage/Publications... · 2014-12-29 · Target Position Designation Methods for

100

Fig. 5.33: Situation wherein free parking space is between parked vehicles; (a) rearview image, (b) range data.

Preprocessing eliminates invalid data having a zero range value and isolated data caused by random noise. An isolated datum is defined as a range datum for which the minimum of the distances to its two neighboring data, i.e. the left and right neighbors, is larger than a threshold, for example, 0.5 m. Fig. 5.34 shows the result of invalid data and isolated data removal.

Fig. 5.34: Range data after the removal of invalid data and isolated data.
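A minimal sketch of this preprocessing, assuming the range data are ordered by scanning angle and already converted to Cartesian points (the threshold follows the text; the names are illustrative):

```python
import numpy as np

def preprocess_range_data(rng, pts_xy, max_gap=0.5):
    """Drop zero (invalid) ranges and isolated points whose nearer valid
    neighbor is farther than max_gap meters away. rng: 1D range array
    ordered by scanning angle; pts_xy: matching (N, 2) Cartesian points."""
    valid = rng > 0
    keep = valid.copy()
    for i in np.nonzero(valid)[0]:
        d = [np.linalg.norm(pts_xy[i] - pts_xy[j])
             for j in (i - 1, i + 1) if 0 <= j < len(rng) and valid[j]]
        if not d or min(d) > max_gap:
            keep[i] = False                # isolated (or neighborless) point
    return keep
```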

Page 119: Target Position Designation Methods for Parking Assistant Systemweb.yonsei.ac.kr/hgjung/Ho Gi Jung Homepage/Publications... · 2014-12-29 · Target Position Designation Methods for

101

5.4.2 Occlusion Detection

An occlusion is defined as a point where consecutive range values are discontinuous, and a sequence of continuous valid range data between two occlusions is defined as a cluster. While investigating the range data arranged by scanning angle, a transition from invalid to valid data is recognized as the left end of a cluster, and the reverse transition as the right end of a cluster. Likewise, if the difference between two consecutive range values, the (n-1)th and the nth, is larger than a threshold, for example, 0.5 m, then the (n-1)th datum is recognized as the right end of a cluster and the nth datum as the left end of the next cluster. Fig. 5.35 shows the result of occlusion detection. The left end and right end of each cluster are represented by a circle and a rectangle, respectively. Fig. 5.35(b) shows the valid range data and the recognized occlusions in the Cartesian coordinate system; the subjective vehicle is depicted as a filled rectangle with its left-rear end located at the coordinates (0, 0).

Fig. 5.35: Occlusion detection result; (a) in polar coordinate system, (b) in Cartesian coordinate system.
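A minimal sketch of the discontinuity-based clustering described above (the threshold follows the text; the names are illustrative):

```python
def split_into_clusters(rng, keep, max_step=0.5):
    """Cut the angle-ordered range data wherever consecutive valid ranges
    differ by more than max_step meters; each continuous run is one cluster,
    and the run boundaries are the occlusions."""
    clusters, current, prev = [], [], None
    for i, ok in enumerate(keep):
        if not ok:                          # invalid/removed datum ends a cluster
            if current:
                clusters.append(current)
                current = []
            prev = None
            continue
        if prev is not None and abs(rng[i] - rng[prev]) > max_step:
            clusters.append(current)        # range jump: close the cluster
            current = []
        current.append(i)
        prev = i
    if current:
        clusters.append(current)
    return clusters                          # each cluster: list of indices
```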

Page 120: Target Position Designation Methods for Parking Assistant Systemweb.yonsei.ac.kr/hgjung/Ho Gi Jung Homepage/Publications... · 2014-12-29 · Target Position Designation Methods for

102

5.4.3 Fragmentary Cluster Removal

Clusters of very small size are assumed to be caused by noise. If either the number of range data from the left end to the right end of a cluster is less than a threshold (e.g., 5) or the geometrical length is less than a threshold (e.g., 0.25 m), then the cluster is regarded as a fragment and is eliminated. If the neighboring clusters of the eliminated cluster are near and co-directional, linear interpolation connects the two neighboring clusters. Fig. 5.36 shows the result of fragmentary cluster removal.


Fig. 5.36: Result of fragmentary cluster removal; (a) in polar coordinate system, (b) in

Cartesian coordinate system.
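The two size tests can be sketched as follows (Python, continuing the sketches above). The chord between the two end points is used here as an approximation of the geometrical length, and the linear interpolation of nearby co-directional neighbors is omitted for brevity; both simplifications are assumptions of this sketch.

```python
def remove_fragments(angles, ranges, clusters, min_count=5, min_length=0.25):
    """Discard fragmentary clusters presumed to be caused by noise.

    angles: scanning angles in radians, aligned with ranges.
    A cluster survives only if it holds at least min_count range data (5)
    and its geometrical length, approximated by the chord between its two
    end points in Cartesian coordinates, is at least min_length (0.25 m)."""
    kept = []
    for left, right in clusters:
        if right - left + 1 < min_count:
            continue
        xl, yl = ranges[left] * np.cos(angles[left]), ranges[left] * np.sin(angles[left])
        xr, yr = ranges[right] * np.cos(angles[right]), ranges[right] * np.sin(angles[right])
        if np.hypot(xr - xl, yr - yl) >= min_length:
            kept.append((left, right))
    return kept
```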

5.4.4 Corner Detection

It is assumed that range data from scanning laser radar is acquired in a parking lot. Because major recognition targets, such as vehicles and pillars, appear as 'L'-shaped clusters in range data, simple corner detection can recognize vehicles and pillars. On the other hand, objects other than vehicles and pillars do not produce 'L'-shaped clusters, and these can be ignored during the following procedures.

Corner detection consists of rectangular corner detection and round corner detection, as depicted in Fig. 5.37. A large number of vehicles have rectangular front and rear ends, which can be fitted by a rectangular corner. The rectangular corner can be represented by two lines meeting orthogonally. Some vehicles have front ends with such large curvature that they should be fitted by a round corner. The round corner can be represented by a line and an ellipse meeting at a point.

Fig. 5.37: Flowchart of corner detection.

If the fitting error of rectangular corner detection is less than a threshold, for example, 0.2, the cluster is recognized as a rectangular corner. If the fitting error is larger than this first threshold but less than a marginal threshold, for example, 0.6, the cluster is retested by round corner detection. If the fitting error of round corner detection is less than its threshold, for example, 0.2, the cluster is recognized as a round corner. Otherwise, it is determined to be unrelated to vehicles or pillars and can be ignored.
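This two-stage cascade can be expressed compactly as follows; fit_rectangular_corner and fit_round_corner are hypothetical stand-ins for the fitting procedures of Sections 5.4.5 and 5.4.6, while the thresholds follow the text.

```python
def classify_corner(cluster,
                    rect_threshold=0.2, marginal_threshold=0.6, round_threshold=0.2):
    """Corner detection cascade of Fig. 5.37 (sketch)."""
    err_rect, rect_params = fit_rectangular_corner(cluster)   # Section 5.4.5 (hypothetical)
    if err_rect < rect_threshold:
        return 'rectangular', rect_params
    if err_rect < marginal_threshold:                         # marginal failure: retest
        err_round, round_params = fit_round_corner(cluster)   # Section 5.4.6 (hypothetical)
        if err_round < round_threshold:
            return 'round', round_params
    return None, None   # unrelated to vehicles or pillars; ignored afterwards
```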

5.4.5 Rectangular Corner Detection

For each point of a cluster, assuming the point is the vertex of the 'L'-shape, one optimal corner is detected in the sense of least squares (LS) error. Among the detected corners, the one with the smallest fitting error is recognized as the corner candidate of the cluster.

An 'L'-shaped cluster is supposed to consist of two lines meeting orthogonally. Points before the vertex of the corner should lie close to the first line and points after the vertex should lie close to the second line. Therefore, 'L'-shaped template matching is equivalent to an optimization problem minimizing the sum of two fitting errors, the fitting error of the points before the vertex with respect to the first line and the fitting error of the points after the vertex with respect to the second line, under the constraint that the two lines are orthogonal. Because the two lines are orthogonal, the first line l1 and the second line l2 can be represented as in (5.32). Given that the number of points of a cluster is N and the index of the investigated point Pn is n, as shown in Fig. 5.38, points with index from 1 to n should satisfy l1 and points with index from n to N should satisfy l2. Equation (5.33) expresses this fact in linear algebraic form. An denotes the measurement matrix and Xn the parameter matrix when the investigated point index is n.

\[
l_1 : ax + by + c = 0, \qquad l_2 : bx - ay + d = 0
\tag{5.32}
\]


\[
\underbrace{\begin{bmatrix}
x_1 & y_1 & 1 & 0 \\
\vdots & \vdots & \vdots & \vdots \\
x_n & y_n & 1 & 0 \\
-y_n & x_n & 0 & 1 \\
\vdots & \vdots & \vdots & \vdots \\
-y_N & x_N & 0 & 1
\end{bmatrix}}_{\mathbf{A}_n}
\cdot
\underbrace{\begin{bmatrix} a \\ b \\ c \\ d \end{bmatrix}}_{\mathbf{X}_n}
= \mathbf{0}
\tag{5.33}
\]

Fig. 5.38: Points before Pn should satisfy l1 and points after Pn should satisfy l2.

The nontrivial parameter matrix Xn satisfying (5.33) is the null vector of the measurement matrix An. Therefore, the parameters of l1 and l2 can be estimated by finding the null vector of An using singular value decomposition (SVD), as shown in (5.34). Un, Sn, and Vn denote the output basis matrix, singular value matrix, and input basis matrix of An, respectively, when the investigated point index is n. Because the singular value corresponding to the null vector is ideally zero, its actual nonzero value can be considered as an error measure. When the investigated point index is n, the rectangular corner fitting error of cluster C is Sn(4, 4), denoted by Εrectangular corner(C, n). Correspondingly, the fourth column of Vn is the estimated parameter set.

\[
[\mathbf{U}_n, \mathbf{S}_n, \mathbf{V}_n] = \mathrm{SVD}(\mathbf{A}_n)
\tag{5.34}
\]

After measuring Εrectangular corner(C, n) for points with index from 2 to (N-1) in cluster C, the case with the smallest value is recognized as the corner candidate of the cluster. Fig. 5.39(a) is the graph of measured Εrectangular corner(C, n) for points with index from 2 to (N-1), and Fig. 5.39(b) shows the recognized corner candidate. With the recognized parameters, two lines are drawn and the point with the smallest fitting error, i.e., n = 6, is marked by 'o'.


Fig. 5.39: Rectangular corner fitting error and corner candidate; (a) Εrectangular corner(C, n),

(b) corner candidate.
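A minimal NumPy sketch of this procedure is shown below, assuming a cluster is given as an N×2 array of Cartesian points; zero-based indexing replaces the one-based indexing of the text, but the construction follows (5.33) and (5.34) directly.

```python
import numpy as np

def rectangular_corner_error(points, n):
    """E_rectangular_corner(C, n): build A_n of (5.33) for vertex candidate n
    (zero-based) and return its smallest singular value and null vector."""
    x, y = points[:, 0], points[:, 1]
    m = len(points)
    before = np.column_stack([x[:n + 1], y[:n + 1], np.ones(n + 1), np.zeros(n + 1)])
    after = np.column_stack([-y[n:], x[n:], np.zeros(m - n), np.ones(m - n)])
    A = np.vstack([before, after])
    _, s, vt = np.linalg.svd(A)
    return s[-1], vt[-1]   # smallest singular value; parameters (a, b, c, d)

def detect_rectangular_corner(points):
    """Scan the interior vertex candidates (2..N-1 in the text) and keep the
    one with the smallest fitting error."""
    best_err, best_n, best_params = np.inf, None, None
    for n in range(1, len(points) - 1):
        err, params = rectangular_corner_error(points, n)
        if err < best_err:
            best_err, best_n, best_params = err, n, params
    return best_err, best_n, best_params
```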

Fig. 5.40 shows the crosspoint of the two recognized sides, (xc, yc), which is recognized as the refined vertex of the corner. The two unit vectors parallel to the two sides, d1 and d2, are calculated. In this case, d1 is set to the direction of the longer side and d2 is set to the direction of the shorter side.

Fig. 5.40: The vertex of corner is refined to the crosspoint of two recognized lines.

However, rectangular corner detection has one problem when some end points of a line-shaped cluster deviate from the line center, as shown in Fig. 5.41(a). In this case, although the whole structure of the cluster is definitely a line, the deviated points force Εrectangular corner(C, n) to have a small value, as shown in Fig. 5.41(b).

To avoid this pitfall, the line-fitting error, that is, the LS fitting error when the cluster is fitted to a single line, is employed. If all points in a cluster C belong to a line, they should satisfy the line equation l in (5.35). This fact can be expressed in linear algebraic form in (5.36). The nontrivial parameter matrix X is the null vector of the measurement matrix B, and the singular value corresponding to the null vector is the line-fitting error of cluster C, denoted by Εline(C).



Fig. 5.41: A cluster having line shape also has a small Εrectangular corner(C, n) value; (a) a cluster having line shape, (b) Εrectangular corner(C, n).

\[
l : ax + by + c = 0
\tag{5.35}
\]
\[
\underbrace{\begin{bmatrix}
x_1 & y_1 & 1 \\
\vdots & \vdots & \vdots \\
x_N & y_N & 1
\end{bmatrix}}_{\mathbf{B}}
\cdot
\underbrace{\begin{bmatrix} a \\ b \\ c \end{bmatrix}}_{\mathbf{X}}
= \mathbf{0}
\tag{5.36}
\]

The error of the corner candidate of a cluster C, denoted by Εcorner(C) and given by (5.37), is defined as the minimum of Εrectangular corner(C, n) divided by Εline(C). Because the Εline(C) value of a line-shaped cluster is small, its Εcorner(C) has a relatively large value despite a small Εrectangular corner(C, n). Consequently, normalization by Εline(C) prevents line-shaped clusters from being recognized as rectangular corners. Fig. 5.42 shows the effect of normalization by Εline(C): the 'L'-shaped cluster has a small error value whereas the line-shaped cluster has a definitely large error value. The number beside each recognized corner denotes Εcorner(C).


\[
\mathrm{E}_{\text{corner}}(C) = \frac{\min_n \mathrm{E}_{\text{rectangular corner}}(C, n)}{\mathrm{E}_{\text{line}}(C)}
\tag{5.37}
\]
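Continuing the sketch above, the line-fitting error of (5.35)-(5.36) and the normalized corner error of (5.37) can be written as follows; the small epsilon guard for perfectly collinear clusters is an addition of this sketch.

```python
def line_fit_error(points):
    """E_line(C): smallest singular value of the matrix B = [x y 1] in (5.36)."""
    B = np.column_stack([points[:, 0], points[:, 1], np.ones(len(points))])
    return np.linalg.svd(B, compute_uv=False)[-1]

def corner_error(points, eps=1e-9):
    """E_corner(C) of (5.37): the best rectangular-corner error normalized by
    the line-fitting error, so that line-shaped clusters are rejected."""
    best_err, _, _ = detect_rectangular_corner(points)
    return best_err / max(line_fit_error(points), eps)
```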


Fig. 5.42: Comparison of Εcorner(C) between corner and line case; (a) corner cluster;

Εcorner =0.12841, (b) line cluster; Εcorner =0.47198.

The newly developed rectangular corner detection proves to be unaffected by orientation and robust to noise, as it is based on least squares fitting of an implicit function. Because it does not use any assumption about line length or the distance between range data, it is expected to be superior to conventional rectangular corner detection, which divides points into two groups by a potential vertex, fits the groups to two lines, respectively, and then finds the optimal vertex by evaluating the rectangular constraint. Fig. 5.43(a) shows that the newly developed rectangular corner detection is unaffected by orientation because it is based on an implicit expression. Fig. 5.43(b) shows that the developed 'L'-shaped template matching can detect the correct rectangular corner in spite of heavy noise and uneven point intervals.



Fig. 5.43: Newly developed rectangular corner detection is unaffected by orientation

and robust to noise; (a) case of different orientation, (b) case with heavy noise and

uneven sampling.

5.4.6 Round Corner Detection

Rectangular corner detection cannot avoid severe error when the front or rear end of a vehicle is considerably round, as shown in Fig. 5.44(a). This is because such range data does not follow the assumption of rectangular corner detection that a vehicle corner can be modeled by two lines meeting orthogonally. Therefore, clusters that marginally fail rectangular corner detection are retested by round corner detection, which models the vehicle corner by the connection of a line and a curve. Without loss of generality, the curve is formulated as an ellipse arc. Fig. 5.44(b) shows the correct corner recognized by round corner detection.



Fig. 5.44: Effect of round corner detection; (a) rectangular corner detection result, (b) round corner detection result.

Round corner detection performs line-curve corner detection and curve-line corner detection, and then selects the one with the smaller fitting error as the corner candidate of the cluster. From this viewpoint, a vehicle with a round front or rear end appears as either a line-curve combination or a curve-line combination in range data. Line-curve corner detection performs LS fitting assuming a corner is composed of a line followed by an ellipse arc. Conversely, curve-line corner detection performs LS fitting assuming a corner is composed of an ellipse arc followed by a line.

In the case of line-curve corner detection, given that the number of points of a cluster is N and the index of the investigated point Pn is n, points with index from 1 to n should satisfy l1 and points with index from n to N should satisfy e2, given by (5.38).


\[
l_1 : px + qy + r = 0, \qquad
e_2 : ax^2 + bxy + cy^2 + dx + ey + f = 0
\tag{5.38}
\]

By applying SVD to points with index from 1 to n of cluster C, the parameters of line equation l1 can be estimated. The fitting error of the line portion, Εline portion(C, n), given in (5.39), is defined as the summation of squared algebraic errors. By applying SDLS (stable direct least squares) ellipse fitting [45] to points with index from n to N of cluster C, the parameters of ellipse equation e2 can be estimated. The fitting error of the ellipse portion, Εellipse portion(C, n), given in (5.40), is defined as the summation of squared algebraic errors. When the investigated point is the nth point of cluster C, the line-curve fitting error Εline-curve corner(C, n), given in (5.41), is defined as the sum of Εline portion(C, n) and Εellipse portion(C, n). After evaluating Εline-curve corner(C, n) for points of cluster C with index from 5 to (N-5), the corner with the minimum fitting error is recognized as the line-curve corner candidate of the cluster. Similar to the case of rectangular corner detection, the error of the line-curve corner candidate of cluster C, denoted by Εline-curve corner(C) and shown in (5.42), is defined as the minimum of Εline-curve corner(C, n) divided by Εline(C). The term Εline(C) is the line-fitting error as defined by (5.35) and (5.36), and gives (5.42) its normalization effect. It is noteworthy that the index n has a different meaning in (5.39) and (5.40): it is the final index in (5.39) and the initial index in (5.40).

\[
\mathrm{E}_{\text{line portion}}(C, n) = \sum_{i=1}^{n} \left( p x_i + q y_i + r \right)^2
\tag{5.39}
\]


\[
\mathrm{E}_{\text{ellipse portion}}(C, n) = \sum_{i=n}^{N} \left( a x_i^2 + b x_i y_i + c y_i^2 + d x_i + e y_i + f \right)^2
\tag{5.40}
\]
\[
\mathrm{E}_{\text{line-curve corner}}(C, n) = \mathrm{E}_{\text{line portion}}(C, n) + \mathrm{E}_{\text{ellipse portion}}(C, n)
\tag{5.41}
\]
\[
\mathrm{E}_{\text{line-curve corner}}(C) = \frac{\min_n \mathrm{E}_{\text{line-curve corner}}(C, n)}{\mathrm{E}_{\text{line}}(C)}
\tag{5.42}
\]
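The evaluation of (5.39)-(5.42) can be sketched as follows. The dissertation uses SDLS ellipse fitting [45] for the ellipse portion; the plain algebraic conic fit below is a simplified stand-in used only to keep the sketch self-contained, and both fitting errors are computed as sums of squared algebraic errors with unit-norm parameter vectors.

```python
def line_portion_error(points):
    """E_line_portion of (5.39): squared algebraic errors of an SVD line fit."""
    B = np.column_stack([points[:, 0], points[:, 1], np.ones(len(points))])
    _, s, vt = np.linalg.svd(B)
    return np.sum((B @ vt[-1]) ** 2)

def ellipse_portion_error(points):
    """E_ellipse_portion of (5.40); a plain conic fit stands in for the SDLS
    ellipse fitting [45] used in the dissertation."""
    x, y = points[:, 0], points[:, 1]
    D = np.column_stack([x * x, x * y, y * y, x, y, np.ones(len(x))])
    _, s, vt = np.linalg.svd(D)
    return np.sum((D @ vt[-1]) ** 2)

def line_curve_corner_error(points, eps=1e-9):
    """E_line-curve_corner(C) of (5.41)-(5.42); the curve-line variant of
    (5.44)-(5.47) simply swaps the roles of the two portions."""
    best = np.inf
    for n in range(4, len(points) - 5):   # index 5..(N-5) of the 1-based text
        e = line_portion_error(points[:n + 1]) + ellipse_portion_error(points[n:])
        best = min(best, e)
    return best / max(line_fit_error(points), eps)
```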

In the case of curve-line corner detection, given that the number of points of a cluster is N and the index of the investigated point Pn is n, points with index from 1 to n should satisfy e1 and points with index from n to N should satisfy l2, as given by (5.43). This is the reverse case of (5.38).

\[
e_1 : ax^2 + bxy + cy^2 + dx + ey + f = 0, \qquad
l_2 : px + qy + r = 0
\tag{5.43}
\]

By applying SDLS ellipse fitting to points with index from 1 to n of cluster C, the parameters of ellipse equation e1 can be estimated. The fitting error of the ellipse portion, Εellipse portion(C, n), given by (5.44), is defined as the summation of squared algebraic errors. By applying SVD to points with index from n to N of cluster C, the parameters of line equation l2 can be estimated. The fitting error of the line portion, Εline portion(C, n), expressed in (5.45), is defined as the summation of squared algebraic errors. When the investigated point is the nth point of cluster C, the curve-line fitting error Εcurve-line corner(C, n), expressed in (5.46), is defined as the sum of Εellipse portion(C, n) and Εline portion(C, n). After evaluating Εcurve-line corner(C, n) for points of cluster C with index from 5 to (N-5), the corner with the minimum fitting error is recognized as the curve-line corner candidate of the cluster. Similar to the case of line-curve corner detection, the error of the curve-line corner candidate of cluster C, denoted by Εcurve-line corner(C) and given by (5.47), is defined as the minimum of Εcurve-line corner(C, n) divided by Εline(C). It is worth noting that the index n has a different meaning in (5.44) and (5.45): it is the final index in (5.44) and the initial index in (5.45). This makes the difference between (5.39) and (5.45) and between (5.40) and (5.44).

\[
\mathrm{E}_{\text{ellipse portion}}(C, n) = \sum_{i=1}^{n} \left( a x_i^2 + b x_i y_i + c y_i^2 + d x_i + e y_i + f \right)^2
\tag{5.44}
\]
\[
\mathrm{E}_{\text{line portion}}(C, n) = \sum_{i=n}^{N} \left( p x_i + q y_i + r \right)^2
\tag{5.45}
\]

\[
\mathrm{E}_{\text{curve-line corner}}(C, n) = \mathrm{E}_{\text{ellipse portion}}(C, n) + \mathrm{E}_{\text{line portion}}(C, n)
\tag{5.46}
\]
\[
\mathrm{E}_{\text{curve-line corner}}(C) = \frac{\min_n \mathrm{E}_{\text{curve-line corner}}(C, n)}{\mathrm{E}_{\text{line}}(C)}
\tag{5.47}
\]

Round corner detection of cluster C performs both line-curve detection and curve-line detection. As the result, two fitting errors are evaluated: Εline-curve corner(C), defined by (5.42), and Εcurve-line corner(C), defined by (5.47). The result of round corner detection, that is, the new corner candidate of the cluster, is set to the case with the smaller fitting error. The error of the corner candidate of cluster C, denoted by Εcorner(C) and given by (5.48), is defined as the minimum of the two fitting errors. Consequently, only if Εcorner(C) is smaller than a threshold, for example, 0.2, is the cluster recognized as a valid corner. Otherwise, the cluster is ignored in the following procedures.

\[
\mathrm{E}_{\text{corner}}(C) = \min\left( \mathrm{E}_{\text{line-curve corner}}(C),\ \mathrm{E}_{\text{curve-line corner}}(C) \right)
\tag{5.48}
\]

If a cluster is recognized as a round corner, d1 is set to the direction from the connecting point to the line-segment end point, and d2 is set parallel to the ellipse long axis, in the direction from the connecting point to the ellipse center. The vertex of a round corner is set by finding the crosspoint of the line and the ellipse long axis and then shifting it in the direction of the ellipse short axis by the ellipse short radius.

Fig. 5.45 shows the result of round corner detection when a cluster has curve-line type. Figs. 5.45(a), (b), and (c) depict the graphical representations of Εline portion(C, n), Εellipse portion(C, n), and Εline-curve corner(C, n) of line-curve corner detection, respectively. Fig. 5.45(d) shows the result of line-curve corner detection. Figs. 5.45(e), (f), and (g) depict the graphical representations of Εline portion(C, n), Εellipse portion(C, n), and Εcurve-line corner(C, n) of curve-line corner detection, respectively. Fig. 5.45(h) shows the result of curve-line corner detection. Comparing Figs. 5.45(c) and (g), it is observed that Εcurve-line corner(C) is smaller than Εline-curve corner(C). Therefore, Εcorner(C) is set to Εcurve-line corner(C) and, because its value is smaller than the threshold, the cluster is recognized as a round corner of curve-line type, as shown in Fig. 5.45(i). The number beside the corner point denotes Εcorner(C).


Fig. 5.45: Results of round corner detection when the cluster is curve-line type; (a) Εline portion(C, n) of Εline-curve corner(C, n), (b) Εellipse portion(C, n) of Εline-curve corner(C, n), (c) Εline-curve corner(C, n), (d) result of line-curve detection, (e) Εline portion(C, n) of Εcurve-line corner(C, n), (f) Εellipse portion(C, n) of Εcurve-line corner(C, n), (g) Εcurve-line corner(C, n), (h) result of curve-line corner detection, (i) final result of round corner detection.

5.4.7 Main-Reference Corner Recognition

Among the corners satisfying the free parking space conditions within the perpendicular parking region of interest (ROI), the one nearest to the subjective vehicle is recognized as the main-reference corner. The main-reference corner denotes the corner that belongs to one of the two vehicles in contact with the free parking space and appears as an 'L'-shape in range data.

It is assumed that range data is acquired from a location where the driver would start perpendicular parking manually. Therefore, the target parking position is supposed to be within the ROI, which is established by the FOV (field of view) and a maximum distance. In the experiments, the FOV is set to 160° rearward and the maximum distance is set to 25 m.


Corners outside the ROI are regarded as irrelevant to the parking operation and are thus ignored. Figs. 5.46(a) and (b) show the results of corner detection and ROI application, respectively. Only the corners remaining within the ROI are examined in the following phases.


Fig. 5.46: Corners remaining within the ROI; (a) initially detected corners, (b) remaining corners after ROI application.
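The ROI test itself reduces to a distance and an angle check, as in the following sketch; the sensor origin at the subjective vehicle's left-rear end follows the text, while the orientation convention of the rearward axis is an assumption of this sketch.

```python
import math

def within_roi(corner_xy, fov_deg=160.0, max_distance=25.0):
    """Perpendicular-parking ROI: rearward FOV of 160 degrees and a maximum
    distance of 25 m, measured from the sensor origin at (0, 0)."""
    x, y = corner_xy
    if math.hypot(x, y) > max_distance:
        return False
    # angle of the corner relative to the rearward axis (+x assumed here)
    angle = math.degrees(math.atan2(y, x))
    return abs(angle) <= fov_deg / 2.0

# usage sketch: corners = [c for c in corners if within_roi(c.vertex)]
```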

By checking whether there is any object in the reverse direction of d1 and d2 of each corner, it is investigated whether the corner contacts free parking space (Fig. 5.47). Specifically, by finding clusters within a certain FOV (e.g., 90°) in the reverse direction of d1 and d2, respectively, two conditions are tested for each direction: whether there is any cluster within the distance of the vehicle width, and whether there is any cluster within the distance of the vehicle length. If the investigated corner contacts viable free parking space, a cluster should exist between the vehicle-width distance and the vehicle-length distance in one direction, but no cluster should exist within the vehicle-length distance in the other direction. If there is any cluster within the vehicle-width distance in either direction, or if there is no cluster within the vehicle-length distance in both directions, then the corner is determined not to be adjacent to viable free parking space. It is noteworthy that the proposed system regards the case of no cluster within the vehicle-length distance in both directions as an erroneous situation. Such a situation is caused either by sensing error or by a very wide parking space; in the latter case, the need for automatic parking assistance is expected to be so low that there is little reason to establish a target parking position, notwithstanding the possibility of sensing error.

During corner detection, in the case of a rectangular corner, d1 is set to the direction of the longer side and d2 to the direction of the shorter side. In the case of a round corner, d1 is set to the direction of the line portion and d2 to the direction of the longer axis of the ellipse. After the investigation of the free parking space conditions, d1 is set to the direction where no cluster exists and d2 to the direction where a cluster is found.

Fig. 5.47: Free parking space condition: Does the corner contact with viable free

parking space?
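The condition of Fig. 5.47 can be expressed as the following sketch. Here, nearest_cluster_distance is a hypothetical helper returning the distance from the corner vertex to the nearest cluster found within a 90° FOV around the given direction, or infinity if none exists, and the vehicle dimensions are illustrative values, not taken from the text.

```python
def contacts_free_space(corner, nearest_cluster_distance,
                        vehicle_width=1.9, vehicle_length=4.8):
    """Test whether a corner is adjacent to viable free parking space.

    corner.d1, corner.d2: the two side directions of the corner (unit vectors).
    nearest_cluster_distance(corner, direction): hypothetical helper, see above."""
    dist1 = nearest_cluster_distance(corner, -corner.d1)  # reverse direction of d1
    dist2 = nearest_cluster_distance(corner, -corner.d2)  # reverse direction of d2
    for near, far in ((dist1, dist2), (dist2, dist1)):
        # one direction: a cluster between vehicle-width and vehicle-length distance;
        # the other direction: no cluster within vehicle-length distance
        if vehicle_width < near <= vehicle_length and far > vehicle_length:
            return True
    # a cluster within vehicle-width distance in either direction, or no cluster
    # within vehicle-length distance in both directions, disqualifies the corner
    return False
```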


Fig. 5.48 shows the remaining corners after the investigation of the free parking space conditions. It can be observed that although many corners are detected initially, few corners satisfy the free parking space conditions. Fig. 5.49 shows the main-reference corner recognized by selecting the corner nearest to the subjective vehicle.


Fig. 5.48: Corners satisfying free parking space conditions.


Fig. 5.49: Recognized main-reference corner.

5.4.8 Subreference Corner Detection and Target Parking Position Establishment

Among the occlusions and corners located within a certain FOV (e.g., 90°) in the reverse direction of d2 of the main-reference corner, the nearest one is recognized as the subreference corner, which is used to establish the target parking position.

Vehicles or pillars adjacent to free parking space can have severely different coordinates in the depth direction. In such cases, the line connecting the main-reference corner and the subreference corner cannot be aligned with the direction of the desired target parking position. To solve this problem, two coordinates along the d1 axis are compared: that of the projection of the subreference corner onto a line passing through the vertex of the main-reference corner in the d1 direction, and that of the vertex of the main-reference corner itself, as shown in Fig. 5.50. The line located farther in the depth direction is used as the outward border of the target parking position. Fig. 5.50 shows the outward border established using the subreference corner and the main-reference corner.

Once the main-reference corner, subreference corner, and outward border are established, the target parking position can be established by locating the center of the width side of the target parking position at the center point between the main-reference corner and the subreference corner along the outward border. The target parking position is a rectangle with the same width and length as the subjective vehicle. Fig. 5.51 shows the finally established target parking position. Fig. 5.52 shows a case where the free parking space is located between a vehicle and a pillar, and the outward border is determined by the subreference corner projection.

Fig. 5.50: Subreference corner projection and outward border.


Fig. 5.51: Established final target parking position.

Fig. 5.52: A case when outward border is set by the projection of subreference corner.
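The construction of the outward border and the target position can be sketched as follows (NumPy). The vehicle length and the sign conventions for "farther in depth" and "into the parking space" are assumptions of this sketch; comparing the coordinates of the subreference projection and the main vertex along d1 is equivalent to projecting the subreference corner onto the line through the main vertex in the d1 direction.

```python
import numpy as np

def establish_target_position(main_vertex, main_d1, sub_vertex, vehicle_length=4.8):
    """Sketch of Section 5.4.8: outward border and target-position center.

    main_vertex, sub_vertex: 2D vertices of the main- and subreference corners.
    main_d1: unit vector d1 of the main-reference corner (depth direction)."""
    d1 = np.asarray(main_d1, float) / np.linalg.norm(main_d1)
    main_vertex = np.asarray(main_vertex, float)
    sub_vertex = np.asarray(sub_vertex, float)
    # outward border: the depth coordinate of whichever reference point lies farther
    border_depth = max(np.dot(main_vertex, d1), np.dot(sub_vertex, d1))
    # width-side center: midpoint of the two reference points, moved onto the border
    mid = (main_vertex + sub_vertex) / 2.0
    width_side_center = mid + (border_depth - np.dot(mid, d1)) * d1
    # the rectangle, with the subjective vehicle's width and length, extends from
    # the outward border into the parking space (-d1 is assumed to point inward)
    center = width_side_center - (vehicle_length / 2.0) * d1
    return width_side_center, center
```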

5.4.9 Extension to Parallel Parking and Sensor Price Problem

The proposed method is able to handle parallel parking situations. In parallel parking situations, a parking aid system can use the same corner detection as in perpendicular parking situations; it only needs to change the distance thresholds of the free parking space conditions and the orientation of the target parking position at the final stage. As the ultrasonic sensor already shows acceptable performance in parallel parking situations, this work focuses on the perpendicular parking situation, which is hard for the ultrasonic sensor to handle.

A major disadvantage of the proposed system might be the high price of the sensor. However, a solution can be expected in the near future. Recently, a

scanning laser radar company announced that the cost of scanning laser radars will

decrease rapidly until prices reach €380 in 2010 [46]. Furthermore, if the scanning

laser radar can integrate multiple system functions into one system, the system can

replace multiple sensors and multiple electronic control units (ECUs) with one single

sensor and a single ECU. Consequently, the total price of the whole system will be

lower and the scanning laser radar-based system will become a practical solution even

from the economic viewpoint [43], [47], [48]. Scanning laser radars for the

integration of multiple system functions have already been developed and are now

available in the market [49]. Furthermore, because range data from scanning laser

radar is more reliable and one-dimensional, the processing unit managing range data

from scanning laser radar will be simpler and cheaper than processing units managing

complex vision data and millimeter-wave radar signals. Therefore, low-cost

processing units and the convenience of algorithm development are expected to

compensate for the high cost of scanning laser radar, to some extent.


Chapter 6

Experimental Results

6.1 GUI-Based Method

To verify the efficiency of the proposed method, we measure the operation time and the clicking number, and then compare the drag and drop based method with the multiple-arrow based method. For garage parking, it is observed that the operation time is reduced by 17.6% and the clicking number by 64.2%. For parallel parking, the operation time is reduced by 29.4% and the clicking number by 75.1%.

6.1.1 Experimental Method

Before test, we briefly explain the operation instruction of two methods: the

drag and drop based method and multiple-arrow based method. The multiple-arrow

based method is similar to the user interface of the first generation Prius. There are 10

arrow buttons, 8 for translation and 2 for rotation. Every participant establishes target

positions for 8 situations by both methods. Of these, 4 situations are garage parking

and the other 4 situations are parallel parking. For each 4 situations, 2 situations are

tested in the bird’s eye view image and the other 2 situations are tested in the distorted

image. Fig. 6.1~ 6.4 show the situation 1~8.

A total of 50 volunteers participate in the test. The average age is 30.1, with a range of 22~42. Of these, 41 participants are male and 9 are female. Every participant conducts the test only once. The test order between the drag and drop based method and the multiple-arrow based method is mixed randomly.


Fig. 6.1: Garage parking cases in bird’s eye view image; (a) situation 1 with drag and

drop method, (b) situation 1 with arrows method, (c) situation 2 with drag and drop

method, (d) situation 2 with arrows method.


Fig. 6.2: Garage parking cases in distorted image; (a) situation 3 with drag and drop

method, (b) situation 3 with arrows method, (c) situation 4 with drag and drop

method, (d) situation 4 with arrows method.


Fig. 6.3: Parallel parking cases in bird’s eye view image; (a) situation 5 with drag and

drop method, (b) situation 5 with arrows method, (c) situation 6 with drag and drop

method, (d) situation 6 with arrows method.



Fig. 6.4: Parallel parking cases in distorted image; (a) situation 7 with drag and drop method, (b) situation 7 with arrows method, (c) situation 8 with drag and drop method, (d) situation 8 with arrows method.

6.1.2 Test Results

Table 6.1 shows the operation time average of 4 garage parking situations. It is

observed that the drag and drop based method reduces the operation time by 17.6%.

Table 6.2 shows the operation time average of 4 parallel parking situations. It is

observed that the drag and drop based method reduces the operation time by 29.4%.


Table 6.1: Operation time average of garage parking situations.

Situation No.   Drag and Drop (A)   Multiple arrow (B)   Enhancement (B-A)/B (%)
1               11.9                17.2                 30.6
2               11.6                12.7                  9.1
3               11.7                16.1                 27.5
4               11.2                11.5                  3.1
Average                                                  17.6

Table 6.2: Operation time average of parallel parking situations.

Situation No.   Drag and Drop (A)   Multiple arrow (B)   Enhancement (B-A)/B (%)
5               12.4                16.7                 25.3
6               12.5                18.8                 33.4
7               12.8                15.7                 18.4
8               11.2                18.9                 40.5
Average                                                  29.4

Table 6.3 shows the clicking number average of the 4 garage parking situations. It is observed that the drag and drop based method reduces the clicking number by 64.2%. Table 6.4 shows the clicking number average of the 4 parallel parking situations. It is observed that the drag and drop based method reduces the clicking number by 75.1%. Reduction of the clicking number means a reduction of repetitive operation. Many participants regard this point as the most important advantage of the proposed drag and drop method because repetitive clicking is a truly tedious job.


Table 6.3: Clicking number average of garage parking situations.

Situation No.   Drag and Drop (A)   Multiple arrow (B)   Enhancement (B-A)/B (%)
1               7.1                 24.0                 70.5
2               6.4                 16.7                 61.8
3               6.6                 20.9                 68.5
4               6.6                 15.0                 56.1
Average                                                  64.2

Table 6.4: Clicking number average of parallel parking situations.

Situation No.   Drag and Drop (A)   Multiple arrow (B)   Enhancement (B-A)/B (%)
5               7.9                 27.1                 70.8
6               6.5                 35.1                 81.6
7               9.1                 27.2                 66.5
8               7.2                 38.6                 81.5
Average                                                  75.1

It is noticeable that there is no tendency with respect to the view: there is no definite difference between the distorted image cases and the bird's eye view image cases. However, for parallel parking situations in the bird's eye view image, many participants complain about the low quality of the bird's eye view image.


6.2 Parking Slot Markings-Based Methods

6.2.1 Experimental Results of One Touch Type


Fig. 6.5: Case with adjacent vehicles and torn markings; (a) recognized target parking

slot, (b) direction refinement by edge following.

In the bird's eye view image, objects above the ground surface are projected outward from the camera. Therefore, as long as the guideline and the 'T'-shape junctions of the target parking slot are observed, the proposed method can successfully detect cross-points, related marking line-segments, and separating line-segments, as shown in Fig. 6.5. Fig. 6.5(a) is captured against the light and its markings are torn and noisy. In Fig. 6.5(b), edge-following based edge direction refinement overcomes the error of the initial direction estimation.

The separating line-segment detection method, which considers locally changing illumination conditions, can successfully detect the target parking slot even if local intensities differ from each other. Fig. 6.6 shows that Lseparating(s) can compensate for local intensity variation.


Fig. 6.6: Case with strong sunshine causing locally changing intensity: (a) recognized

target parking slot, (b) Ion(s) and Ioff(s), (c) Lseparating(s).

6.2.2 Experimental Results of Two Touch Type

The proposed method was applied to two types of parking slot marking, i.e., the rectangular type and the 11-shape type. Experiments under various situations showed that the proposed method can successfully establish the target parking position in practical usage.

Fig. 6.7 shows the recognized П-shape target patterns and the established target parking position in the case of the 11-shape type. Fig. 6.8 shows the recognized T-shape target patterns and the established target parking position in the case of the rectangular type.

Fig. 6.7: Established target parking position in the case of 11-shape type.

Fig. 6.8: Established target parking position in the case of rectangular type.

In particular, Fig. 6.9 shows the case when another marking line exists in front of the target parking position. This kind of situation cannot be solved by the one-touch type target parking position designation method. The two-touch type method can handle it because it uses only the local image around the seed-point designated by the driver. Similarly, the proposed method works even when a dark shadow is cast on the target pattern, as shown in Fig. 6.10. It also shows the effect of over-clustering during parking slot marking segmentation.

Fig. 6.9: Result when another marking line is drawn in front of the target parking position.

Fig. 6.10: Result when a dark shadow is cast near the target pattern.

Fig. 6.11 shows the case when the target position is far from the subjective vehicle. In such a case, the target pattern is generally distorted severely in the bird's eye view image. This disturbs the global image processing-based method because blurred and distorted parking slot markings cannot be extracted. Because the marking line separating the parking slot from the roadway, named the guideline, is the key to the one-touch type method, failure of guideline detection caused by a blurred line image is the major factor limiting the operation range of the one-touch type method.



Fig. 6.11: Result when the target position is far; (a) recognized target patterns,

(b) established target position, (c) recognition process of the first target pattern, (d)

recognition process of the second target pattern.

As shown in Figs. 6.11(c) and (d), the target patterns are severely distorted and blurred. Even in this case, the proposed method finds the marking lines by over-clustered segmentation and extracts the parking marking region successfully. As the region of the marking line is distorted, the skeleton corresponding to the region is expected to contain severe noise, as shown in Figs. 6.11(c) and (d). However, because the proposed method finds the optimal placement of the target pattern template using GA-based optimization, it can detect the target pattern successfully. Furthermore, as shown in Figs. 6.11(c) and (d), robustness and computational efficiency are increased by using cross-points of the skeleton as candidates for the target pattern center.

6.2.3 Experimental Results of Full-automatic Method

Even if the illumination is very dark and most of the parking slot markings are undetectable, the guideline tends to appear as a long and distinguishable edge pair. Therefore, the guideline can be reliably detected. Once the guideline is recognized, dividing line-segment recognition using a weak condition can detect dividing marking line-segments that are missed by the marking line-segment recognition. Fig. 6.12 shows an example recognition result.



Fig. 6.12: Case study with occlusion by adjacent vehicles; (a) input image, (b)

distortion of adjacent vehicles, (c) peak pair detection removes noise from adjacent

vehicles, (d) recognized guideline in spite of mis-detected line-segments, (e)

additionally recognized dividing marking line-segment.

The proposed method for the recognition of parking slot markings successfully operates in situations where the adjacent slots are occupied by vehicles. Obstacles, including adjacent vehicles, generate severe distortions in a radial manner in the bird's eye view image. The reason is that the bird's eye view image is constructed using a planar surface assumption, and obstacles above the ground surface do not satisfy this constraint. In other words, obstacles that do not belong to the ground surface are projected as if drawn at a farther location. In particular, a bright object against a dark background will be stretched long and create a distortion that can be confused with a marking line-segment. The proposed method alleviates these distortions in two phases:

(1) In the edge image of the bird's eye view image, a marking line-segment appears as a parallel line-segment pair with a constant between-distance. Therefore, distortions that do not satisfy this constraint are removed through peak pair detection in Hough space and the line-segment recognition.

(2) If vehicles are normally parked in the adjacent slots beside the target free slot, the distortions in the bird's eye view image caused by these vehicles are supposed to be located beyond the guideline along the perspective direction. Consequently, in spite of the adjacent vehicles' distortions, the proposed method can detect the guideline successfully. Once the guideline is recognized correctly, 'T'-shape junction searching can detect the dividing marking line-segments and removes the remaining noise.


6.3 Free Space-Based Methods

6.3.1 Experimental Results of Binocular Stereo-Based Method

Fig. 6.13 shows an example of a target parking space established by binocular stereo. Fig. 6.13(c) shows the result on the bird's eye view of the parking site markings, and Fig. 6.13(d) projects the result onto the bird's eye view of the input image. Because the search range is narrowed by the obstacle depth map, template matching successfully detects the correct position in spite of stains, blurring, and shadow. Furthermore, template matching, which is the bottleneck of the localization process, consumes little time. The total computational time on a 1 GHz PC is about 400~500 msec.


Fig. 6.13: Detected parking site; (a) left input image, (b) right input image, (c) result

on parking site markings, (d) result on input image.


6.3.2 Experimental Results of LSP-Based Method

Fig. 6.14 shows that the proposed method successfully recognizes free parking space even when the nearer side of the parking space is occupied by a vehicle. Fig. 6.15 shows that the proposed method successfully recognizes free parking space between vehicles in spite of a dark illumination condition. In particular, it is noticeable that although the vehicle on the farther side of the free parking space is black, the proposed method can detect the light stripe on the vehicle's corner and side surface and successfully designate the target parking position.

Fig. 6.14: Target position when neighboring side is a vehicle.

Fig. 6.15: Recognized target position between vehicles.


Fig. 6.16 shows the case when the depth of the adjacent parked vehicle is considerably different from the depth of the adjacent pillar. Because the proposed method estimates the side line of the opposite-side object and establishes the opposite-side reference point by projecting the pivot onto that line, it can still successfully designate the target position.

Fig. 6.16: Recognized target position when adjacent objects have a large difference in

depth.

6.3.3 Experimental Results of Motion Stereo-Based Method

The proposed system was tested in 154 different parking situations. From the database, 53 sequences were taken with laser scanner data and 101 sequences were taken without it. The image sequences were acquired with a fisheye camera whose horizontal and vertical fields of view were about 154° and 115°, respectively. The image resolution and frame rate were 1024×768 pixels and 15 fps, respectively. We analyzed the results in terms of success rate and detection accuracy. For the success rate, a manual check was performed to determine whether the detected space was located inside the free space. For the detection accuracy, the errors of the estimated corner point and orientation of the adjacent vehicle were measured by using laser scanner data. The experimental results consist of three parts. First, we compare the reconstructed structures when using and without using the proposed feature selection and 3D mosaicking methods. Second, the successes and failures of the system are discussed. Third, the accuracies of the estimated corner point and orientation are presented.

1) Comparison of the Reconstructed Structures

In this experiment, we reconstructed the 3D rearview structures when using and without using the proposed feature selection and 3D mosaicking methods and compared them to the laser scanner data. The laser scanner was the SICK LD-OEM1000 [89]. Its angular resolution and depth resolution are 0.125° and 3.9 mm, respectively, and its systematic error is ±25 mm. Fig. 6.17 shows the fisheye camera and the laser scanner mounted on the automobile. These two sensors were pre-calibrated.


Fig. 6.17: Fisheye camera and laser scanner mounted on the automobile.

Two comparison results are shown in Fig. 6.18. The reconstructed structures are depicted as seen from the top after removing the points near the ground plane. Fig. 6.18(a) shows the last frames of two image sequences, and the points on the vehicle indicate the locations of the epipoles. Figs. 6.18(b) and (c) show the reconstructed rearview structures when using and without using the proposed method, respectively; the blue and red points indicate the reconstructed points and the laser scanner data, respectively.


Fig. 6.18: 3D reconstruction accuracy; (a) the last frames of two image sequences, (b)

reconstructed structures when using the proposed method, (c) reconstructed structures

without using the proposed method.

This comparison reveals three advantages of the proposed feature selection and 3D mosaicking methods. First, they reduce the number of erroneously reconstructed points. The structures in Fig. 6.18(c) show more erroneous points outside the ground truth data than those in Fig. 6.18(b), because the proposed method removes the point correspondences near the epipole and far from the camera center. Second, they increase the amount of information about adjacent vehicles. The structures in Fig. 6.18(b) are more detailed than those in Fig. 6.18(c) because the density of the points on the adjacent vehicles is increased by mosaicking several 3D structures. Third, they enhance the metric recovery results. In Fig. 6.18(c), the scales of the reconstructed structures differ from the ground truth; the proposed method produces more points on the ground plane, which makes the ground plane estimation, and hence the recovered scale, more accurate.

2) Free Parking Space Detection Results

The proposed system was applied to 154 real image sequences taken in various situations. The ground planes were covered with asphalt, soil, snow, standing water, parking markers, etc. The automobiles varied in color from dark to bright and included sedans, SUVs, trucks, vans, buses, etc. The environment included various types of buildings, vehicles, trees, etc. Fig. 6.19 shows six successful examples. In this figure, the detected parking spaces are depicted on the last frames of the image sequences and the corresponding rearview structures. To decide whether the system succeeded, we displayed the detected free parking space on the last frame of the image sequence. If it was located inside the free space between two adjacent vehicles, the result was considered a success. In this way, the system succeeded in 139 situations and failed in 15 situations, so the success rate was 90.3%.



Fig. 6.19: Successful detection examples; (a), (b), (c), (d), (e), and (f) are the free

parking spaces on the last frames of the image sequences and corresponding rearview

structures. (0, 0) indicates the camera center.

Fig. 6.20 shows four types of failures. In Fig. 6.20(a), the sun was strongly reflected on the surface of the adjacent vehicle and the ground plane, so feature point tracking failed. In Fig. 6.20(b), the adjacent vehicle was very dark and located in a shadowy region, so few feature points were detected and tracked on the automobile surface. In Fig. 6.20(c), the free parking space was very far from the camera, so the side of the white car was more precisely reconstructed than that of the silver van, which caused a false detection. In Fig. 6.20(d), part of the ground plane (the darker region) in the parking space had been repaved with asphalt, so the ground plane was not flat. This made the ground plane estimation erroneous. Of the fifteen failures, three correspond to Fig. 6.20(a), nine to Fig. 6.20(b), two to Fig. 6.20(c), and one to Fig. 6.20(d).


Fig. 6.20: Four types of failures; (a) reflected sunlight. (b) dark vehicle under a

shadowy region. (c) far parking space. (d) uneven ground plane.

3) Accuracy of Adjacent Vehicle Detection

Since the free parking space detection result depends on the estimation of the corner point and orientation, we calculated the errors of these two values for the accuracy evaluation. The ground truth of the corner point and orientation was manually obtained by using laser scanner data. The error of the corner point is the Euclidean distance from the estimated point to the measured point, and the error of the orientation is the absolute difference between the estimated angle and the measured angle. For this evaluation, 47 image sequences and the corresponding laser scanner data were used, because 6 image sequences among the 53 failed to detect free parking spaces due to the reasons mentioned in the previous section. The corner point and the orientation of the adjacent vehicle were estimated 10 times for each image sequence, because the reconstructed structure can differ slightly every time due to the parameter estimation results.

In Fig. 6.21, the corner point error and the orientation error are depicted as histograms. The average and maximum errors of the corner point were 14.9 cm and 42.7 cm, respectively. The distance between the corner point and the camera center was between 281.4 cm and 529.2 cm. Since the lateral distance between two adjacent vehicles is approximately between 280 cm and 300 cm in a usual parking situation, there is about 50 cm of extra room on each side of the vehicle. This means that even the maximum error of the corner point is acceptable for free parking space localization. The average and maximum errors of the orientation were 1.4° and 7.7°, respectively. The average error of the orientation was acceptable, but the maximum error was somewhat large. This is because the side surfaces of automobiles sometimes yield few corresponding points due to featurelessness, which makes the orientation estimation difficult. This evaluation shows that the proposed system produces acceptable results for detecting free parking spaces. For the worse cases, we are planning to refine the detection results while the automobile moves backward into the parking space.


Fig. 6.21: Accuracy evaluation results; (a) corner point error histogram (x-axis in cm, mean 14.9 cm), (b) orientation error histogram (x-axis in degrees, mean 1.4°); vertical axes show frequency counts.

Compared to previous work, the proposed motion stereo-based method makes

three contributions. First, we solved the serious degradation of 3D structures near the

epipole. Second, we presented an efficient method for detecting free parking spaces in

3D point clouds. Third, our system did not use odometry due to its unreliability. In the

experiments, our system showed a 90.3% success rate and the accuracy evaluation

showed that this system produced acceptable results.


6.3.4 Experimental Results of Scanning Laser Radar-Based Method

The authors installed a scanning laser radar (SICK LD-OEM) on the left side of the experimental vehicle's rear end, as shown in Fig. 6.22. A brief specification of this sensor is as follows: the field of view is 360°, the angular resolution is 0.125°, the range resolution is 3.9 mm, the maximum range is 250 m, the data interface is controller area network (CAN), and the laser class is 1 (eye-safe). The authors also installed two cameras for rearview and side-view, respectively, as shown in Fig. 6.22, to record the experimental situation. These cameras were used only for analysis.
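For reference, a scan from such a sensor can be turned into Cartesian points for subsequent processing. The sketch below is a minimal illustration, assuming the scan has already been decoded from the CAN interface into bearing/range arrays; the range values here are placeholders.

    import numpy as np

    # Hypothetical decoded scan: bearings at the sensor's 0.125 degree steps.
    bearings_deg = np.arange(0.0, 360.0, 0.125)
    ranges_m = np.full(bearings_deg.shape, 5.0)   # placeholder range readings

    # Polar-to-Cartesian conversion in the sensor frame.
    theta = np.deg2rad(bearings_deg)
    points = np.column_stack((ranges_m * np.cos(theta), ranges_m * np.sin(theta)))

    # Keep only returns within the specified maximum range (250 m).
    points = points[ranges_m <= 250.0]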

Fig. 6.22: Sensor installation.

Experiments focused on situations that the earlier-mentioned systems could not handle, including daytime/nighttime, outdoors/indoors, and conditions affected by the sun. We tested our system in 112 situations and confirmed that it was able to designate the target parking position at the desired location in 110 of them. Therefore, the recognition rate is 98.2%. The average processing time on a PC with


a 1 GHz operating frequency is 615.3 ms. To show the feasibility of the system, typical situations are described in the subsequent paragraphs with illustrative rearview images. The designated target parking position is depicted by a rectangle on the range data.

Fig. 6.23 shows an outdoor situation in cloudy weather. The acquired image was dark overall, and the clouds reflected on the vehicle surfaces would make vision-based feature matching difficult. Nevertheless, the recognition results show that the proposed system designated the target parking position successfully and was not affected by the bad weather conditions. It is noteworthy that although the range data of the vehicle at the right side of the free parking space was severely distorted near the vertex because of a headlamp, the developed round corner detection recognized the round corner precisely despite this condition.

Fig. 6.23: Cloudy day at an outdoor parking lot.

Fig. 6.24 shows a case in which the range data at the left side of the free parking space was noisy and disconnected near the tires. As the proposed round corner detection uses only the front cluster of the vehicle's range data, it recognized the round corner properly. Furthermore, although the two vehicles adjacent to the free parking space had considerably different locations in depth, the proposed outward border method established the target parking position at a reasonable location in depth.

Fig. 6.24: Case when range data is disconnected and noisy (range data plotted in meters).

Figs. 6.25 and 6.26 show cases in which there are various objects around the free parking space. Objects such as a building, stairs, a tree, and a flower garden complicate image understanding and add extra complexity to vision-based object recognition. In particular, the repetitive patterns of the building, fans, windows, and trees confuse vision-based feature matching. Furthermore, 3D information belonging to objects unrelated to the free parking space causes model-based vehicle recognition to fail. The proposed method was able to focus on the vehicle and the pillar by ignoring range data clusters with high fitting errors in the corner detection phase.
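A minimal sketch of this cluster-rejection step is given below. It assumes each cluster is an array of 2D range points and uses a straight-line total-least-squares fit as a stand-in for the actual corner model; the 5 cm threshold is illustrative, not the value used in the system.

    import numpy as np

    def line_fit_rms_error(cluster_xy):
        # RMS perpendicular distance from the points to their total-least-squares line.
        pts = np.asarray(cluster_xy, dtype=float)
        centered = pts - pts.mean(axis=0)
        _, _, vt = np.linalg.svd(centered, full_matrices=False)
        normal = vt[-1]   # direction of smallest variance = line normal
        return float(np.sqrt(np.mean((centered @ normal) ** 2)))

    def keep_low_error_clusters(clusters, max_rms_m=0.05):
        # Ignore clusters whose fitting error is too high (e.g., trees, stairs, gardens).
        return [c for c in clusters if line_fit_rms_error(c) <= max_rms_m]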

Fig. 6.25: Case when the adjacent vehicles have different depth positions and there are various objects around the free parking space.

Fig. 6.26: Outdoor parking lot in cloudy weather surrounded by various objects.

Fig. 6.27 shows a situation against the sun, which has been one of the most difficult challenges for vision-based methods. The image captured by the camera was saturated to white in some areas and to black in others owing to the sun's glare. Furthermore, the sun made strong reflections on the vehicles' surfaces, whose shape and location would change with the viewing position. Even in such a situation, the proposed method had no problem because the scanning laser radar was not affected by the visual conditions and the resulting range data was not degraded. It is noteworthy that although the vehicle at the right side of the free parking space had a round front-end, the proposed round corner detection successfully recognized it.

Fig. 6.27: Case against the sun and vehicle with round front-end.

Figs. 6.28 and 6.29 show the results of target parking position designation when a viable free parking space was located next to a pillar in an underground parking lot: in Fig. 6.28 the space lies between a vehicle and a pillar, and in Fig. 6.29 between a pillar and a vehicle. Even in these cases, the proposed system used the same algorithm and successfully designated the target parking positions.


Fig. 6.28: Free parking space is between vehicle and pillar in underground parking lot.

Fig. 6.29: Free parking space is between pillar and vehicle in underground parking lot.

Fig. 6.30 shows a case wherein the adjacent vehicle in a dark underground parking lot was black. In such a case, vision-based feature detection on the black vehicle would be very hard, and light stripe detection in the light projection method would be error-prone. In particular, because the scanning laser radar was installed at the left side of the subject vehicle's rear end, the incident angle of the laser beam on the side surface of the right vehicle was very large. Nevertheless, the scanning laser radar could acquire range data sufficient for the recognition of the free parking space.

Fig. 6.30: Case with black vehicle in underground parking lot.

Figs. 6.31 and 6.32 show cases wherein one of the two parked vehicles is a light truck and a large truck, respectively. As the vehicles adjacent to a free parking space can belong to various types such as sedan, sport utility vehicle (SUV), van, light truck, and large truck, vision-based model matching cannot help but be complicated. The proposed system could designate the target parking position because it does not consider the appearance of the vehicle. Furthermore, as every large truck is required to install a guard rail on its lower part for safety reasons, the proposed system can recognize the contour of a large truck.


Fig. 6.31: Free parking space is between a light truck and a van.

Fig. 6.32: Free parking space is between a sedan and a large truck. The background includes a flower garden and an apartment building with a repetitive pattern.

Fig. 6.33 shows one of the two failed cases. The major reason the proposed system failed to designate the target parking space here was a violation of the basic assumption that the main reference corner appears as an 'L'-shape in the range data. The arrow from the coordinate system origin depicts the ray direction from the scanning laser radar to the main reference corner, and the dotted ellipse depicts the region without range data corresponding to the front of the parked vehicle. As the incident angle of the laser beam upon the front surface was almost 90°, the scanning laser radar could acquire no range data from that surface. However, such a situation is very rare; only two cases occurred among the 112 test situations.

Fig. 6.33: One of two failed situations. As the laser beam direction was almost

perpendicular to the normal direction of the parked vehicle’s front, corresponding

range data could not be acquired.
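The grazing-incidence condition behind this failure can be checked geometrically; the following is a minimal sketch, with illustrative 2D vectors for the beam direction and the surface normal.

    import numpy as np

    def incidence_angle_deg(beam_dir, surface_normal):
        # Angle between the laser ray and the surface normal:
        # 0 deg means a head-on return; near 90 deg means grazing, little or no return.
        b = np.asarray(beam_dir, dtype=float)
        n = np.asarray(surface_normal, dtype=float)
        cos_ang = abs(b @ n) / (np.linalg.norm(b) * np.linalg.norm(n))
        return float(np.degrees(np.arccos(np.clip(cos_ang, 0.0, 1.0))))

    # Beam almost parallel to the parked vehicle's front (normal along +y).
    print(incidence_angle_deg((1.0, 0.05), (0.0, 1.0)))   # about 87 degrees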


Chapter 7

Conclusion

7.1 Contributions

The contributions of this dissertation can be summarized as follows:

1) Our achievements cover almost every approach to target position designation for the perpendicular parking situation except the infrastructure-based method. As each achievement is creative and shows better performance than competitors', we expect them to establish a bridgehead for participation in the next-generation market.

2) As the drag and drop interface-based method and the semi-automatic markings recognition-based method are not only original but also practical, they are expected to give us a competitive edge in the near future. In particular, as these two approaches do not require additional hardware and can be ported to a microprocessor-grade ECU (Electronic Control Unit), they can be applied to products currently under development. The drag and drop interface was compared with an arrow button-based method; it was observed that the drag and drop-based method reduces the operation time by 17.6% and 29.4% for the garage and parallel parking situations, respectively. The semi-automatic markings recognition-based methods were originally proposed by us and showed that using the driver's input can enormously reduce the required computational power and memory.

3) Although the price problem still needs time to resolve, the scanning laser radar-based solution shows the best performance for the perpendicular parking situation; we tested our system in 112 situations and confirmed that it was able to designate the target parking position at the desired location in 110 of them. The recognition rate is 98.2%, and the average processing time on a PC with a 1 GHz operating frequency is 615.3 ms. Furthermore, the technology can be extended to ISRSS (Integrated Side and Rear Safety System), which performs four functions: perpendicular parking, parallel parking, BSD (Blind Spot Detection), and RCW (Rear Collision Warning) [43].

4) The light stripe projection-based method provides an economical solution for underground parking lots, which have been a headache for vision-based approaches. In particular, as the developed method can be implemented just by installing an NIR (Near Infra-Red) light plane generator module, it can be easily integrated into the current system configuration.

5) The developed motion stereo-based method identifies the cause of the degradation of 3D information reconstructed by motion stereo. Furthermore, it solves the problem by eliminating erroneous 3D information and mosaicking consecutive results. The proposed system was applied to 154 real image sequences taken in various situations. The system succeeded in 139 situations; the recognition rate was 90.3%. For 47 image sequences, laser scanner data was recorded and the accuracy of the established target position was verified; the average and maximum errors of the corner point were 14.9 cm and 42.7 cm, and the average and maximum errors of the orientation were 1.4° and 7.7°, respectively.

6) The full-automatic markings recognition-based method and the binocular stereo-based method offer a potential solution for a full-automatic parking assistant system. In particular, the developed binocular stereo-based method shows the possibility of an approach that simultaneously uses error-containing disparity information and parking slot markings.

7.2 Future Work

Although the achievements mentioned above can provide the basis for our own parking assistant system, much remains to be done for practical application:

1) In order to guarantee the robustness of monocular vision-based methods utilizing the homography between the captured image and the ground surface, self-calibration of the rearview camera should be devised. The tilt angle and height of the camera can be changed by passengers, loads, and tire air pressure.

2) For a full-automatic parking assistant system, a reliable mechanical measure that evaluates the suitability of a recognized parking slot from the viewpoint of path planning and tracking should be devised.

3) In order to apply the scanning laser radar-based, binocular stereo-based, and light stripe projection-based methods to actual systems, economical and reliable optical components and camera modules are essential.

4) Currently, the bottleneck of the motion stereo-based method, that is, the LKT tracking algorithm, is being implemented in VHDL (VHSIC Hardware Description Language). The implementation will be ported to an FPGA (Field Programmable Gate Array), and the real-time implementation of the developed motion stereo-based method will be tested. However, the dark illumination and specular surface problems should be solved before commercial application.

5) With respect to the binocular stereo and motion stereo-based methods, silhouette-based methods or region-based stereo should be investigated to avoid the specular surface problem.

6) As parking management systems that sense vacant slots and inform the driver of them start to be adopted by large shopping malls, the integration of the parking assistant system with the parking management system should be researched.

7.3 Prospects and Strategy

According to published papers and news, a prospective diagram of the parking assistant system can be depicted as in Fig. 7.1. The horizontal axis is time and the vertical axis is the automation level. Related technologies are grouped into three categories: monitoring-related, parking slot markings-related, and free space-related.

First, the rearview monitoring camera will spread widely [90], and the newly adopted surround monitoring system will compete with it. However, until the surround monitoring system integrates obstacle information into the displayed image, it cannot become popular because of the distortion of obstacles in the bird's eye view and its higher cost.

Second, the HMI-based method will be in service until automatic target designation methods become robust and cost-effective. Because the HMI-based method uses a rearview camera and a touch screen monitor, it can easily be replaced with semi-automatic parking slot markings recognition-based methods. As the computing power of the ECU grows and the recognition algorithms become robust, the semi-automatic parking slot markings recognition-based method will surely be replaced by the full-automatic parking slot markings recognition-based method.

Third, the ultrasonic sensor-based method will be dominant for some time among the free-space-between-vehicles-based methods. In particular, existing ultrasonic sensors and those under development are thought to satisfy the performance requirements of the parallel parking situation [4], [5], [16], [27]. By contrast, it is uncertain which technologies will be used in the next generation and how many users need precise free space recognition technologies. Traditional passive vision-based methods, i.e., binocular or motion stereo-based, must find solutions for harsh illumination conditions, specular vehicle surfaces, and vehicles painted black. Comparatively, active vision-based methods are expected to overcome their pitfalls in the near future [46]. Light stripe projection might be a temporary solution for underground parking lots.

As mentioned in the first chapter of this dissertation, no single method is almighty for now and the future. Therefore, proper combinations of various technologies should be devised to cope with the evolving market situation. I would like to suggest a four-step evolutionary strategy as below:

1) Ultrasonic sensor-based free space recognition + rearview monitoring camera.

2) Ultrasonic sensor-based free space recognition for parallel parking + HMI-based and semi-automatic parking slot markings recognition-based methods.

3) Scanning laser radar-based free space recognition + HMI-based and semi-automatic parking slot markings recognition-based methods.

4) Scanning laser radar-based free space recognition + HMI-based and full-automatic parking slot markings recognition-based methods.


Fig. 7.1: Prospective diagram of parking assistant system. The horizontal axis is time (the present time, the near future, the next generation); the vertical axis is the automation level. Depicted technologies: rearview monitoring camera and surround monitoring system (later with obstacle information); HMI-based method, semi-automatic and full-automatic parking slot markings recognition-based methods; and ultrasonic sensor-based, binocular or motion stereo-based, light stripe projection-based, and scanning laser radar-based free space recognition.


Fig. 7.2 shows the state diagram of the target position designation procedure from the driver's perspective. If the system employs the four methods, the driver could establish the target position using the ultrasonic sensor for the parallel parking situation and using the semi-automatic markings recognition-based or drag and drop interface-based method for the garage parking situation. If the parking lot is underground, the light stripe projection-based method could be used. As the three free space-based methods (motion stereo-based, binocular stereo-based, and light stripe projection-based) are aimed at the same situation, they are depicted in one ellipse. Among them, the light stripe projection-based method shows the most promise. Fig. 7.2 shows not only the complementary aspect but also the evolutionary aspect: ultimately, the scanning laser radar-based method will take over the free space-based methods, and the full-automatic markings recognition-based method will be employed generally.

Fig. 7.2: Combination and evolutionary strategy.


Bibliography

[1] Randy Frank, “Sensing in the Ultimately Safe Vehicle”, SAE Paper No.: 2004-

21-0055.

[2] Tori Tellem, “Top 10 High-Tech Car Technologies”,

http://www.edmunds.com/reviews/list/top10/114984/article.html, Jul. 23, 2007.

[3] Yuri Kageyama, “Look, no hand! New Toyota parks itself”,

http://www.cnn.com, Jan. 14, 2004.

[4] Tim Moran, “Self-parking technology hits the market”, Automotive News, Vol. 8

Issue 6227, 22-22, Oct. 2006.

[5] Tim Moran and Jens Meiners, “The arrival of problem-free parking”,

Automotive News Europe, Vol. 11 Issue 24, 8-8, Nov. 2006.

[6] Masayuki Furutani, “Obstacle Detection Systems for Vehicle Safety”, SAE

Paper No.: 2004-21-0057.

[7] Shoji Hiramatsu, Akihito Hibi, Yu Tanaka, Toshiaki Kakinami, Yoshifumi Iwata,

and Masahiko Nakamura, “Rearview Camera Based Parking Assist System with

Voice Guidance”, SAE Paper No.: 2002-01-0759.

[8] Nobuyuki Ozaki, “Overhead-view Surveillance System for Driving Assistance”,

12th World Congress on Intelligent Transport Systems, Nov. 2005.

[9] Hiroaki Shimizu and Hirohiko Yanagawa, “Overhead View Parking Support

System”, DENSO Technical Review, Vol. 11, Issue 1, 2006.

[10] Clinton Deacon, “New Mercedes CL Class: Parking Assistance Technology”,

http://www.worldcarfans.com, Aug. 29, 2006.

[11] Masaki Wada, Xuchu Mao, Hideki Hashimoto, Mami Mizutani, and Masaki


Saito, “iCAN: Pursuing Technology for Near-Future ITS”, IEEE Intelligent

Systems, Vol. 19, Issue 1, 18-23, Jan.-Feb. 2004.

[12] Masaki Wada, Kang Sup Yoon, and Hideki Hashimoto, “Development of

Advanced Parking Assistance System”, IEEE Transactions on Industrial

Electronics, Vol. 50, Issue 1, 4-17, Feb. 2003.

[13] Wei Chia Lee and Torsten Bertram, “Driver Centered Design of an Advanced

Parking Assistance”, 5th European Congress and Exhibition on Intelligent

Transportation Systems and Services, Jun. 1-3, 2005.

[14] Yoshirou Tsuruhara, “Honda Develops Park Assist System”, Nikkei Automotive

Technology, Sep. 29, 2006.

[15] J. Pohl, M. Sethsson, P. Degerman, and J. Larsson, “A Semi-automated Parallel

Parking System for Passenger Cars”, Proceedings of Institution of Mechanical

Engineers, Part D: Journal of Automobile Engineering Vol. 220, Issue 1, 53-65,

Jan. 2006.

[16] Carten Heilenkötter, Norbert Höver, Peter Magyar, Thomas Ottenhues, Tilmann

Seubert, and Joachim Wassmuth, “The Consistent Use of Engineering Methods

and Tools”, Auto Technology, 6/2007, 52-55.

[17] Ho Gi Jung, Chi Gun Choi, Pal Joo Yoon, and Jaihie Kim, “Semi-Automatic

Parking System Recognizing Parking Lot Markings”, The 8th International

Symposium on Advanced Vehicle Control (AVEC’06), 947-952, Aug. 20-24,

2006.

[18] Chi Gun Choi, Dong Suk Kim, Ho Gi Jung, and Pal Joo Yoon, “Stereo Vision

Based Parking Assist System”, SAE Paper No.: 2006-01-0571.


[19] Ho Gi Jung, Chi Gun Choi, Dong Suk Kim, and Pal Joo Yoon, “System

Configuration of Intelligent Parking Assistant System”, 13th World Congress on

Intelligent Transportation Systems and Services, Oct. 8-12, 2006.

[20] Alexander Schanz, “Fahrerassistenz zum automatischen Parken”,

http://www.gyrosmafia.de/cms/front_content.php?idcat=74&idart=381, Jul. 23,

2007.

[21] Tracy Dawson, “Safe Parking Using the BMW Remote Park Assist”,

http://www.buzzle.com, Aug. 18, 2006.

[22] Ho Gi Jung, Chi Gun Choi, Pal Joo Yoon, and Jaihie Kim, “Novel User

Interface for Semi-automatic Parking Assistance System”, 31st FISITA World

Automotive Congress, Oct. 22-27, 2006.

[23] Jin Xu, Guang Chen, and Ming Xie, “Vision-Guided Automatic Parking for

Smart Car”, Proceedings of the IEEE Intelligent Vehicle Symposium 2000, 725-

730, Oct. 3–5, 2000.

[24] Yu Tanaka, Mitsuyoshi Saiki, Masaya Katoh, and Tomohiko Endo,

“Development of Image Recognition for a Parking Assist System”, 13th World

Congress on Intelligent Transportation Systems and Services, Oct. 8–12, 2006.

[25] Ho Gi Jung, Dong Suk Kim, Pal Joo Yoon, and Jaihie Kim, “Structure Analysis

Based Parking Slot Marking Recognition for Semi-automatic Parking System”,

Lecture Note in Computer Science Vol. 4109, 384-393, Aug. 2006.

[26] Ho Gi Jung, Dong Suk Kim, Pal Joo Yoon, and Jaihie Kim, “Parking Slot

Markings Recognition for Automatic Parking Assist System”, IEEE Intelligent

Vehicles Symposium 2006, 106–113, Jun. 13-15, 2006.


[27] Hisashi Satonaka, Masato Okuda, Syoichi Hayasaka, Tomohiko Endo, Yu

Tanaka, and Toru Yoshida, “Development of Parking Space Detection Using an

Ultrasonic Sensor”, 13th World Congress on Intelligent Transportation Systems

and Services, Oct. 8-12, 2006.

[28] Pär Degerman, Jochen Pohl, and Magnus Sethson, “Hough Transform for

Parking Space Estimation Using Long Range Ultrasonic Sensors”, SAE Paper

No.: 2006-01-0810.

[29] Pär Degerman, Jochen Pohl, and Magnus Sethson, “Ultrasonic Sensor Modeling

for Automatic Parallel Parking Systems in Passenger Cars”, SAE Paper No.:

2007-01-1103.

[30] Stefan Görner and Hermann Rohling, “Parking Lot Detection with 24GHz

Radar Sensor”, 3rd International Workshop on Intelligent Transportation, Mar.

14-16, 2006.

[31] Asako Hashizume, Shinji Ozawa, and Hirohiko Yanagawa, “An Approach to

Detect Vacant Parking Space in a Parallel Parking Area”, 5th European

Congress and Exhibition on Intelligent Transportation Systems and Services,

Jun. 1-3, 2005.

[32] Katia Fintzel, Reny Bendahan, Christophe Vestri, and Sylvian Bougnoux, “3D

Vision System for Vehicle”, Proceedings of the IEEE Intelligent Vehicle

Symposium 2003, Jun. 9-11, 2003.

[33] Katia Fintzel, Reny Bendahan, Christophe Vestri, and Sylvian Bougnoux, “3D

Parking Assistant System”, 2004 IEEE Intelligent Vehicles Symposium, Jun. 14-

17, 2004.


[34] Christophe Vestri, Sylvian Bougnoux, Reny Bendahan, Katia Fintzel, Seba

Wybo, Fred Abad, and Toshiaki Kakinami, “Evaluation of a Vision-Based

Parking Assistance System”, Proceedings of the 8th International IEEE

Conference on Intelligent Transportation Systems, Sep. 13-16, 2005.

[35] Christophe Vestri, Sylvian Bougnoux, Reny Bendahan, Katia Fintzel, Seba

Wybo, Fred Abad, and Toshiaki Kakinami, “Evaluation of a Point Tracking

Vision System for Parking Assistance”, 12th World Congress on ITS, Nov. 6-10,

2005.

[36] Jae Kyu Suhr, Kwanghyuk Bae, Jaihie Kim, and Ho Gi Jung, “Free Parking

Space Detection Using Optical Flow-based Euclidean 3D Reconstruction”,

International Association of Pattern Recognition (IAPR) Conference on

Machine Vision Application, May 16-18, 2007.

[37] Nico Kaempchen, Uwe Franke, and Rainer Ott, “Stereo Vision Based Pose

Estimation of Parking Lots Using 3D Vehicle Models”, 2002 IEEE Intelligent

Vehicle Symposium, 459–464, Vol. 2, Jun. 17-21, 2002.

[38] Ho Gi Jung, Dong Suk Kim, Pal Joo Yoon, and Jaihie Kim, “3D Vision System

for the Recognition of Free Parking Site Location”, International Journal of

Automotive Technology, Vol. 7, No. 3, 361-367, May 2006.

[39] Ho Gi Jung, Dong Suk Kim, Pal Joo Yoon, and Jaihie Kim, “Light Stripe

Projection based Parking Space Detection for Intelligent Parking Assist System”,

Proceedings of the 2007 IEEE Intelligent Vehicle Symposium, Jun. 13-15, 2007.

[40] Alexander Schanz, Andreas Spieker, and Klaus-Dieter Kuhnert, “Autonomous

Parking in Subterranean Garages—A Look at the Position Estimation”,


Proceedings of 2003 IEEE Intelligent Vehicle Symposium, 253-258, Jun. 9-11,

2003.

[41] Uwe Regensburger, Alexander Schanz, and Thomas Stahs, “Three-dimensional

perception of environment”, U.S. Patent No.: US 7,230,640, Date of Patent: Jun.

12, 2007.

[42] Christopher Tay Meng Keat, Cédric Pradalier, and Christian Laugier, “Vehicle

Detection and Car Park Mapping Using Laser Scanner”, 2005 IEEE/RSJ

International Conference on Intelligent Robots and Systems, 2054-2060, Aug. 2-

6, 2005.

[43] Ho Gi Jung, Young Ha Cho, Pal Joo Yoon, and Jaihie Kim, “Integrated

Side/Rear Safety System”, 11th European Automotive Congress, May 30-Jun. 1,

2007.

[44] Kay Ch. Fuerstenberg, Dirk T. Linzmeier, and Klaus C. J. Dietmayer,

“Pedestrian Recognition and Tracking of Vehicles using a Vehicle based

Multilayer Laser Scanner”, 10th World Congress on Intelligent Transport

Systems, Nov. 2003.

[45] R. Halíř and J. Flusser, “Numerically Stable Direct Least Squares Fitting of

Ellipses,” Department of Software Engineering, Charles University, Czech

Republic, 2000.

[46] Ibeo, “Serial Price”, http://www.ibeo-as.com/english/production_serialprice.asp,

Jul. 23, 2007.

[47] Automotive News, “Laser or Radar? – Battle for Supremacy in Vision Safety

Systems Heats Up”, Automotive News, Jun. 25, 2007.


[48] Roland Schulz and Kay Fürstenberg, “Laser scanner for Multiple Applications

in Passenger Cars and Trucks”, 10th International Conference on Microsystems

for Automotive Applications, Apr. 2006.

[49] Ibeo, “Passenger Car Integration”, http://ibeo-

as.com/english/production_intergration_passangercar.asp, Jul. 23, 2007.

[50] Ho Gi Jung, Young Ha Cho, Pal Joo Yoon, and Jaihie Kim, “Scanning Laser

Radar-Based Target Position Designation for Parking Aid System”, IEEE

Transactions on Intelligent Transportation Systems, Accepted on 16 Mar. 2008.

[51] Jae Kyu Suhr, Ho Gi Jung, Kwanghyuk Bae, and Jaihie Kim, “Automatic free parking space detection by using motion stereo-based 3D reconstruction”, Machine Vision and Applications, Accepted on 4 Jun. 2007.

[52] J. Salvi, X. Armangué, and J. Batlle, “A comparative review of camera calibration methods with accuracy evaluation”, Pattern Recognition, Vol. 35, 2002, pp. 1617-1635.

[53] J. Y. Bouguet, “Camera Calibration Toolbox for Matlab”,

http://www.vision.caltech.edu/bouguetj/calib_doc/index.html.

[54] Ho Gi Jung, Yun Hee Lee, Pal Joo Yoon, and Jaihie Kim, “Radial Distortion

Refinement by Inverse Mapping-Based Extrapolation”, The 18th International

Conference on Pattern Recognition (ICPR’06), 20-24 Aug. 2006, pp. 106-113.

[55] Steven M. Kay: Fundamentals of Statistical Signal Processing, Volume I:

Estimation Theory, Prentice Hall Inc. (1993), 193-195.

[56] William A. Barrett, and Kevin D. Petersen, “Houghing the Hough: peak

collection for detection of corners, junctions and line intersections”, Proceedings


of the 2001 IEEE Computer Society Conference on Computer Vision and

Pattern Recognition (CVPR2001), Vol. 2, 2001, pages: II-302~II-309 vol.2.

[57] Claudio Rosito Jung, and Rodrigo Schramm, “Rectangle detection based on a

windowed Hough transform”, Proceedings of the XVII Brazilian Symposium on

Computer Graphics and Image Processing (SIBGRAPI’04), 17~20 Oct. 2004,

pages: 113~120.

[58] Andrew French, Steven Mills, and Tony Pridmore, “Condensation tracking

through a Hough space”, Proceedings of the 17th International Conference on

Pattern Recognition (ICPR’04), 23~24 Aug. 2004, pages: 195~198 vol. 4.

[59] Yasutaka Furukawa, and Yoshihisa Shinagawa, “Accurate and robust line

segment extraction by analyzing distribution around peaks in Hough space”,

Computer Vision and Image Understanding 92 (2003) pages: 1~25.

[60] Jongwoo Kim, Raghu Krishnapuram, “A robust Hough transform based on

validity”, Proceedings of the 1998 IEEE World Congress on Computational

Intelligence, 4~9 May 1998, pages: 1530~1535 vol.2.

[61] Point Grey Research’s Homepage, http://www.ptgrey.com.

[62] Gavrila, D. M., Franke, U., Woehler, C. and Goerzig, S., “Real-time vision for

intelligent vehicles”, IEEE Instrumentation & Measurement Magazine, Vol. 4,

No. 2, 2001, pp. 22-27.

[63] Franke, U. and Joos, A., “Real-time stereo vision for urban traffic scene

understanding”, Proceedings of IEEE Intelligent Vehicle Symposium 2000,

2000, pp. 273-278.

[64] Franke, U. and Kutzbach, I., “Fast stereo based object detection for stop&go


traffic”, Proceedings of IEEE Intelligent Vehicle Symposium 1996, 1996, pp.

339-344.

[65] C. Mertz, J. Kozar, J. R. Miller, and C. Thorpe, “Eye-safe laser striper for

outside use,” in Proc. IEEE Intelligent Vehicle Symposium, Jun. 17-21, 2002, pp.

507-512, Vol. 2.

[66] Reinhard Klette, Karsten Schlüns, and Andreas Koschan, Computer Vision –

Three Dimensional Data from Images, 1998, Springer-Verlag.

[67] Royer, E., Lhuillier, M., Dhome, M., Lavest, J., “Monocular Vision for Mobile

Robot Localization and Autonomous Navigation”, International Journal of

Computer Vision, 74(3), 2007, pp. 237-260.

[68] Mouragnon, E., Dekeyser, F., Sayd, P., Lhuillier, M., Dhome, M. , “Real Time

Localization and 3D Reconstruction”, Proceedings of Computer Vision and

Pattern Recognition, Vol. 1, 2006, pp. 17-22.

[69] Mouragnon, E., Lhuillier, M., Dhome, M., Dekeyser, F., Sayd, P., “Monocular

vision based SLAM for mobile robots”, Proceedings of the 18th International

Conference on Pattern Recognition, Vol. 3, 2006, pp. 20-24.

[70] Lucas, B. D., Kanade, T., “An iterative image registration technique with an

application to stereo vision”, Proceedings of the 7th International Joint

Conference on Artificial Intelligence, 1981, pp.674-679.

[71] Tomasi, C., Shi, J., “Good features to track”, Proceedings of IEEE Conference

on Computer Vision and Pattern Recognition, 1994, pp. 593-600.

[72] Barron, J., Fleet, D., Beauchemin, S., “Performance of optical flow techniques”,

International Journal of Computer Vision, 12(1), 1994, pp. 43-77.


[73] McCane, B., Novins, K., Crannitch, D., Galvin, B., “On Benchmarking Optical

Flow”, Computer Vision and Image Understanding, 84(1), 2001, pp. 126-143.

[74] Correia, M. V., Campilho, A. C., “Real-Time Implementation of an Optical Flow

Algorithm”, Proceedings of the 16th International Conference on Pattern

Recognition, Vol. 4, 2002, pp. 247-250.

[75] Díaz, J., Ros, E., Ortigosa, E. M., Mota, S., “FPGA-Based Real-Time Optical

Flow System”, IEEE Transactions on Circuits and Systems for Video Technology,

16(2), 2006, pp. 274-279.

[76] Maya-Rueda, S., Arias-Estrada M., “FPGA processor for real-time optical flow

computation”, Lecture Notes in Computer Science, Vol. 2778, 2003, pp. 1103-

1016.

[77] Nister, D., “Frame Decimation for Structure and Motion”, Lecture Notes in

Computer Science, Vol. 2018, 2001, pp. 17-34.

[78] Royer, E., Lhuillier, M., Dhome, M., Lavest, J., “Monocular Vision for Mobile

Robot Localization and Autonomous Navigation”, International Journal of

Computer Vision, 74(3), 2007, pp. 237-260.

[79] Torr, P., Fitzgibbon, A., Zisserman, A., “The Problem of Degeneracy in

Structure and Motion. Recovery from Uncalibrated Image Sequences.”,

International Journal of Computer Vision, 32(1), 1999, pp. 27-44.

[80] Torr, P. and Murray, D. “The development and comparison of robust methods

for estimating the fundamental matrix”, International Journal of Computer

Vision, 24(3), 1997, pp. 271-300.

[81] Huber. P., Robust Statistics, Wiley, New York, 1981.


[82] Hartley, R., Zisserman, A., Multiple view geometry in computer vision,

Cambridge University Press, 2000.

[83] Triggs, B., McLauchlan, P., Hartley, R., Fitzgibbon, A., “Bundle Adjustment –

A Modern Synthesis”, Lecture Notes in Computer Science, Vol. 1883, 2000, pp.

298–372.

[84] Fitzgibbon, A., Cross, G., Zisserman, A., “Automatic 3D model construction for

turn-table sequences”, Proceedings of European Workshop on 3D Structure from

Multiple Images of Large-Scale Environments, 1998, pp. 155-170.

[85] Trucco, E., Verri, A., Introductory techniques for 3-D computer vision, Prentice

Hall, 1998.

[86] Umeyama, S., “Least-Square Estimation of Transformation Parameters Between

Two Point Patterns”, IEEE Transactions on Pattern Analysis and Machine

Intelligence, 13(4), 1991, pp. 376-380.

[87] Fischler, M., Bolles, R., “Random sample consensus: A paradigm for model

fitting with applications to image analysis and automated cartography.”,

Communications of the ACM, 24(6), 1981, pp. 381-395.

[88] Burger, W., Bhanu, B. “Estimating 3-D egomotion from perspective image

sequences”, IEEE Transactions on Pattern Analysis and Machine Intelligence,

12(11). 1990, pp. 1040-1058.

[89] SICK, LD-OEM1000 / Laser Measurement Sensors, available at

http://www.sick.com, Accessed on 9 Apr. 2007.

[90] Terry Costlow, “Shifting into active mode”, Automotive Engineering

International, Jun. 2007, pp. 44-46.


[91] Ho Gi Jung, Dong Suk Kim, Pal Joo Yoon, and Jaihie Kim, “Two-Touch Type

Parking Slot Marking Recognition for Target Parking Position Designation”,

IEEE Intelligent Vehicle Symposium 2008 (IV’08), 4-6 Jun. 2008.


Abstract (in Korean)

Target Position Designation Methods for Parking Assistant System

This dissertation summarizes the newly developed target parking position designation methods and provides a prospect for parking assistant systems. The achievements cover almost every approach to target position designation. As each achievement is creative and shows better performance than competitors', they are expected to serve as a bridgehead for entering the next-generation market.

The developed methods include the drag and drop interface-based method, the semi-automatic parking markings recognition-based method, the full-automatic parking markings recognition-based method, the binocular stereo-based method, the light stripe projection-based method, the motion stereo-based method, and the scanning laser radar-based method. In particular, because the drag and drop interface-based method and the semi-automatic parking markings recognition-based method are efficient and concise, they are expected to be directly applicable even to an ECU using only a microprocessor.

Among these, the scanning laser radar-based method showed the best performance. Therefore, if the drawbacks of a relatively high price and a relatively large form factor can be overcome, the scanning laser radar can pioneer new application fields, and the parking assistant system can secure a reliable method for recognizing the free space between parked vehicles. The developed method is based on novel rectangular corner and round corner detection methods, which are robust to noise and invariant to rotation and scale.

The light stripe projection-based method showed its potential as a solution for underground parking lots. The developed motion stereo-based method identified the cause of the degradation of 3D information and showed that it can be resolved by mosaicking the 3D information. The full-automatic parking markings recognition-based method and the binocular stereo-based method showed that a suitable parking position can be designated using still images.

At the end of this dissertation, the research contributions and future work are summarized, and a technology prospect diagram is drawn based on publications and news. Finally, considering our competitiveness and the expected technology roadmap, a four-step evolutionary strategy is proposed.

Keywords: target parking position designation, drag and drop interface, parking slot markings, scanning laser radar, light stripe projection, stereo vision.