Vehicle Speed Measurement Based on Gray Constraint Optical Flow Algorithm (j.ijleo.2013.06.036)
Optik 125 (2014) 289–295. Contents lists available at ScienceDirect. Journal homepage: www.elsevier.de/ijleo

Vehicle speed measurement based on gray constraint optical flow algorithm

Jinhui Lan (a,*), Jian Li (a,*), Guangda Hu (a), Bin Ran (b), Ling Wang (a)

(a) Department of Instrument Science and Technology, School of Automation and Electrical Engineering, University of Science and Technology Beijing, Beijing 100083, China
(b) Department of Civil and Environmental Engineering, School of Engineering, University of Wisconsin-Madison, Madison, WI 53706, USA

Article history: Received 3 February 2013; accepted 20 June 2013.

Keywords: Vehicle speed measurement; Gray constraint optical flow algorithm; Improved three-frame difference algorithm; Moving detection

* Corresponding authors at: School of Automation & Electrical Engineering, University of Science and Technology Beijing, Beijing 100083, China. Tel.: +86 010 62334961. E-mail addresses: [email protected], [email protected] (J. Lan), [email protected] (J. Li).

http://dx.doi.org/10.1016/j.ijleo.2013.06.036. © 2013 Elsevier GmbH. All rights reserved.

Abstract

Vehicle speed measurement (VSM) based on video images represents the development direction of speed measurement in intelligent transportation systems (ITS). This paper presents a novel vehicle speed measurement method that combines an improved three-frame difference algorithm with a proposed gray constraint optical flow algorithm. The improved three-frame difference algorithm detects the contour of moving vehicles exactly, and the proposed gray constraint optical flow algorithm accurately computes the optical flow value of the vehicle contour, which is the speed (pixels/s) of the vehicle in the image. The velocity (km/h) of a vehicle is then calculated from the optical flow value of its contour and the corresponding ratio of image pixels to the width of the road. The method yields a better optical flow field by reducing the influence of changing lighting and shadow, and it reduces computation markedly because it calculates the optical flow value only on the moving target's contour. Experimental comparisons between this method and other VSM methods show that the proposed approach gives a satisfactory estimate of vehicle speed.

1. Introduction

Vehicle speed measurement plays an important role in traffic information collection, and it remains a difficult, actively studied problem in ITS. At present, vehicle speed measurement methods can be categorized into two types: (1) hardware-based methods and (2) software-based methods. The hardware-based methods include induction-coil loop, laser, and radar speed measurement; the software-based method is vehicle speed measurement by video images [1].

In induction-coil loop speed measurement, a loop detector is embedded in the road surface. When a vehicle passes over the loop, the loop's magnetic field changes, and the detector can calculate the vehicle speed. Since the induction-coil loop must be embedded directly in the lane, installation and maintenance disrupt the road, and the loop is also affected by freezing, foundation subsidence, and other factors. Moreover, measurement accuracy drops greatly when traffic slows [2,3]. For laser speed measurement, a laser detector located above the roadway repeatedly measures the moving object's distance and the corresponding time, and uses these measurements to calculate the object's speed; the approach is based on light-wave travel-time measurement [4]. The laser detector can work day and night under different weather conditions, but extreme temperatures can degrade its performance [2,5]. Radar speed measurement exploits the Doppler effect: when the emission source and the receiver are in relative motion, the received signal frequency shifts. Radar speed measurement places high demands on the installation angle, which limits its application [6].

For vehicle speed measurement by video images, cameras are installed above the lane or placed alongside the roadway to capture moving vehicles. The captured images are analyzed by image processing and pattern recognition algorithms, which are used to calculate the vehicle speed. Common methods for vehicle speed measurement by video images include corner detection, texture analysis, video tracking, and so on [7–12]. These methods can find the matching location of a vehicle in two consecutive image frames, but their large computational cost cannot meet the requirements of real-time speed measurement. The virtual-line method can replace a physical loop, but it is prone to speed miscalculation because of the image frame acquisition period [13,14]. In addition, matching different parts of a vehicle in two consecutive images distorts the measured moving distance, because the apparent size of the vehicle body changes between the consecutive images [12,15,19].


In order to overcome the shortcomings of the methods above, this paper presents a novel vehicle speed measurement method based on an improved three-frame difference algorithm and a gray constraint optical flow algorithm. The improved three-frame difference algorithm detects the contour of vehicles exactly, and the proposed gray constraint optical flow algorithm computes the optical flow value of the moving target's contour precisely. The optical flow value is the speed (pixels/s) of the vehicle in the image. The method reduces computation by calculating the optical flow value only on the moving target's contour. In the image, we draw a region of interest (ROI), and the speed of the vehicle in the ROI is computed. From the corresponding relation between the image pixels and the width of the road, the speed (km/h) of the moving target is calculated from the optical flow value.

The rest of this paper is organized as follows. Section 2 describes the vehicle speed measurement method proposed in the paper, introducing the improved three-frame difference method and the gray constraint optical flow method. Section 3 presents experimental results of our method and their analysis. Finally, Section 4 draws conclusions from the experimental results.

2. Vehicle speed measurement method

The sensor used in this paper is a video camera installed above the lane. From the camera, vehicle information is obtained. The improved three-frame difference algorithm and the proposed gray constraint optical flow algorithm yield the vehicle's motion parameters, from which the vehicle's velocity is computed.

2.1. Improved three-frame difference method

2.1.1. Interframe difference method
The interframe difference method is widely used in moving target detection because of its low computational complexity, and in most cases its performance is good. When the target moves fast, however, the target contour generated from two-frame differencing overlaps itself, so the three-frame difference method was proposed [16].

The principle of the three-frame difference method is given by (1)–(3):

p1(x, y) = { 255, if |Fm(x, y) − Fm−1(x, y)| > T; 0, if |Fm(x, y) − Fm−1(x, y)| ≤ T } (1)

p2(x, y) = { 255, if |Fm+1(x, y) − Fm(x, y)| > T; 0, if |Fm+1(x, y) − Fm(x, y)| ≤ T } (2)

p(x, y) = p1(x, y) ∧ p2(x, y) = { 255, if (p1(x, y) = 255 && p2(x, y) = 255); 0, else } (3)

In (1) and (2), Fm−1(x, y), Fm(x, y), and Fm+1(x, y) denote the gray values of the (m−1)th, mth, and (m+1)th image frames at the spatial pixel point (x, y), and T is the threshold value. p(x, y) is the gray value of the spatial pixel point (x, y) in the target image.

Although the three-frame difference method can extract the moving target's contour, the detected contour is partly overlapped compared with the real target contour. The performance of the method is shown in Fig. 1: the green contour is the target contour, and the gray areas are overlap regions of the m-frame contour.

Fig. 1. Effect of the three-frame difference's overlap.
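To make (1)–(3) concrete, the following minimal Python/OpenCV sketch (our own illustration, not code from the paper; the function name and the default threshold are assumptions) computes the three-frame difference mask from three consecutive grayscale frames:

```python
import cv2

def three_frame_difference(f_prev, f_curr, f_next, T=25):
    """Three-frame difference, Eqs. (1)-(3): AND of two binarized
    absolute frame differences (inputs are uint8 grayscale frames)."""
    # Eq. (1): |F_m - F_{m-1}| thresholded at T
    _, p1 = cv2.threshold(cv2.absdiff(f_curr, f_prev), T, 255, cv2.THRESH_BINARY)
    # Eq. (2): |F_{m+1} - F_m| thresholded at T
    _, p2 = cv2.threshold(cv2.absdiff(f_next, f_curr), T, 255, cv2.THRESH_BINARY)
    # Eq. (3): keep only pixels that changed in both differences
    return cv2.bitwise_and(p1, p2)
```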

2.1.2. Improved three-frame difference method
The result of the two-frame difference method produces overlap, and the three-frame difference algorithm also produces partial overlap. To eliminate the partial overlap, this paper presents an improved three-frame difference algorithm. The algorithm detects the target contour precisely, obtaining the contour information through difference, dilation, "and", and "xor" operations on three consecutive frames.

The steps of the improved three-frame difference algorithm are as follows:

(1) The difference image p1 is obtained from the first and second image frames via the difference operation:

p1(x, y) = { 255, if |Fm(x, y) − Fm−1(x, y)| > T; 0, if |Fm(x, y) − Fm−1(x, y)| ≤ T } (4)

(2) The difference image p2 is obtained from the second and third image frames via the difference operation:

p2(x, y) = { 255, if |Fm+1(x, y) − Fm(x, y)| > T; 0, if |Fm+1(x, y) − Fm(x, y)| ≤ T } (5)

(3) The new image p3 is derived from p1 and p2 via the "and" operation:

p3(x, y) = p1(x, y) ∧ p2(x, y) = { 255, if (p1(x, y) = 255 && p2(x, y) = 255); 0, else } (6)

(4) The new image p6 is derived from the dilated images p4 and p5 via the "and" operation, where SE is the 3 × 3 square structuring element used to dilate the images:

p4(x, y) = p1(x, y) ⊕ SE
p5(x, y) = p2(x, y) ⊕ SE
p6(x, y) = p4(x, y) ∧ p5(x, y) = { 255, if (p4(x, y) = 255 && p5(x, y) = 255); 0, else } (7)

(5) The result image p is derived from p3 and p6 via the "xor" operation:

p(x, y) = p3(x, y) ⊻ p6(x, y) = { 255, if (p3(x, y) != p6(x, y)); 0, else } (8)
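Steps (1)–(5) can be sketched as follows in Python/OpenCV (again our own illustration, not the authors' code; the threshold rule T = mean − std anticipates the adaptive scheme described in the next paragraph):

```python
import cv2
import numpy as np

def improved_three_frame_difference(f_prev, f_curr, f_next):
    """Improved three-frame difference, Eqs. (4)-(8)."""
    T = max(1.0, float(np.mean(f_curr)) - float(np.std(f_curr)))  # T = mean - std
    _, p1 = cv2.threshold(cv2.absdiff(f_curr, f_prev), T, 255, cv2.THRESH_BINARY)  # Eq. (4)
    _, p2 = cv2.threshold(cv2.absdiff(f_next, f_curr), T, 255, cv2.THRESH_BINARY)  # Eq. (5)
    p3 = cv2.bitwise_and(p1, p2)                            # Eq. (6)
    se = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))  # 3 x 3 template SE
    p4 = cv2.dilate(p1, se)
    p5 = cv2.dilate(p2, se)
    p6 = cv2.bitwise_and(p4, p5)                            # Eq. (7)
    return cv2.bitwise_xor(p3, p6)                          # Eq. (8)
```

Since p3 is contained in p6 (dilation only adds pixels), the XOR in step (5) keeps exactly the thin band added by dilation around the motion mask, which is the contour of the moving target.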


It is known that a global threshold T for image binarization can cause distortion in a real image whose intensities vary gradually due to illumination. Therefore, we adopt a local adaptive threshold segmentation method to extract the target from the background. A global adaptive threshold T is used for the initial segmentation; the threshold T for the whole image is determined by the difference between the mean and the standard deviation of the whole image:

T = mean − std.

The effect of the improved three-frame difference algorithm is shown in Fig. 2. The figure shows that the method obtains a precise target contour, so the algorithm gives a better result than the three-frame difference algorithm; in particular, the faster the moving target, the more obvious the improvement.

Fig. 2. Effect of the improved three-frame difference.

To illustrate the superior performance of the improved method, experiments were performed with the three kinds of frame difference method. Fig. 3 shows the original images, each 640 × 480 pixels; the threshold value T computed from the images is 102. Fig. 4(a) shows the performance of the two-frame difference method, Fig. 4(b) that of the three-frame difference method, and Fig. 4(c) that of the method proposed in this paper.

Fig. 3. The original images: (a) the 529th image, (b) the 530th image, and (c) the 531st image.

Fig. 4. Effect of the difference methods: (a) two-frame difference method image, (b) three-frame difference method image, and (c) improved three-frame difference method image.

From Fig. 4 the conclusions are straightforward. The two-frame difference method produces obvious overlap. The three-frame difference method produces overlap at the corners of the car's contour. The contour obtained by the algorithm proposed in this paper is precise, as shown in Fig. 4(c). The numbers of white edge points are 4530 in Fig. 4(a), 2573 in Fig. 4(b), and 2061 in Fig. 4(c), respectively. Thus, the improved three-frame difference method performs well.

For multi-target motion detection, the proposed algorithm can be used as the initial partition, after which the gray constraint optical flow method computes each moving target's velocity.

2.2. Basic equation of optical flow and the improved algorithm

The change of gray value expressed by the optical flow field can be used to observe the information of a moving target in the image. It contains the instantaneous velocity vector of each image point of the target, and from the velocity vector the velocity of the target can be calculated.

2.2.1. Basic equation of optical flow
Optical flow was first proposed by Gibson; it is the moving velocity of a target in the gray mode. The calculation of a moving target's optical flow field rests on two assumptions: (1) the gray value of a moving object is unchanged over a very short time; (2) every pixel of the target has the same speed because of the target's rigidity [17].

Suppose the point P at image coordinates (x, y) moves to (x + dx, y + dy) after time dt. The gray value is I(x, y, t) at (x, y) and I(x + dx, y + dy, t + dt) at (x + dx, y + dy). As dt is short, the gray value is unchanged by the assumption, which gives (9), called the basic optical flow equation:

I(x + dx, y + dy, t + dt) = I(x, y, t) (9)

The left side of (9) is expanded using Taylor's formula:

I(x + dx, y + dy, t + dt) = I(x, y, t) + (∂I/∂x)dx + (∂I/∂y)dy + (∂I/∂t)dt + o(dt²) (10)

Dividing by dt and omitting the higher-order term simplifies the basic optical flow equation (9) to (11):

(∂I/∂x)(dx/dt) + (∂I/∂y)(dy/dt) + ∂I/∂t = 0 (11)

Here u = dx/dt and v = dy/dt are the moving speeds of the pixel at the spatial point (x, y) in the horizontal and vertical directions.


Also, (u, v) is called the optical flow field. The basic optical flow equation (11) can be expressed as (12) [18,19]:

(Ix, Iy)(u, v)^T + It = 0 (12)

where Ix = ∂I/∂x, Iy = ∂I/∂y, It = ∂I/∂t.
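The derivatives Ix, Iy, It in (12) must be estimated from the discrete frames; one common finite-difference sketch (our own choice, since the paper does not specify the estimator) is:

```python
import numpy as np

def image_gradients(f1, f2):
    """Estimate Ix, Iy (spatial) and It (temporal) from two
    consecutive grayscale frames given as arrays."""
    f1 = f1.astype(np.float32)
    f2 = f2.astype(np.float32)
    Ix = np.gradient(f1, axis=1)  # derivative along x (columns)
    Iy = np.gradient(f1, axis=0)  # derivative along y (rows)
    It = f2 - f1                  # forward difference in time
    return Ix, Iy, It
```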

2.2.2. Gray constraint optical flow algorithm
In practical applications, the assumption that the gray value is unchanged over a very short time cannot be met in the optical flow field, for reasons such as occlusion, multiple light sources, and transparency. According to the generalized dynamic image model (GDIM), the gray value is not constant but changes, so the basic optical flow equations (9) and (11) do not hold. We therefore propose the gray constraint optical flow equation, which can be expressed as follows:

I(r + Δr) = M(r)I(r) + C(r) (13)

where I(r) = I(x, y, t), M(r) = 1 + m(r), m(r) is the deviation coefficient, and C(r) is the interference term. The gray constraint optical flow equation (13) can equivalently be written as

I(r + Δr) = I(r) + m(r)I(r) + C(r) (14)

If m(r) = C(r) = 0, (14) meets the gray-value-unchanged assumption of the basic optical flow equation and coincides with (9). In practical applications, the gray constraint optical flow equation (13) suits the actual situation. The gray constraint optical flow error is

ΔI = I(r + Δr) − I(r) = m(r)I(r) + C(r) = Ixu + Iyv + It (15)

After rearrangement, (15) can be expressed as

ΔI = g + It (16)

where g = Ixu + Iyv. In (16), g is the geometric component (the speed of moving objects), It is the illumination component, and the ratio Y = g/ΔI can be used as a parameter of the relative strength between g and ΔI. If the ratio is large, the estimated velocity component is quite accurate; on the contrary, the estimated speed is not accurate and the effect of light is large. The ratio is significant for estimating the motion parameters and the structural parameters. Therefore, the value of Y can be used to constrain the optical flow field distribution, which makes the optical flow algorithm perform well under a variety of challenging lighting and traffic conditions.

Since the LK (Lucas–Kanade) method may not perform well for a dense flow field, while the HS (Horn and Schunck) method can detect minor motion of objects and provides a 100% dense flow field, we adopt the HS method for optical flow field computation in our research [19].

2.2.3. Motion constraint equations of Horn and Schunck
To meet the assumption that the optical flow field caused by the same moving object should be smooth, Horn and Schunck proposed the smoothness constraint condition, expressed through the integral of the squared derivatives of the velocity components. The smoothness constraint error is represented by (17), and for the purpose of smoothness it should be as small as possible [17,19]:

Es = ∫∫ [(∂u/∂x)² + (∂u/∂y)² + (∂v/∂x)² + (∂v/∂y)²] dx dy (17)

On the other hand, the assumption that the gray value is unchanged in the optical flow field cannot be met for many reasons, so the proposed gray constraint optical flow error (15) is also required to be as small as possible, which is expressed as (18):

Ec = ∫∫ (ΔI)² dx dy = ∫∫ (Ixu + Iyv + It)² dx dy (18)

To satisfy both the proposed gray constraint optical flow error and the smoothness constraint error, Eq. (19) is set up to find the solution of the optical flow (u, v):

E = ∫∫ {(Ixu + Iyv + It)² + λ[(∂u/∂x)² + (∂u/∂y)² + (∂v/∂x)² + (∂v/∂y)²]} dx dy = min (19)

λ is the parameter that determines the weight between the two errors (the error of deviation from smoothness, Es, and the optical flow error, Ec). When the measured image gray values are accurate, λ can be made larger, which gives greater emphasis to Ec; conversely, if the image contains a lot of noise, a small value of λ (less than 1) is desirable.

The corresponding Euler equations of (19) are expressed as follows:

Ix(Ix u^(n+1) + Iy v^(n+1) + It) = λ∇²u
Iy(Ix u^(n+1) + Iy v^(n+1) + It) = λ∇²v (20)

where ∇² is the Laplacian operator and n denotes the iteration number. Suppose ∇²u = ū^(n) − u^(n+1) and ∇²v = v̄^(n) − v^(n+1), where ū and v̄ denote the local average optical flow values of u and v, derived from a local smoothing template of the image. The size of the template depends on the requirements of the image; considering the computation, the template size is 3 × 3. The result of (20) can then be written as follows:

(Ix² + λ) u^(n+1) + IxIy v^(n+1) = λū^(n) − IxIt
(Iy² + λ) v^(n+1) + IxIy u^(n+1) = λv̄^(n) − IyIt (21)

Therefore, a natural iterative solution of the optical flow is obtained from (21):

u^(n+1) = ū^(n) − Ix(Ix ū^(n) + Iy v̄^(n) + It) / (λ + Ix² + Iy²)
v^(n+1) = v̄^(n) − Iy(Ix ū^(n) + Iy v̄^(n) + It) / (λ + Ix² + Iy²) (22)

where n denotes the iteration number, and the initial values u0 and v0 of the optical flow are taken as zero. When the distance between the results of two successive iterations is less than the threshold, the iteration terminates, and the optical flow is (u^(n+1), v^(n+1)). From u^(n+1) and v^(n+1), the speed at pixel P is expressed as (23) and its direction as (24):

fp = √(u^(n+1)² + v^(n+1)²) (23)

θ = arctan(v^(n+1)/u^(n+1)) (24)
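A minimal NumPy/OpenCV sketch of the iteration (22) follows, assuming Ix, Iy, It have been estimated beforehand; the 3 × 3 mean filter stands in for the local smoothing template, and all names are our own:

```python
import cv2
import numpy as np

def gray_constraint_flow(Ix, Iy, It, lam=1.0, tol=1e-3, max_iter=100):
    """Iterate Eq. (22) for the flow field (u, v); inputs are float32
    derivative images, lam is the weight parameter lambda."""
    u = np.zeros_like(Ix, dtype=np.float32)
    v = np.zeros_like(Ix, dtype=np.float32)
    denom = lam + Ix ** 2 + Iy ** 2
    for _ in range(max_iter):
        u_bar = cv2.blur(u, (3, 3))   # local averages over the 3 x 3 template
        v_bar = cv2.blur(v, (3, 3))
        common = (Ix * u_bar + Iy * v_bar + It) / denom
        u_new = u_bar - Ix * common   # Eq. (22), first line
        v_new = v_bar - Iy * common   # Eq. (22), second line
        converged = np.max(np.hypot(u_new - u, v_new - v)) < tol
        u, v = u_new, v_new
        if converged:                 # successive iterates within threshold
            break
    return u, v
```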

In order to show the effect of the proposed gray constraint optical flow algorithm, tests have been done on video images, comparing the gray constraint optical flow algorithm with the basic optical flow algorithm. In the tests, the moving target's contour is obtained by the improved three-frame difference method, the ratio Y is set to 1, and the parameter λ is 1. The images are shown in Fig. 5(b) and (c).


Fig. 5. (a) The original image, (b) effect of the improved three-frame difference and the basic optical flow algorithm, and (c) effect of the improved three-frame difference and the proposed gray constraint optical flow algorithm.

In the figures, the optical flow field is drawn wherever the optical flow value fp is bigger than the threshold value. The figures show that the gray constraint algorithm yields a well-distributed optical flow field and performs well under a variety of challenging lighting and shadow conditions. So the improved algorithm works better than the basic optical flow algorithm.

2.3. Vehicle speed measurement

In vehicle speed measurement systems, cameras are the most commonly used vision devices. According to whether the camera must be calibrated manually, vision-based VSM systems can be classified into two main models: (1) the calibrated camera model [20,21] (our system) and (2) the uncalibrated camera model [22–25].

In the calibrated camera model, the positions of vehicles are estimated from the geometric parameters of focal length, installation angle, and installation height. Fig. 6 shows the installation schematic diagram [1].

Fig. 6. Installation schematic diagram.

This paper considers the case in which the camera is tilted downward and mounted at a fixed location on a bridge crossing the street. Not all of the image is a region of interest: by adjusting the focal length, installation angle, and installation height of the camera, we delineate a region of interest (ROI) in the image. As shown in Fig. 7, the area enclosed by the green line is the detection area. In the vertical direction, the intersection point between the two white lines of the double lane and the edge of the image is the middle point of the ROI, and the region can contain the whole contour of the car.

Fig. 7. Region of interest.

When the perspective effect is not considered, the width of the road in the ROI is equal to the image width (horizontal direction). Suppose the image extracted from the video is 640 × 480 pixels, and that 7.5 m is the standard width of a double-lane national road. The ratio between the pixels in the image and the width of the road is then

k = 7.5 m / 640 pixels = 7.5/640 (m/pixel) (25)

In the ROI, all detected moving targets are marked by the improved three-frame difference method, the speeds of all points in every moving block are calculated, and the speed of the target is computed as the average speed of all points in the target:

favr = (1/n) Σ (p = 1 to n) fp (26)

where n denotes the number of points in the moving target obtained by the improved three-frame difference algorithm, fp denotes the optical flow geometric value at point p of the target, and favr denotes the optical flow geometric value of the block. From the geometric value favr of the moving block and the ratio k between the image pixels and the width of the road, the vehicle velocity is given as follows:

V = favr · k (27)
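Equations (25)–(27) translate directly into code. In the sketch below (ours, not the authors'), we make explicit the assumption that fp is in pixels/frame and must be scaled by the frame rate before applying k:

```python
import numpy as np

ROAD_WIDTH_M = 7.5                   # standard double-lane road width, Eq. (25)
IMAGE_WIDTH_PX = 640
K = ROAD_WIDTH_M / IMAGE_WIDTH_PX    # k in m/pixel

def vehicle_speed_kmh(u, v, contour_mask, fps=30.0):
    """Average flow magnitude over contour points (Eq. (26)),
    converted to km/h via k and the frame rate (Eq. (27))."""
    fp = np.hypot(u, v)[contour_mask > 0]  # Eq. (23) at each contour point
    f_avr = fp.mean()                      # Eq. (26), pixels/frame
    return f_avr * fps * K * 3.6           # m/s -> km/h
```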

3. Experimental results and analysis

This paper presents a vehicle speed measurement method based on the improved three-frame difference algorithm and the proposed gray constraint optical flow algorithm. The method was implemented in VC++ 6.0 with OpenCV and ran under MS Windows XP. Several tests were performed to validate the proposed method, all on a PC with a 1.9 GHz Celeron processor and 1024 MB RAM. Vehicles moving toward or away from the camera in the two lanes were shot by a SONY HDR-XR260E camera tilted downward and mounted at a fixed location on a bridge crossing the target street at a height of 6 m. The video used in the tests has a frame rate of 30 frames/s and a resolution of 640 × 480 pixels.


Fig. 8. A vehicle in the ROI of the two-lane road: (a) a vehicle moving toward the camera and (b) a vehicle moving away from the camera.

Fig. 9. Two vehicles in the ROI of the two-lane road: (a) two vehicles moving toward the camera and (b) two vehicles moving away from the camera.

3.1. Vehicle speed measurement by the proposed method

To evaluate the performance of the proposed method, experiments were conducted with real vehicles. Fig. 8 shows the speed measurement of a vehicle in the double lane in two different directions, and Fig. 9 shows the speed measurement effect for two vehicles; red arrows denote the velocity of the vehicles. The velocities of 1000 cars from the two directions were measured, and the experimental results were compared with the velocity measured by radar: we estimate the vehicle's velocity Vopt by the method and measure the velocity Vradar by radar at the same time. The error is computed as |Vradar − Vopt|/Vradar, and the average error is computed over all vehicles whose Vradar falls in each velocity range. The results are given in Tables 1 and 2.

Table 1. Vehicles moving toward the camera.

Velocity range (km/h) | Number of vehicles | Average error (%)
0–20 | 85 | 2.5
20–40 | 110 | 1.5
40–60 | 139 | 0.9
60–80 | 302 | 0.8
80–100 | 272 | 1.0
100–120 | 76 | 1.1
120–140 | 12 | 1.3
Above 140 | 4 | 1.5

Table 2. Vehicles moving away from the camera.

Velocity range (km/h) | Number of vehicles | Average error (%)
0–20 | 79 | 1.3
20–40 | 123 | 1.1
40–60 | 164 | 0.9
60–80 | 295 | 0.8
80–100 | 254 | 0.9
100–120 | 67 | 1.2
120–140 | 15 | 1.4
Above 140 | 3 | 1.7
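The per-range averages in Tables 1 and 2 correspond to the following computation (a sketch with hypothetical input arrays, not the authors' evaluation code):

```python
import numpy as np

def average_error_by_range(v_radar, v_opt,
                           edges=(0, 20, 40, 60, 80, 100, 120, 140, np.inf)):
    """Average of |Vradar - Vopt| / Vradar (in %) per velocity bin of Vradar."""
    v_radar = np.asarray(v_radar, dtype=float)
    v_opt = np.asarray(v_opt, dtype=float)
    err = np.abs(v_radar - v_opt) / v_radar
    bins = np.digitize(v_radar, edges[1:])  # bin index of each vehicle
    return [(edges[i], edges[i + 1], 100.0 * err[bins == i].mean())
            for i in range(len(edges) - 1) if np.any(bins == i)]
```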

Experiments show that the method measures vehicle speed accurately and in real time; Figs. 8 and 9 show its performance. The test results (Tables 1 and 2) show that the error of the speed measurement stays within a relatively small range, but when the velocity of the vehicles is too high or too low, the error of the results grows. The method performs best when the vehicle is completely inside the ROI.

The test results (Tables 1 and 2) also show that the error rate increases when the velocity is relatively large or small, because the actual ratio between the pixels in the image and the width of the road then deviates from the ratio k.

With the proposed method, the processing time of each frame is 25 ms, compared with 45 ms for the basic optical flow method, so it can meet the real-time requirement.

3.2. Comparison with other VSM methods

To show the performance of the described method, we compared its results with the VSM results of papers [1] and [2].

In paper [1], a novel VSM method is proposed. First, a vertical-and-horizontal-histogram-based method is used to compensate the background of an incoming image; this eliminates noise caused by the displacement between an incoming image and a background image due to camera vibration. The speed of the vehicles is then measured by analyzing the adjusted image. The principal algorithms of the method are background subtraction (BS) and the Hough transform, and the camera used in the method is uncalibrated [1].

In paper [2], a speed measurement method that tracks moving vehicles is presented. A novel adaptive threshold method is proposed to binarize the outputs of the interframe difference (IFD) and background subtraction (BS) techniques; the binary outputs are used to detect moving vehicles with a shrinking algorithm, and the detected vehicles are then tracked to estimate their speeds. The principal algorithms of the method are background subtraction, interframe difference, and the shrinking algorithm, and the camera used in the method is calibrated [2].

Table 3 presents the average error of the three methods in the different velocity ranges. From the table, the method proposed in this paper has a satisfactory performance across the velocity ranges.

Table 3. Average errors in different velocity ranges by the methods (%). Cells that could not be recovered from the source are marked "–".

Velocity range (km/h) | Paper [1]'s method | Paper [2]'s method (IFD) | Paper [2]'s method (BS) | Our method
0–20 | – | 4.2 | 5.6 | 1.9
20–40 | 0.68 | 4.1 | 5.3 | 1.3
40–60 | 9.7 | 3.5 | 3.0 | 0.9
60–80 | 5.6 | 2.2 | 1.1 | 0.9
80–100 | 6.6 | 1.5 | 1.5 | 1.0
100–120 | 3.7 | 0.7 | 2.3 | 1.2
120–140 | – | 1.9 | 1.2 | 1.4
Above 140 | – | – | – | 1.6

In addition, the stated method can be used to count vehicles. The camera is fixed on a bridge that is not high, and the processing time of each frame is 25 ms with the proposed method. So when vehicles pass through the ROI, the time interval between two vehicles in the same lane is long enough to separate them, and the number of intervals can be used as the number of vehicles that passed in the lane. Test results show that the detection rate reaches 99% with the proposed method. But when a big vehicle and a small vehicle pass through the ROI at the same time in the double lane and the distance between the two vehicles is short, the detection result may be faulty.

4. Conclusion

The VSM method based on the improved three-frame difference algorithm and the proposed gray constraint optical flow algorithm has the characteristics of low cost, low computation, and accurate speed measurement. In the method, the contour of moving vehicles is detected accurately by the improved frame difference algorithm, and the vehicle contour's optical flow value, which is the speed (pixels/s) of the vehicle in the image, is calculated by the proposed gray constraint optical flow algorithm. In the image, we draw the region of interest and compute only the speed of the moving target's contour within it, which further reduces computation. From the corresponding ratio between the image pixels and the width of the road, the speed (km/h) of the moving target is calculated. Experiments show that the method is easy to implement and robust, and its speed estimates are satisfactory. For three-lane, four-lane, and multi-lane speed measurement, improving the detection method to reduce the computation for real-time operation and establishing an accurate model between the pixels in the image and the width of the road will be our next study.

Acknowledgments

This work is supported by the National Natural Science Foundation (Grant No. 61174181), the Beijing Natural Science Foundation (Grant No. 4102038), China Basic Research (Grant No. 2220132013), and the Fundamental Research Funds for the Central Universities (FRF-SD-12-017A).

References

[1] T.T. Nguyen, X.D. Pham, J.H. Song, Compensating background for noise due to camera vibration in uncalibrated-camera-based vehicle speed measurement system, IEEE Trans. Veh. Technol. 60 (1) (2011) 30–43.
[2] T. Celik, H. Kusetogullari, Solar-powered automated road surveillance system for speed violation detection, IEEE Trans. Ind. Electron. 57 (9) (2010) 3216–3227.
[3] Y.-K. Ki, D.-K. Baik, Model for accurate speed measurement using double-loop detectors, IEEE Trans. Veh. Technol. 55 (4) (2006) 1094–1101.
[4] H. Cheng, B. Shaw, J. Palen, B. Lin, B. Chen, Z. Wang, Development and field test of a laser-based nonintrusive detection system for identification of vehicles on the highway, IEEE Trans. Intell. Transp. Syst. 6 (2) (2005) 147–155.
[5] T.X. Mei, H. Li, Measurement of absolute vehicle speed with a simplified inverse model, IEEE Trans. Veh. Technol. 59 (3) (2010) 1164–1171.
[6] L.-D. Damien, G. Christophe, Doppler-based ground speed sensor fusion and slip control for a wheeled rover, IEEE Trans. Mech. 14 (4) (2009) 484–492.
[7] A. Fernández-Caballero, F.J. Gómez, J. López-López, Road-traffic monitoring by knowledge-driven static and dynamic image analysis, Expert Syst. Appl. 35 (3) (2008) 701–719.
[8] Y. Li, L. Yin, Y. Jia, Vehicle speed measurement based on video images, in: 3rd International Conference on Innovative Computing Information and Control (ICICIC), June 2008, p. 439.
[9] R. Canals, A. Roussel, J.-L. Famechon, S. Treuillet, A biprocessor-oriented vision-based target tracking system, IEEE Trans. Ind. Electron. 49 (2) (2002) 500–506.
[10] R. Cucchiara, C. Grana, M. Piccardi, A. Prati, Detecting moving objects, ghosts, and shadows in video streams, IEEE Trans. Pattern Anal. Mach. Intell. 25 (10) (2003) 1337–1342.
[11] T. Schoepflin, D. Dailey, Dynamic camera calibration of roadside traffic management cameras for vehicle speed estimation, IEEE Trans. Intell. Transp. Syst. 4 (2) (2003) 90–98.
[12] Z.-D. Zhang, C.-w. Zhang, Method to improve the accuracy of video-based vehicle speed measurement, J. Shanghai Jiaotong Univ. 44 (10) (2010) 1439–1443.
[13] B.L. Tseng, C.-Y. Lin, J.R. Smith, Real-time video surveillance for traffic monitoring using virtual line analysis, in: IEEE International Conference on Multimedia and Expo, August 2002, pp. 541–544.
[14] J. Wu, Z. Yang, J. Wu, A. Liu, Virtual line group based video vehicle detection algorithm utilizing both luminance and chrominance, in: IEEE Conference on Industrial Electronics and Applications, May 2007, pp. 2854–2858.
[15] T.N. Schoepflin, D.J. Dailey, Algorithms for calibrating roadside traffic cameras and estimating mean vehicle speed, in: IEEE International Conference on Intelligent Transportation Systems, 2007, pp. 277–283.
[16] M. Weng, G. Huang, X. Da, A new interframe difference algorithm for moving target detection, in: Proceedings of International Congress on Image and Signal Processing, October 2010, pp. 285–289.
[17] A. Lookingbill, J. Rogers, D. Lieb, J. Curry, et al., Reverse optical flow for self-supervised adaptive autonomous robot navigation, Int. J. Comput. Vis. 74 (3) (2007) 287–302.
[18] Y. Dong, Video Motion Detection Based on Optical Flow Field, M.E. Dissertation, Department of Information Science and Engineering, Shandong University, 2008.
[19] B.K.P. Horn, B.G. Schunck, Determining optical flow, Artif. Intell. 17 (1981) 185–203.
[20] G. Garibotto, P. Castello, E.D. Ninno, P. Pedrazzi, G. Zan, Speed vision: speed measurement by license plate reading and tracking, in: Proceedings of IEEE International Conference on Intelligent Transportation Systems, 2001, pp. 585–590.
[21] X.C. He, N.H.C. Yung, A novel algorithm for estimating vehicle speed from two consecutive images, in: Proceedings of IEEE Workshop on Applications of Computer Vision, 2007, pp. 12–18.
[22] D.J. Dailey, F.W. Cathey, S. Pumrin, An algorithm to estimate mean traffic speed using uncalibrated cameras, IEEE Trans. Intell. Transp. Syst. 1 (2) (2000) 98–107.
[23] F.W. Cathey, D.J. Dailey, A novel technique to dynamically measure vehicle speed using uncalibrated roadway cameras, in: Proc. IEEE Symp. Intell. Veh., June 2005, pp. 777–782.
[24] L. Grammatikopoulos, G.E. Karras, E. Petsa, Automatic estimation of vehicle speed from uncalibrated video sequences, in: Proc. Int. Symp. Mod. Technol., November 2005, pp. 332–338.
[25] C. Maduro, K. Batista, P. Peixoto, J. Batista, Estimation of vehicle velocity and traffic intensity using rectified images, in: Proceedings of IEEE International Conference on Image Processing, October 2008, pp. 777–780.
