

International Journal of Computational Intelligence and Information Security, April 2013, Vol. 4 No. 4, ISSN: 1837-7823


    Reducing Support Vector Machine Complexity by Implementing Kalman

    Filter

    Muhsin Hassan, Rajprasad Rajkumar, Dino Isa, Nik Akram

Department of Electrical and Electronic Engineering, Faculty of Engineering, University of Nottingham Malaysia Campus

    Jalan Broga, 43500, Semenyih, Selangor

Abstract

This paper aims to demonstrate the capability of the Kalman Filter to reduce Support Vector Machine complexity. In an experiment on pipeline defect classification, the Kalman Filter not only improved the classification accuracy of the Support Vector Machine; it also reduced the complexity of the Support Vector Machine, since the lower error meant fewer support vectors were required to solve the classification problem. In this experiment, Additive White Gaussian Noise is added to the datasets to study the performance of the Support Vector Machine in a noisy environment. The classification problem is then solved with three separate techniques: the Support Vector Machine alone, a Discrete Wavelet Transform and Support Vector Machine combination, and finally the proposed combination of Kalman Filter and Support Vector Machine. Results from the experiment show that the Kalman Filter and Support Vector Machine combination not only improves Support Vector Machine accuracy but also reduces the complexity of the Support Vector Machine compared with the two other techniques.

    Keywords: Artificial Intelligence, Discrete Wavelet Transform, Kalman Filter, Support Vector Machine.

1. Introduction

In pipeline fault diagnosis, the Support Vector Machine (SVM) has been proposed as a technique to monitor and classify defects in pipelines [1]. The Support Vector Machine has proven to be a better alternative to Artificial Neural Networks (ANN) in machine learning studies, owing to the advantage of Structural Risk Minimization over Empirical Risk Minimization [2]. This advantage gives SVM greater generalization, which is the aim in statistical learning. The SVM's ability to solve problems with small sample sets and high dimensions makes it a popular choice for fault diagnosis problems [2].

In a previous experiment, different types of defects with varying depths, simulated in the laboratory, were classified using SVM. The datasets used in that experiment had Additive White Gaussian Noise added to study the performance of a hybrid technique for SVM, namely the Kalman Filter + SVM (KF+SVM) combination [3]. In practical scenarios, datasets obtained from the field are susceptible to noise from the surrounding environment, and this noise can greatly degrade the accuracy of SVM classification. To maintain a high level of SVM classification accuracy, a hybrid combination of KF and SVM has been proposed to counter this problem: the Kalman Filter is used as a pre-processing technique to de-noise the datasets, which are then classified using SVM. A popular de-noising technique used with the Support Vector Machine to filter out noise, the Discrete Wavelet Transform (DWT), is included in this study as a benchmark for comparison with the KF+SVM technique. The Discrete Wavelet Transform is widely used as an SVM noise-filtering technique [4] [5] and has become a tool of choice due to its time-space frequency analysis, which is particularly useful for pattern recognition [6]. The KF+SVM combination shows promising results in improving SVM accuracy. Even though the Kalman Filter is not as widely used for de-noising in SVM applications as the DWT, it has the potential to perform well as a de-noising technique for SVM. The KF+SVM combination not only improves the classification accuracy of SVM; it is also found to reduce the complexity of the SVM by reducing the number of support vectors used to solve the classification problem. Some researchers consider SVM to be a slower process compared with other approaches of similar generalization performance [7], and work has been done to rectify this by reducing the computational time of SVM through reducing the number of support vectors used [7] [8] [9] [10]. It has also been found that reducing the number of support vectors can reduce computational time, complexity, and over-fitting. This paper shows the potential of the Kalman Filter + Support Vector Machine combination to improve SVM accuracy and, at the same time, reduce the number of support vectors used, which improves training time and reduces complexity and the over-fitting phenomenon.


2. Literature Review

    2.1 Pipeline

Corrosion is a major problem in offshore oil and gas pipelines and can result in catastrophic pollution and wastage of raw material [11]. Frequent leaks of gas and oil due to ruptured pipes around the world are calling for better and more efficient methods of monitoring pipelines [12]. Currently, pipeline inspection is done at predetermined intervals using techniques such as pigging [13]. Other non-destructive testing techniques are also performed at predetermined intervals, where operators must be physically present to perform measurements and make judgments on the integrity of the pipes. The condition of the pipe between these testing periods, which can span several months, can go unmonitored. The use of a continuous monitoring system is therefore needed.

Non-Destructive Testing (NDT) techniques using ultrasonic sensors are ideal for monitoring pipelines, as they do not interrupt media flow and can give precise information on the condition of the pipe wall. Long range ultrasonic testing (LRUT) utilizes guided waves to inspect long distances from a single location [14]. LRUT was specifically designed for the inspection of corrosion under insulation (CUI) and has many advantages over other NDT techniques, which has seen its widespread use in many other applications [15]. It is also able to detect both internal and external corrosion, which makes it a more efficient and cost-saving alternative. With recent developments in permanent mounting systems using a special compound, continuous monitoring has now become a reality [16].

An LRUT system was developed in the laboratory for 6-inch diameter pipes using a ring of 8 piezoelectric transducers [17]. Signals were acquired from the pipe using a high-speed data acquisition system. The developed LRUT system was tested using a section of a carbon steel pipe, 140mm in diameter and 5mm thick. A 1.5m pipe section was cut out and various defects were simulated as shown in Table 1. A full circumferential defect with 3mm axial length was created using a lathe machine. Depths of 1mm, 2mm, 3mm and 4mm were created for this defect, and at each stage the pipe was tested using the LRUT system.

    2.2 Support Vector Machine

Guided wave signals have been used by many researchers, utilizing different signal processing techniques as a means of identifying different types and depths of defects. Advanced signal processing techniques such as neural networks have also been used to quantify and classify defects from the guided wave signals [18] [19]. Since neural networks are a supervised learning algorithm, the data required for their training phase are obtained from simulation methods. Simulation is performed by modeling the damage based on reflection coefficients or finite elements [20]. The trained neural network model is then tested on data obtained experimentally, and such models have been shown to obtain very good accuracy in classifying defects in pipes, bars and plates [21].

Support vector machines, developed by V. Vapnik, are increasingly being used for classification problems due to their promising empirical performance and excellent generalization ability for small sample sizes with high dimensions. The SVM formulation uses the Structural Risk Minimization (SRM) principle, which has been shown to be superior to the traditional Empirical Risk Minimization (ERM) principle used by conventional neural networks. SRM minimizes an upper bound on the expected risk, while ERM minimizes the error on the training data. It is this difference which equips SVM with a greater ability to generalize [22].

Given a set of independent and identically distributed (iid) training samples, S = {(x1, y1), (x2, y2), ..., (xn, yn)}, where xi ∈ R^N and yi ∈ {-1, 1} denote the input and the output of the classification, the SVM functions by creating a hyperplane that separates the dataset into two classes. According to the SRM principle, there will be just one optimal hyperplane, which has the maximum distance (called the maximum margin) to the closest data points of each class, as shown in Fig. 1. These points, closest to the optimal hyperplane, are called Support Vectors (SV). The hyperplane is defined by the equation w.x + b = 0 (1), and therefore the maximal margin can be found by minimizing ||w||^2 (2) [22].
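A tiny worked example makes equations (1) and (2) concrete. The two-point dataset below is not from the paper; it is a minimal numerical sketch in which the optimal hyperplane can be written down by hand and the margin checked:

```python
import numpy as np

# Worked check of equations (1)-(2) on a two-point toy set (not from
# the paper): x1 = (1, 1) with y1 = +1 and x2 = (-1, -1) with y2 = -1.
# Both points are support vectors; the optimal hyperplane is
# w = (0.5, 0.5), b = 0, derived by hand.
w = np.array([0.5, 0.5])
b = 0.0
X = np.array([[1.0, 1.0], [-1.0, -1.0]])
y = np.array([1.0, -1.0])

# The separation constraint yi.(w.xi + b) >= 1 is tight (= 1) at
# support vectors.
margins = y * (X @ w + b)
print(margins)                # [1. 1.]

# The full gap between the classes is 2/||w||, maximized by (2).
gap = 2.0 / np.linalg.norm(w)
print(round(gap, 4))          # 2.8284, the distance between the two points
```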


    Figure 1: Optimal Hyperplane and maximum margin for a two class data [23].

The Optimal Separating Hyperplane can thus be found by minimizing (2) under the constraint (3) that the training data is correctly separated [24].

yi.(xi.w + b) ≥ 1, ∀i (3)

The concept of the Optimal Separating Hyperplane can be generalized for the non-separable case by introducing a cost for violating the separation constraints (3). This can be done by introducing positive slack variables ξi in constraints (3), which then become,

yi.(xi.w + b) ≥ 1 - ξi, ∀i (4)

If an error occurs, the corresponding ξi must exceed unity, so Σi ξi is an upper bound for the number of classification errors. Hence a logical way to assign an extra cost for errors is to change the objective function (2) to be minimized into:

min { ||w||^2 + C.(Σi ξi) } (5)

where C is a chosen parameter. A larger C corresponds to assigning a higher penalty to classification errors. Minimizing (5) under constraint (4) gives the Generalized Optimal Separating Hyperplane. This is a Quadratic Programming (QP) problem which can be solved using the method of Lagrange multipliers [25].

After performing the required calculations [22, 24], the QP problem can be solved by finding the Lagrange multipliers, αi, that maximize the objective function in (6),

W(α) = Σ(i=1..n) αi - (1/2) Σ(i,j=1..n) αi αj yi yj (xiᵀ xj) (6)

    subject to the constraints,

0 ≤ αi ≤ C, i = 1, ..., n, and Σ(i=1..n) αi yi = 0 (7)

The new objective function is in terms of the Lagrange multipliers αi only. It is known as the dual problem: if we know w, we know all the αi; if we know all the αi, we know w. Many of the αi are zero, so w is a linear combination of a small number of data points. The xi with non-zero αi are called the support vectors [26]. The decision boundary is determined only by the SVs. Let tj (j = 1, ..., s) be the indices of the s support vectors. We can write,

w = Σ(j=1..s) αtj ytj xtj (8)
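As a toy numerical check of the dual quantities in (6)-(8), consider again two points x1 = (1, 1) with y1 = +1 and x2 = (-1, -1) with y2 = -1 (not data from the paper). The constraint Σ αi yi = 0 forces α1 = α2 = a, and maximizing W gives a = 0.25, derived by hand:

```python
import numpy as np

# Toy check of the dual (6)-(8) on two hand-picked points; the
# multiplier value alpha = 0.25 is derived by hand, not from the paper.
X = np.array([[1.0, 1.0], [-1.0, -1.0]])
y = np.array([1.0, -1.0])
alpha = np.array([0.25, 0.25])

# Equation (8): w is a linear combination of the support vectors only.
w = (alpha * y) @ X
print(w)                      # [0.5 0.5], the primal solution

# Dual objective (6) with the linear kernel matrix K[i, j] = xi.xj.
K = X @ X.T
W = alpha.sum() - 0.5 * (alpha * y) @ K @ (alpha * y)
print(W)                      # 0.25 = ||w||^2 / 2 at the optimum
```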

So far we have used a linear separating decision surface. In the case where the decision function is not a linear function of the data, the data will be mapped from the input space (i.e. the space in which the data lives) into a high-dimensional space (the feature space) through a non-linear transformation function Φ(·). In this (high-dimensional) feature space, the (Generalized) Optimal Separating Hyperplane is constructed. This is illustrated in Fig. 2 [27].


    Figure 2: Mapping onto higher dimensional feature space

    By introducing the kernel function,

K(xi, xj) = Φ(xi).Φ(xj), (9)

it is not necessary to explicitly know Φ(·), and the optimization problem (6) can be translated directly to the more general kernel version [23],

W(α) = Σ(i=1..n) αi - (1/2) Σ(i,j=1..n) αi αj yi yj K(xi, xj), (10)

subject to 0 ≤ αi ≤ C and Σ(i=1..n) αi yi = 0.

After the αi variables are calculated, the equation of the hyperplane, d(x), is determined by

d(x) = Σ(i=1..l) αi yi K(x, xi) + b (11)

The indicator function, used to classify test data (from sensors), is the sign of d: the new data z is classified as class 1 if d(z) > 0, and as class 2 if d(z) < 0.
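To illustrate the kernel decision function (11) and the indicator function, the sketch below classifies XOR-style points with an RBF kernel K(x, x') = exp(-||x - x'||^2 / (2σ^2)), a pattern no linear hyperplane can separate. The support vectors and the equal multipliers are hand-picked for symmetry, not solutions of (10):

```python
import numpy as np

# Illustrative kernel decision function (11) on XOR-style data; the
# alphas are hand-picked for symmetry, not solved from the dual (10).
def rbf(a, b, sigma=1.0):
    return np.exp(-np.sum((a - b) ** 2) / (2 * sigma ** 2))

sv = np.array([[0, 0], [1, 1], [0, 1], [1, 0]], dtype=float)
y = np.array([1.0, 1.0, -1.0, -1.0])
alpha = np.ones(4)            # illustrative Lagrange multipliers
b = 0.0

def d(x):
    # Equation (11): d(x) = sum_i alpha_i yi K(x, xi) + b
    return sum(a * yi * rbf(x, xi) for a, yi, xi in zip(alpha, y, sv)) + b

# Indicator function: the sign of d classifies the new data point.
print(np.sign(d(np.array([0.1, 0.1]))))   #  1.0, near the +1 corners
print(np.sign(d(np.array([0.9, 0.1]))))   # -1.0, near the -1 corners
```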


2.3 Kalman Filter

G. Welch and G. Bishop [34] define the Kalman Filter as "a set of mathematical equations that provides an efficient computational (recursive) means to estimate the state of a process, in a way that minimizes the mean of the squared error." According to Grewal and Andrews [35], "Theoretically the Kalman Filter is an estimator for what is called the linear-quadratic problem, which is the problem of estimating the instantaneous state of a linear dynamic system perturbed by white noise by using measurements linearly related to the state but corrupted by white noise. The resulting estimator is statistically optimal with respect to any quadratic function of estimation error." In layman's terms, "to control a dynamic system, it is not always possible or desirable to measure every variable that you want to control, and the Kalman Filter provides a means for inferring the missing information from indirect (and noisy) measurements. The Kalman filter is also used for predicting the likely future courses of dynamic systems that people are not likely to control" [35].

The equations for a simple Kalman Filter are given below [36]. For a linear system, the process model from time k to time k+1 is described as:

xk+1 = F xk + G uk + wk (13)

where xk, xk+1 are the system state (vector) at times k, k+1, F is the system transition matrix, G is the gain of the control uk, and wk is the zero-mean Gaussian process noise, wk ~ N(0, Q).

Huang [36] considers the state estimation problem, in which the true system state is not available and needs to be estimated. The initial state x0 is assumed to follow a known Gaussian distribution, x0 ~ N(x̂0, P0). The objective is to estimate the state at each time step from the process model and the observations. The observation model at time k+1 is given by

zk+1 = H xk+1 + vk+1 (14)

where H is the observation matrix and vk+1 is the zero-mean Gaussian observation noise, vk+1 ~ N(0, R).

Suppose the knowledge of xk at time k is

xk ~ N(x̂k, Pk) (15)

Then xk+1 at time k+1 follows

xk+1 ~ N(x̂k+1, Pk+1) (16)

where x̂k+1, Pk+1 can be computed by the following Kalman filter formulas.

Predict using the process model:

x̄k+1 = F x̂k + G uk (17)

P̄k+1 = F Pk Fᵀ + Q (18)

Update using the observation:

x̂k+1 = x̄k+1 + K (zk+1 - H x̄k+1) (19)

Pk+1 = P̄k+1 - K S Kᵀ (20)

where the innovation covariance S (zk+1 - H x̄k+1 is called the innovation) and the Kalman gain K are given by

S = H P̄k+1 Hᵀ + R (21)

K = P̄k+1 Hᵀ S⁻¹ (22)
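The paper does not spell out the filter configuration used for de-noising, but a minimal scalar instance of equations (13)-(22), assuming F = H = 1, G = 0 (a constant-state model) and hand-chosen Q and R, can be sketched as:

```python
import numpy as np

# Minimal scalar Kalman filter following (13)-(22) with F = H = 1 and
# G = 0. This is a sketch under assumed settings, not the paper's
# actual pre-processing configuration; Q and R are hand-chosen.
def kalman_denoise(z, Q=1e-4, R=0.5, x0=0.0, P0=1.0):
    x, P = x0, P0
    out = []
    for zk in z:
        P = P + Q                 # predict (17)-(18): x unchanged, P grows
        S = P + R                 # innovation covariance (21)
        K = P / S                 # Kalman gain (22)
        x = x + K * (zk - x)      # state update (19)
        P = P - K * S * K         # covariance update (20)
        out.append(x)
    return np.array(out)

rng = np.random.default_rng(0)
clean = np.ones(500)
noisy = clean + rng.normal(0.0, 0.7, 500)   # stand-in for AWGN
denoised = kalman_denoise(noisy)

# After the initial transient, the filtered signal hugs the clean one.
mse_noisy = np.mean((noisy[100:] - clean[100:]) ** 2)
mse_denoised = np.mean((denoised[100:] - clean[100:]) ** 2)
print(mse_denoised < mse_noisy)             # True
```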

2.4 Discrete Wavelet Transform (DWT)

A discrete wavelet transform (DWT) is basically a wavelet transform for which the wavelets are sampled in discrete time. The DWT of a signal x is calculated by passing it through a series of filters. First the samples are passed through a low-pass filter with impulse response g, resulting in a convolution of the two (23). The signal is simultaneously decomposed using a high-pass filter h (24).

y_low[n] = Σk x[k] g[2n - k] (23)


y_high[n] = Σk x[k] h[2n - k] (24)

The outputs of equations (23) and (24) give the detail coefficients (from the high-pass filter) and the approximation coefficients (from the low-pass filter). For efficient computation it is important that the two filters are related to each other; such a pair is known as a quadrature mirror filter [37].

However, since half the frequencies of the signal have now been removed, half the samples can be discarded according to Nyquist's rule. The filter outputs are therefore downsampled by 2, as illustrated in Figure 3. This decomposition has halved the time resolution, since only half of each filter output characterizes the signal. However, each output has half the frequency band of the input, so the frequency resolution has been doubled. The coefficients are used as inputs to the SVM [38].

    Figure 3: DWT filter decomposition

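A one-level version of equations (23)-(24), using the Haar filter pair (an assumed choice; the paper does not name its wavelet), can be sketched as:

```python
import numpy as np

# One level of the DWT in (23)-(24) with the Haar filter pair (an
# assumed choice, not stated in the paper): g is the low-pass filter,
# h the high-pass filter, and every second convolution sample is kept.
g = np.array([1.0, 1.0]) / np.sqrt(2)   # low-pass impulse response
h = np.array([1.0, -1.0]) / np.sqrt(2)  # high-pass impulse response

def dwt_level(x):
    # Convolve, then downsample by 2 (the phase is chosen so each
    # output coefficient covers one non-overlapping input pair).
    approx = np.convolve(x, g)[1::2]    # approximation coefficients (23)
    detail = np.convolve(x, h)[1::2]    # detail coefficients (24)
    return approx, detail

x = np.array([4.0, 6.0, 10.0, 12.0])
approx, detail = dwt_level(x)
print(approx)   # scaled pairwise sums: [10, 22] / sqrt(2)
print(detail)   # scaled pairwise differences: [2, 2] / sqrt(2)
```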

3. Methodology

In this experiment, we used the same datasets as in the previous experiment [3]. Additive White Gaussian Noise is added to the datasets to contaminate them with noise, in order to study the effect of noise on SVM classification accuracy.

Figure 4: Flowchart of the current work to obtain a comparison of SVM, DWT+SVM and KF+SVM accuracy

Following the flowchart above, the same contaminated data is used for all three techniques, which makes the comparison between SVM, DWT+SVM and KF+SVM more reliable:

1. For the SVM technique, the data is run through SVM alone to measure its performance; this is used as the benchmark.


2. For DWT+SVM, the data is first filtered using the DWT technique before the filtered data is used as input to SVM.

3. The same applies to the KF+SVM technique, whereby the data is first filtered, in this case using the Kalman Filter, and the filtered data is used as input to SVM.

The basic SVM classification technique is sufficient for classifying pipeline defects; however, problems occur in the presence of noise. Several tests in which AWGN was added to the data, to mimic noise in real applications, show that noise can greatly affect SVM classification accuracy. In the MATLAB tests, the AWGN levels used range from 5dB to -10dB. In the proposed solution, additive white Gaussian noise (AWGN) is added using a MATLAB function. AWGN is widely used to mimic noise because of its simplicity and tractable mathematical models, which are useful for understanding system behavior [39][40]. The SVM classification accuracy and the number of support vectors used for each technique are recorded and discussed in the section below.
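The MATLAB function itself is not reproduced here, but its behavior, adding white Gaussian noise at a requested signal-to-noise ratio in dB, can be sketched in numpy. The helper add_awgn below is hypothetical, not from the paper, and measures the signal power from the data:

```python
import numpy as np

# Hypothetical numpy stand-in for MATLAB's awgn(): add white Gaussian
# noise so that the signal-to-noise ratio equals snr_db (in dB),
# with the signal power measured from the data itself.
rng = np.random.default_rng(0)

def add_awgn(signal, snr_db):
    sig_power = np.mean(signal ** 2)
    noise_power = sig_power / (10.0 ** (snr_db / 10.0))
    return signal + rng.normal(0.0, np.sqrt(noise_power), signal.shape)

x = np.sin(np.linspace(0.0, 8.0 * np.pi, 2000))
for snr in (5, -10):      # the extremes of the range used in the paper
    noisy = add_awgn(x, snr)
    est = 10.0 * np.log10(np.mean(x ** 2) / np.mean((noisy - x) ** 2))
    print(round(est, 1))  # measured SNR, close to the requested value
```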

4. Discussion and Results

The figures and table below show the number of support vectors used in SVM, DWT+SVM and KF+SVM for pipeline datasets 36, 56 and 76. KF+SVM not only shows better classification accuracy than SVM or DWT+SVM; it also uses fewer support vectors than the other techniques, which shows that the KF+SVM technique can reduce the complexity of the SVM classification.

For dataset 36, KF+SVM shows better classification accuracy. A reduction in support vectors, however, occurred only at AWGN levels of 3, -1, -3 and -5dB, with an average reduction of about 5% in the number of support vectors.

For dataset 56, KF+SVM again shows superior classification results, while DWT+SVM is not able to improve SVM classification accuracy. For this dataset, DWT+SVM is also unable to reduce the number of support vectors used. On the contrary, KF+SVM managed to reduce the number of support vectors by 21.49% - 30.33%. Only at -10dB did KF+SVM fail to decrease the number of support vectors significantly. One should note, however, that the classification accuracy of KF+SVM is still improved at all noise levels compared with SVM and DWT+SVM.

The dataset 76 results show that KF+SVM improves SVM classification accuracy and at the same time significantly reduces the number of support vectors used compared with the other techniques, by 21.76% - 38.14%. Again at -10dB, there is not much reduction in the number of support vectors used.


Table 1: Number of support vectors used for each technique (reduction compares KF+SVM with SVM)

Dataset / AWGN (dB)       5        3        1       -1       -3       -5      -10
36: SVM                1404     1999     1660     2101     2165     2211     2225
36: DWT+SVM            1949     1999     2054     2101     2165     2221     2225
36: KF+SVM             1878     1910     1947     1981     2041     2105     2225
36: % reduction      -33.76%    4.45%  -17.29%    5.71%    5.73%    4.79%    0.00%
56: SVM                1152     1309     1520     1754     1954     2141     2225
56: DWT+SVM            1152     1309     1520     1754     1954     2157     2225
56: KF+SVM              844      947     1075     1222     1448     1681     2209
56: % reduction       26.74%   27.65%   29.28%   30.33%   25.90%   21.49%    0.72%
76: SVM                1012     1212     1476     1787     2054     2206     2225
76: DWT+SVM             856     1212     1472     1824     2054     2206     2225
76: KF+SVM              626      744      851     1121     1423     1726     2224
76: % reduction       38.14%   38.61%   42.34%   37.27%   30.72%   21.76%    0.04%

Figure 5: Comparison results of classification accuracy between SVM, DWT+SVM and KF+SVM for dataset 36


Figure 6: Comparison results of the number of support vectors used between SVM, DWT+SVM and KF+SVM for dataset 36

Figure 7: Comparison results of classification accuracy between SVM, DWT+SVM and KF+SVM for dataset 56

Figure 8: Comparison results of the number of support vectors used between SVM, DWT+SVM and KF+SVM for dataset 56


Figure 9: Comparison results of classification accuracy between SVM, DWT+SVM and KF+SVM for dataset 76

Figure 10: Comparison results of the number of support vectors used between SVM, DWT+SVM and KF+SVM for dataset 76


5. Conclusion

The Kalman Filter and Support Vector Machine combination not only helps to improve Support Vector Machine classification accuracy; it is also shown to reduce the number of support vectors used in the SVM, which helps to reduce computational time and over-fitting problems. In terms of classification accuracy, it improves SVM accuracy by around 10%, while the DWT+SVM technique struggles to improve SVM accuracy in noisy conditions. In terms of support vector reduction, it reduces the number of support vectors used to solve the same classification problem by around 20-38% compared with the DWT+SVM or SVM-only techniques (except at -10dB AWGN). Hence this experiment shows that the Kalman Filter and Support Vector Machine combination has high potential for solving defect classification in pipeline defect monitoring. In the next experiment, we would like to study the performance of this technique in several other domains, such as landslide and flood monitoring and prediction.

ACKNOWLEDGEMENTS

I would like to thank the Ministry of Science, Technology and Innovation (MOSTI) Malaysia for sponsoring my PhD. I would also like to extend my gratitude to Prof. Dino Isa and Dr. Rajprasad for providing the pipeline datasets for the MATLAB simulations, and to Nik Akram for his help in this project.

REFERENCES

[1] D. Isa, R. Rajkumar, K.C. Woo, "Pipeline Defect Detection Using Support Vector Machine", 6th WSEAS International Conference on Circuits, Systems, Electronics, Control and Signal Processing, Cairo, Egypt, Dec 2007, pp. 162-168.

[2] Ming Ge, R. Du, Cuicai Zhang, Yangsheng Xu, "Fault diagnosis using support vector machine with an application in sheet metal stamping operations", Mechanical Systems and Signal Processing 18, 2004, pp. 143-159.

[3] M. Hassan, R. Rajkumar, D. Isa, and R. Arelhi, "Kalman Filter as a Pre-processing technique to improve Support Vector Machine", Sustainable Utilization and Development in Engineering and Technology (STUDENT), Oct 2011, DOI: 10.1109/STUDENT.2011.6089335.

[4] Rajpoot, K.M. and Rajpoot, N.M., "Wavelets and support vector machines for texture classification", Multitopic Conference, 2004, Proceedings of INMIC 2004, 8th International, pp. 328-333.

[5] Lee, Kyungmi and Estivill-Castro, Vladimir, "Support vector machine classification of ultrasonic shaft inspection data using discrete wavelet transform", Proceedings of the International Conference on Machine Learning, MLMTA'04, 2004.

[6] Sandeep Chaplot, L.M. Patnaik, and N.R. Jagannathan, "Classification of magnetic resonance brain images using wavelets as input to support vector machine and neural network", Biomedical Signal Processing and Control 1, 2006, pp. 86-92.

[7] C.J.C. Burges, "Simplified Support Vector Decision Rules", in Machine Learning: Proceedings of the Thirteenth International Conference (ICML 1996), Bari, Italy, July 1996.

[8] J. Fortuna and D. Capson, "Improved support vector classification using PCA and ICA feature space modification", Pattern Recognition 37, 2004, pp. 1117-1129.

[9] G. Lebrun, C. Charrier, and H. Cardot, "SVM training time reduction using vector quantization", Proceedings of the 17th International Conference on Pattern Recognition (ICPR 2004), Vol. 1, pp. 160-163.

[10] "Building Support Vector Machines with Reduced Classifier Complexity", Journal of Machine Learning Research 7, 2006, pp. 1493-1515.

[11] Lozev, M.G., Smith, R.W. and Grimmett, B.B., "Evaluation of Methods for Detecting and Monitoring of Corrosion Damage in Risers", Journal of Pressure Vessel Technology, Transactions of the ASME, Vol. 127, 2005.

[12] Anonymous (2009, April 1), "Gazprom reduces gas supplies by 40% to Balkans after pipeline rupture", [Online]. Available: www.chinaview.cn

[13] Lebsack, S., "Non-invasive inspection method for unpiggable pipeline sections", Pipeline & Gas Journal, 59, 2002.

[14] Demma A., Cawley P., Lowe M., Roosenbrand A.G., Pavlakovic B., "The reflection of guided waves from notches in pipes: A guide for interpreting corrosion measurements", NDT & E International, Vol. 37, Issue 3, pp. 167-180, Elsevier, 2004.

[15] Wassink and Engel, Applus RTD Rotterdam, The Netherlands, (2008, April 8), "Comparison of guided wave inspection and alternative strategies for inspection of insulated pipelines", Guided Wave Seminar, Italian Association of Non-destructive Monitoring and Diagnostics (AIPnD). [Online] Available: www.aipnd.it/onde-guidate/GWseminar.htm

[16] Anonymous, "Teletest Focus Goes Permanent", Plant Integrity Limited, [Online] Available: www.plantintegrity.co.uk


[17] Dino Isa, Rajprasad Rajkumar, "Pipeline defect prediction using long range ultrasonic testing and intelligent processing", Malaysian International NDT Conference and Exhibition, MINDTCE 09, Kuala Lumpur, July 2009.

[18] Rizzo, P., Bartoli, I., Marzani, A. and Lanza di Scalea, F., "Defect Classification in Pipes by Neural Networks Using Multiple Guided Ultrasonic Wave Features Extracted After Wavelet Processing", ASME Journal of Pressure Vessel Technology, Special Issue on NDE of Pipeline Structures, 127(3), 2005, pp. 294-303.

[19] Cau F., Fanni A., Montisci A., Testoni P. and Usai M., "Artificial Neural Network for Non-Destructive Evaluation with Ultrasonic Waves in not Accessible Pipes", in Industry Applications Conference, Fortieth IAS Annual Meeting, Vol. 1, 2005, pp. 685-692.

[20] Wang, C.H., Rose, L.R.F., "Wave reflection and transmission in beams containing delamination and inhomogeneity", J. Sound Vib. 264, 2003, pp. 851-872.

[21] K.L. Chin and M. Veidt, "Pattern recognition of guided waves for damage evaluation in bars", Pattern Recognition Letters 30, 2009, pp. 321-330.

[22] V. Vapnik, The Nature of Statistical Learning Theory, 2nd edition, Springer, 1999.

[23] Klaus-Robert Müller, Sebastian Mika, Gunnar Rätsch, Koji Tsuda, and Bernhard Schölkopf, "An Introduction to Kernel-Based Learning Algorithms", IEEE Transactions on Neural Networks, Vol. 12, No. 2, March 2001, p. 181.

[24] Burges, Christopher J.C., "A Tutorial on Support Vector Machines for Pattern Recognition", Bell Laboratories, Lucent Technologies, Data Mining and Knowledge Discovery, 1998.

[25] Haykin, Simon, Neural Networks: A Comprehensive Foundation, 2nd Edition, Prentice Hall, 1999.

[26] Gunn, Steve, "Support Vector Machines for Classification and Regression", University of Southampton ISIS, 1998, [Online] Available: http://www.ecs.soton.ac.uk/~srg/publications/pdf/SVM.pdf

[27] Law, Martin, "A Simple Introduction to Support Vector Machines", Lecture for CSE 802, Department of Computer Science and Engineering, Michigan State University, [Online] Available: www.cse.msu.edu/~lawhiu/intro_SVM_new.ppt

[28] Kecman, Vojislav, "Support Vector Machines Basics", School of Engineering, Report 616, The University of Auckland, 2004, [Online] Available: http://genome.tugraz.at/MedicalInformatics2/Intro_to_SVM_Report_6_16_V_Kecman.pdf

[29] Sweeney, N., "Air-to-air missile vector scoring", Control Applications (CCA), IEEE Control Systems Society, 2011 Multi-Conference on Systems and Control, 2011, pp. 579-586.

[30] Gipson, J., 2012, "Air-to-air missile enhanced scoring with Kalman smoothing", Master's Thesis, Air Force Institute of Technology, Wright-Patterson Air Force Base, Ohio.

[31] Gaylor, D.E., 2003, "Integrated GPS/INS Navigation System Design for Autonomous Spacecraft Rendezvous", PhD Thesis, The University of Texas, Austin.

[32] Huang, Shian-Chang, "Combining Extended Kalman Filters and Support Vector Machines for Online Price Forecasting", Advances in Intelligent Systems Research, Proceedings of the 9th Joint Conference on Information Sciences (JCIS), 2006.

[33] Lucie Daubigney and Olivier Pietquin, "Single-trial P300 detection with Kalman Filtering and SVMs", ESANN 2011 Proceedings, European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning, 2011.

[34] Welch G. and Bishop G., 2006, "An Introduction to the Kalman Filter", Department of Computer Science, University of North Carolina, [Online] Available: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.79.6578&rep=rep1&type=pdf

[35] Grewal M.S. and Andrews A.P., 2001, Kalman Filtering: Theory and Practice Using MATLAB, 2nd Edition, Wiley-Interscience, New York.

[36] Shoudong Huang, "Understanding Extended Kalman Filter Part II: Multidimensional Kalman Filter", March 2009, [Online] Available: http://services.eng.uts.edu.au/~sdhuang/Kalman%20Filter_Shoudong.pdf

[37] Mallat, S., A Wavelet Tour of Signal Processing, 1999, Elsevier.

[38] Lee, K. and Estivill-Castro, V., "Classification of ultrasonic shaft inspection data using discrete wavelet transform", in Proceedings of the Third IASTED International Conference on Artificial Intelligence and Applications, ACTA Press, pp. 673-678.

[39] Lin C.J., Hong S.J., and Lee C.Y., "Using least squares support vector machines for adaptive communication channel equalization", International Journal of Applied Science and Engineering, 2005, Vol. 3, pp. 51-59.

[40] Sebald, D.J. and Bucklew, J.A., "Support Vector Machine Techniques for Nonlinear Equalization", IEEE Transactions on Signal Processing, Vol. 48, No. 11, 2000, pp. 3217-3226.