Research Article: A Novel Classification Approach through Integration of Rough Sets and Back-Propagation Neural Network
Lei Si,1 Xin-hua Liu,1,2 Chao Tan,1 and Zhong-bin Wang1

1 School of Mechatronic Engineering, China University of Mining & Technology, Xuzhou 221116, China
2 Xuyi Mine Equipment and Materials R&D Center, China University of Mining & Technology, Huai'an 211700, China

Correspondence should be addressed to Zhong-bin Wang; wangzbpaper@126.com

Received 13 September 2013; Revised 12 March 2014; Accepted 6 April 2014; Published 24 April 2014

Academic Editor: Olabisi Falowo

Copyright © 2014 Lei Si et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Classification is an important theme in data mining. Rough sets and neural networks are among the most common techniques applied to data mining problems. In order to extract useful knowledge and classify ambiguous patterns effectively, this paper presents a hybrid algorithm based on the integration of rough sets and a BP neural network to construct a novel classification system. The attribute values are first discretized through a PSO algorithm to establish a decision table. An attribute reduction algorithm and a rules extraction method based on rough sets are proposed, and the flowchart of the proposed approach is designed. Finally, a prototype system is developed and some simulation examples are carried out. Simulation results indicate that the proposed approach is feasible and accurate and outperforms the alternatives.
1. Introduction
Across a wide variety of fields, data are being accumulated at a dramatic pace, especially in the age of the internet [1, 2]. There is much useful information hidden in the accumulated voluminous data, but it is very hard for us to obtain it. Thus, a new generation of computational tools is needed to assist humans in extracting knowledge and classifying the rapidly growing digital data; otherwise these data are useless for us.
Neural networks are applied in several engineering fields, such as classification problems and pattern recognition. An artificial neural network, in its most general form, attempts to produce systems that work in a similar way to biological nervous systems. It has the ability to simulate human thinking and owns powerful function and incomparable superiority in terms of establishing nonlinear and experiential knowledge simulation models [3–6]. The back-propagation neural network (BP neural network for short) is the core of feed-forward networks and embodies the essence of artificial neural networks. However, the conventional BP algorithm has a slow convergence rate, weak fault tolerance, and nonunique results. Although many improved strategies have been proposed, such as the additional momentum method, the adaptive learning method, the elastic BP method, and the conjugate gradient method [7–9], the problems above have still not been solved completely, especially when BP neural networks are applied in multidimensional and uncertain fields. Therefore, newer techniques must be coupled with neural networks to create more efficient and complex intelligent systems [10].
In an effort to model vagueness, rough sets theory was proposed by Pawlak in 1982 [11, 12]. Its outstanding feature is not to need any specific relation description about certain characteristics or attributes, but to determine the approximate region of existing problems and find out the inherent rules through indiscernibility relations and indiscernibility classes. This theory has been successfully applied in many fields such as machine learning, data mining, data analysis, and expert systems [13, 14]. In this paper, rough sets theory and BP neural network are presented as an integrated method because they can discover patterns in ambiguous and imperfect data and provide tools for data analysis. The decision table is established reasonably through discretizing attribute values. The features of the decision table are analyzed using rough sets, and subsequently a classification model based on these features is built through the integration of rough sets and BP neural network. This model then uses the decision rules, which are extracted by the algorithm based on value reduction, for classification problems.

Hindawi Publishing Corporation, Journal of Applied Mathematics, Volume 2014, Article ID 797432, 11 pages. http://dx.doi.org/10.1155/2014/797432
The paper is structured as follows. Some related works from the literature are outlined in Section 2. The basics of rough sets theory and BP neural network are presented in Section 3. The framework and key algorithms are proposed, and the flowchart of the proposed approach is designed, in Section 4. A simulation example and some comparisons are put forward to validate the proposed approach in Section 5. Our conclusions are summarized in Section 6.
2. Literature Review
In recent years, most classifier architectures have been constructed based on artificial intelligence (AI) algorithms, with the unceasing development and improvement of AI theory. In this section, we summarize some recent literature relevant to the construction methods of classification systems. Hassan et al. constructed a classifier based on neural networks and applied rough sets theory to attribute reduction to preprocess the training set of the neural networks [15]. Berardi et al. proposed a principled approach to build and evaluate neural network classification models for the implementation of a decision support system (DSS) and verified the optimization speed and sample classification accuracy [16]. In [17, 18], architectures of neural networks with fuzzy input were proposed to effectively improve the classification performance of neural networks. Hu et al. presented a self-organized feature mapping neural network to reasonably determine the input parameters in pavement structure design [19]. In [20, 21], wavelet neural networks were applied in the construction of classification systems, and better prediction results were obtained. In [22, 23], a classifier in which an improved particle swarm algorithm was used to optimize the parameters of a wavelet neural network was established, and its application to nonlinear identification problems demonstrated strong generalization capability. Sengur et al. described the usage of wavelet packet neural networks for the texture classification problem and provided a wavelet packet feature extractor and a multilayer perceptron classifier [24]. Hassan presented a novel classifier architecture based on rough sets and dynamic scaling in connection with wavelet neural networks, and its effectiveness was verified by experiments [25].
Although many approaches for the establishment of classification systems have been presented in the above literature, they share some common disadvantages, summarized as follows. On the one hand, the classification effect of a single neural network cannot be guaranteed when the network is applied in multidimensional and uncertain fields. On the other hand, existing combination methods of rough sets and neural networks can only deal with classification and recognition problems that possess a discretized dataset and cannot process problems that contain a continuous dataset.
In this paper, a novel classification system based on the integration of rough sets and BP neural network is proposed. The attribute values are discretized through PSO and the decision table is constructed. The proposed model takes account of attribute reduction by discarding redundant attributes, and a rule set is generated from the decision table. A simulation example and comparisons with other methods are carried out, and the proposed approach is shown to be feasible and to outperform the others.
3. Basic Theory
3.1. Rough Sets Theory. Fundamental to rough sets theory is the idea of an information system IS = (U, C), which is essentially a finite data table consisting of columns labeled by attributes and rows labeled by objects of interest, where the entries of the table are the attributes' values. U is a nonempty finite set of objects called a universe, and C is a nonempty finite set of attributes, where each c ∈ C is called a condition attribute. In IS = (U, C), every B ⊆ C generates an indiscernibility relation Ind(B) on U, which can be defined as follows:
$$\mathrm{Ind}(B) = \{(x, y) \in U \times U : b(x) = b(y),\ \forall b \in B\}. \quad (1)$$
U/Ind(B) is a partition of U by B. For every x ∈ U, the equivalence class of x under the relation Ind(B) can be defined as follows:
$$[x]_{\mathrm{Ind}(B)} = \{y \in U : b(y) = b(x),\ \forall b \in B\}. \quad (2)$$
According to Ind(B), two crisp sets $\underline{B}X$ and $\overline{B}X$, called the lower and the upper approximation of the set of objects X, can be defined as follows:

$$\underline{B}X = \{x \in U : [x]_{\mathrm{Ind}(B)} \subseteq X\},$$
$$\overline{B}X = \{x \in U : [x]_{\mathrm{Ind}(B)} \cap X \neq \varnothing\}, \quad (3)$$

where $[x]_{\mathrm{Ind}(B)}$ denotes the equivalence class of x in Ind(B). $\underline{B}X$ is the set of all objects from U which can be certainly classified as elements of X employing the set of attributes B; $\overline{B}X$ is the set of objects of U which can possibly be classified as elements of X using the set of attributes B [25].
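The indiscernibility relation of equations (1)-(3) is straightforward to compute. A minimal sketch, assuming objects are stored as a dict of attribute values; all helper names here are illustrative, not from the paper's implementation:

```python
def partition(U, data, B):
    """Equivalence classes of Ind(B): x ~ y iff b(x) == b(y) for all b in B."""
    classes = {}
    for x in U:
        key = tuple(data[x][b] for b in B)  # signature of x on attributes B
        classes.setdefault(key, set()).add(x)
    return list(classes.values())

def lower(U, data, B, X):
    """B-lower approximation: objects whose whole equivalence class lies inside X."""
    return {x for cls in partition(U, data, B) if cls <= X for x in cls}

def upper(U, data, B, X):
    """B-upper approximation: objects whose equivalence class intersects X."""
    return {x for cls in partition(U, data, B) if cls & X for x in cls}
```

For instance, with `data = {1: {'c': 0}, 2: {'c': 0}, 3: {'c': 1}, 4: {'c': 1}}` and `X = {1, 2, 3}`, the lower approximation over `['c']` is `{1, 2}` while the upper approximation is all of U, since object 3 is indiscernible from object 4.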
A decision table DT = (U, C ∪ D) is a special form of information system whose major feature is the distinguished attribute set D, with C ∩ D = Ø, called the decision attribute set. Generally speaking, there is a certain degree of dependency between the condition attributes and the decision attribute, which can be defined as follows:
$$\gamma_C(D) = \frac{\left|\mathrm{POS}_C(D)\right|}{|U|} = \frac{\left|\bigcup_{X \in U/D} \underline{C}X\right|}{|U|}, \quad (4)$$
where POS_C(D) is referred to as the C-positive region of D.

Due to the relevance between the condition attributes and the decision attribute, not all condition attributes are necessary for the decision attribute; this motivates attribute reduction, which seeks a smaller set of attributes that can classify objects with the same discriminating capability as the original set. As is well known, the reduct is not unique [26, 27].
To determine the reduct of a decision table, the significance degree of attributes is commonly used, which can be defined as follows:
$$\mathrm{sig}(\alpha, B, D) = \gamma_{\{\alpha\} \cup B}(D) - \gamma_B(D), \quad (5)$$
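Equations (4) and (5) can be sketched directly from the definitions. A hedged illustration, again assuming a dict-of-attribute-values layout (`positive_region`, `gamma`, and `sig` are illustrative names, not the paper's code):

```python
def _classes(U, data, B):
    """Equivalence classes of Ind(B)."""
    groups = {}
    for x in U:
        groups.setdefault(tuple(data[x][b] for b in B), set()).add(x)
    return list(groups.values())

def positive_region(U, data, C, D):
    """POS_C(D): objects whose C-class fits entirely inside a single D-class."""
    d_classes = _classes(U, data, D)
    return {x for cls in _classes(U, data, C)
            if any(cls <= dc for dc in d_classes) for x in cls}

def gamma(U, data, C, D):
    """Dependency degree gamma_C(D) = |POS_C(D)| / |U|, equation (4)."""
    return len(positive_region(U, data, C, D)) / len(U)

def sig(U, data, alpha, B, D):
    """Significance degree of alpha with respect to B, equation (5)."""
    return gamma(U, data, list(B) + [alpha], D) - gamma(U, data, list(B), D)
```

With four objects where the decision agrees with attribute `a` on three of them, `gamma` returns 0.75-style fractions; objects whose class straddles two decision classes simply drop out of the positive region.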
Figure 1: The topology structure of BP-NN (input layer x1, x2, …, xs; hidden layer H1, H2, …, Hl; output layer o1, o2, …, om; connection weights ω_ij and ω_jk).
where sig(α, B, D) denotes the significance degree of attribute α with respect to the attribute set B relative to the decision attribute D.
For any R ∈ C, if POS_{C−{R}}(D) = POS_C(D), then R is superfluous in C for D; otherwise, R is indispensable in C for D. The set composed of all attributes that are indispensable for D is called the core of C relative to D, namely the relative core CORE_D(C). This relative core cannot be removed from the system without a loss in the knowledge that can be derived from it.
Once the relative core of the decision table has been determined, it must first be judged whether it is a reduct of C relative to D, namely a relative reduct. The judgment rule is as follows: if a nonempty subset P of the attribute set C satisfies the conditions (1) POS_P(D) = POS_C(D) and (2) ∀R ⊂ P, POS_R(D) ≠ POS_C(D), then P is called a reduct of C relative to D, denoted RED_D(C).
In addition, decision rules can be perceived as data patterns which represent the relationship between the attribute values of a classification system [11, 12]. A decision rule takes the form IF η THEN δ, where η is the conditional part of the rule, a conjunction of selectors, and δ is the decision part over an attribute d (d ∈ D), usually describing the class of the decision attribute.
3.2. BP Neural Network. A BP neural network (BP-NN) is a back-propagating neural network; it belongs to the class of networks whose learning is supervised, and the learning rules are provided by the training set to describe the network behavior. The topology structure of BP-NN is shown in Figure 1.
In Figure 1, the input vector X = (x1, x2, …, xs) is furnished by the condition attribute values of the reduced decision table, and the output vector O = (o1, o2, …, om) is the predicted class of the decision attribute. The output of the hidden layer can be calculated as follows:
$$H_j = f\left(\sum_{i=1}^{s} \omega_{ij} x_i - a_j\right), \quad j = 1, 2, \ldots, l, \quad (6)$$
where ω_ij is the connection weight between the input and hidden layers, l is the number of hidden layer nodes, a_j is the threshold of hidden layer node j, and f is the activation function of the hidden layer, which can be chosen as a linear function or the sigmoid function f(x) = 1/(1 + e^{-x}).
The output of the output layer can be calculated as follows:

$$o_k = \sum_{j=1}^{l} H_j \omega_{jk} - b_k, \quad k = 1, 2, \ldots, m, \quad (7)$$
where ω_jk is the connection weight between the hidden and output layers, m is the number of output layer nodes, and b_k is the threshold of output layer node k.
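The forward pass of equations (6) and (7) can be sketched compactly; a minimal illustration with a sigmoid hidden layer and a linear output layer (the function name and the toy weights below are illustrative, not trained values):

```python
import math

def forward(x, w_ih, a, w_ho, b):
    """One forward pass.
    x: inputs; w_ih[i][j], a[j]: input->hidden weights and thresholds;
    w_ho[j][k], b[k]: hidden->output weights and thresholds."""
    l = len(a)
    # Equation (6): sigmoid of the weighted input sum minus the threshold
    H = [1.0 / (1.0 + math.exp(-(sum(w_ih[i][j] * x[i]
                                     for i in range(len(x))) - a[j])))
         for j in range(l)]
    # Equation (7): linear combination of hidden outputs minus the threshold
    o = [sum(H[j] * w_ho[j][k] for j in range(l)) - b[k]
         for k in range(len(b))]
    return H, o
```

For a single input with zero weight and threshold, the hidden activation is exactly the sigmoid midpoint 0.5, which makes the arithmetic easy to check by hand.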
The training of the network parameters ω_ij, ω_jk, a_j, and b_k depends on the error value, which can be calculated by the following equation:

$$e_k = Y_k - o_k, \quad k = 1, 2, \ldots, m, \quad (8)$$
where o_k and Y_k are the current and desired output values of the network, respectively. The weight values and thresholds of the network can be updated as follows:

$$\omega_{ij}(t+1) = \omega_{ij}(t) + \zeta(t)\, H_j (1 - H_j)\, x_i \sum_{k=1}^{m} e_k \omega_{jk},$$
$$\omega_{jk}(t+1) = \omega_{jk}(t) + \zeta(t)\, H_j e_k,$$
$$a_j(t+1) = a_j(t) + \zeta(t)\, H_j (1 - H_j) \sum_{k=1}^{m} e_k \omega_{jk},$$
$$b_k(t+1) = b_k(t) + e_k, \quad (9)$$
where t is the current iteration number and ζ(t) is the learning rate, which ranges over the interval [0, 1].
The learning rate has a great influence on the generalization ability of the neural network. A larger learning rate significantly modifies the weight values and thresholds and increases the learning speed, but an overlarge learning rate produces large fluctuations in the learning process, while an excessively small learning rate leads to slow network convergence and makes it difficult for the weight values and thresholds to stabilize. Therefore, this paper presents a variable learning rate algorithm to address these problems, which can be described as follows:
$$\zeta(t) = \zeta_{\max} - \frac{t\,(\zeta_{\max} - \zeta_{\min})}{t_{\max}}, \quad (10)$$
where ζ_max and ζ_min are the maximum and minimum of the learning rate and t_max is the maximum number of iterations.
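Equation (10) is a linear decay schedule. A one-line sketch; the default bounds 0.9 and 0.4 follow the values later used in Section 5.3 and are otherwise arbitrary:

```python
def learning_rate(t, t_max, zeta_max=0.9, zeta_min=0.4):
    """Variable learning rate of equation (10): decays linearly from
    zeta_max at t = 0 down to zeta_min at t = t_max."""
    return zeta_max - t * (zeta_max - zeta_min) / t_max
```

Halfway through training (t = t_max/2) the rate sits exactly midway between the two bounds, which is a quick sanity check on the formula.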
4. The Proposed Approach
This section presents a new approach that connects rough sets with BP neural network to establish a classification system. The section has five main parts, elaborated in the following subsections.
4.1. The System Construction Process. The construction process of the classification system is as follows.
(1) Preprocessing the historical dataset to obtain sample data is the application precondition of rough sets and BP neural network. In the preprocessing, the historical dataset is discretized through the PSO algorithm to construct the decision table.
(2) Simplify the input of the neural network, including the input dimension and the number of training samples, through attribute reduction based on the significance degree algorithm. Deleting identical rows in the decision table simplifies the training samples, and eliminating superfluous columns (condition attributes) reduces the network input dimension. Thus, reduce the decision table using some reduct or the union of several calculated reducts; that is, remove from the table the attributes not belonging to the chosen reducts.
(3) BP neural network integrated with rough sets is used to build the network over the reduced set of data. Decision rules are extracted from the reduced decision table, as the basis of the network structure, through a value reduction algorithm of rough sets.
(4) Perform network learning for the parameters ω_ij, ω_jk, a_j, and b_k. Repeat this step until there is no significant change in the output of the network. The system construction process is shown in Figure 2.
4.2. The Discretization Algorithm for the Decision Table. Since rough sets theory cannot directly process continuous attribute values, a decision table composed of continuous values must first be discretized. The positions of the breakpoints must therefore be selected appropriately, so as to obtain fewer breakpoints, a larger dependency degree between attributes, and less redundant information.
In this paper, the basic particle swarm optimization (PSO) algorithm described in [28] is used to optimize the positions of the breakpoints. In PSO, a swarm of particles is initialized randomly in the solution space, and each particle is updated by a certain rule to explore the optimal solution after several iterations. Particles are updated by tracking two "extremums" in each iteration: one is the optimal solution P_best found by the particle itself, and the other is the optimal solution G_best found by the whole swarm. The specific iteration formulas can be expressed as follows:
$$v_i^{n+1} = \omega^n v_i^n + c_1 r \left(P_{\mathrm{best}} - x_i^n\right) + c_2 r \left(G_{\mathrm{best}} - x_i^n\right),$$
$$x_i^{n+1} = x_i^n + v_i^{n+1}, \quad (11)$$
where i = 1, 2, …, M; M is the number of particles; n is the current iteration number; c1 and c2 are the acceleration factors; r is a random number between 0 and 1; x_i^n is the current position of particle i; v_i^n is the current update speed of particle i; and ω^n is the current inertia weight, which can be updated by the following equation:
$$\omega^n = \frac{(\omega_{\max} - \omega_{\min})(N_{\max} - n)}{N_{\max}} + \omega_{\min}, \quad (12)$$
where ω_max and ω_min are the maximum and minimum of the inertia weight and N_max is the maximum number of iterations.
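Equations (11) and (12) together define one particle update. A hedged sketch; the default parameter values follow the settings later given in Section 5.1, and the function name and velocity-clamping detail are assumptions rather than the paper's code:

```python
import random

def pso_step(x, v, pbest, gbest, n, N_max,
             c1=1.6, c2=1.6, w_max=0.9, w_min=0.4, v_max=0.6):
    """Update one particle's position x and velocity v (lists of floats)."""
    # Inertia weight, equation (12): decays linearly over the iterations
    w = (w_max - w_min) * (N_max - n) / N_max + w_min
    new_x, new_v = [], []
    for xi, vi, pi, gi in zip(x, v, pbest, gbest):
        # Velocity update, equation (11), with fresh random factors
        vn = (w * vi + c1 * random.random() * (pi - xi)
                     + c2 * random.random() * (gi - xi))
        vn = max(-v_max, min(v_max, vn))  # clamp |v| <= v_max
        new_v.append(vn)
        new_x.append(xi + vn)             # position update, equation (11)
    return new_x, new_v
```

A useful property for checking: when a particle already sits at both its personal and the global best, every term of the velocity update vanishes and the particle stays put.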
The main discretization steps for the decision table through PSO can be described as follows.

(1) Normalize the attribute values and initialize the breakpoints number h, the maximum iterations N_max, the minimum dependency degree between condition attributes and decision attribute γ_min, the number of particles M, the initial position x_i^0, and the initial velocity v_i^0, the latter two being h × μ matrices (μ is the number of attributes).

(2) Discretize the attribute values with the position of particle i and calculate the dependency degree γ_i as the fitness value of each particle to initialize P_best and G_best.

(3) Update x_i, v_i (|v_i| ≤ v_max), and ω^n. Discretize the attribute values and calculate the fitness value of each particle so as to update P_best and G_best.

(4) If n > N_max or γ_i ≥ γ_min, the iteration ends and G_best or x_i is output; otherwise, go to step (3).

(5) According to the needs, set a different h and return to step (1) to search for the optimal particles. Then compare the optimal particles and select the better one as the basis of discretization for the decision table. The detailed discretization process is elaborated in Section 4.5.
4.3. Rough Sets for Attribute Reduction. One of the fundamental steps in the proposed method is the reduction of pattern dimensionality through feature extraction and feature selection [10]. Rough sets theory provides tools for expressing inexact dependencies within data [29]. Features may be irrelevant (having no effect on the processing performance), and attribute reduction can discard the redundant information to enable the classification of more objects with a high accuracy [30].
In this paper, the algorithm based on attribute significance degree is applied to the attribute reduction of the decision table. The specific reduction steps can be described as follows.
(1) Construct the decision table DT = (U, C ∪ D) and calculate the relative core CORE_D(C) of the original decision table. Initialize reduct = CORE_D(C) and the redundant attribute set redundant = C − CORE_D(C) = {α1, α2, …, ατ} (τ is the number of attributes in redundant).
(2) If POS_reduct(D) = POS_C(D) and, for every R ⊂ reduct, POS_R(D) ≠ POS_C(D), then the set reduct is a relative reduct of the decision table.
(3) Otherwise, calculate the significance degree of each attribute in redundant through the following equation:
$$\mathrm{sig}(\alpha_j, \mathrm{reduct}, D) = \gamma_{\{\alpha_j\} \cup \mathrm{reduct}}(D) - \gamma_{\mathrm{reduct}}(D), \quad j = 1, 2, \ldots, \tau. \quad (13)$$
Figure 2: The system construction process (dataset → preprocess and discretize → construct decision table → attribute reduction → reduced decision table → extract rules → build network → train network → optimum network → classification system).
The attribute in redundant with the maximum significance degree is marked as α_max. Update reduct and redundant through the following equations:
$$\mathrm{reduct} = \mathrm{reduct} \cup \{\alpha_{\max}\},$$
$$\mathrm{redundant} = \mathrm{redundant} - \{\alpha_{\max}\}. \quad (14)$$
(4) Go to step (2); once POS_reduct(D) satisfies the conditions, output RED_D(C) = reduct.
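Steps (1)-(4) amount to a greedy forward selection driven by the significance degree of equation (13). A minimal sketch, assuming objects are stored as a dict of attribute values and recomputing the dependency degree internally (all helper names are illustrative, and the stopping test compares dependency degrees rather than positive regions directly, which is equivalent here):

```python
def _classes(U, data, B):
    """Equivalence classes of Ind(B)."""
    groups = {}
    for x in U:
        groups.setdefault(tuple(data[x][b] for b in B), set()).add(x)
    return list(groups.values())

def _gamma(U, data, B, D):
    """Dependency degree gamma_B(D), equation (4)."""
    dcl = _classes(U, data, D)
    pos = {x for cls in _classes(U, data, B)
           if any(cls <= dc for dc in dcl) for x in cls}
    return len(pos) / len(U)

def reduce_attributes(U, data, C, D, core=()):
    """Grow `reduct` from the relative core by adding the attribute of maximum
    significance degree (eq. (13)) until gamma_reduct(D) reaches gamma_C(D)."""
    reduct = list(core)
    redundant = [a for a in C if a not in core]
    target = _gamma(U, data, C, D)
    while _gamma(U, data, reduct, D) < target and redundant:
        best = max(redundant,
                   key=lambda a: _gamma(U, data, reduct + [a], D)
                                 - _gamma(U, data, reduct, D))
        reduct.append(best)
        redundant.remove(best)
    return reduct
```

On a toy table where the decision depends only on attribute `a`, the sketch returns `['a']` and discards `b` as redundant.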
Therefore, the irrelevant features are discarded and the DT is reduced. Thus, a reduced decision table is constructed, which can be regarded as the training set used to optimize the structure of the RS-BPNN classifier.
4.4. Rough Sets for Rules Extraction. The knowledge in a trained network is encoded in its connection weights, distributed numerically throughout the whole structure of the neural network. For this knowledge to be useful in the context of a classification system, it has to be articulated in a symbolic form, usually as IF-THEN rules. Since a neural network is a "black box" for users, the rules implicit in the connection weights are difficult to understand, and extracting rules from a neural network is extremely tough because of the nonlinear and complicated nature of the data transformation conducted in the multiple hidden layers [31]. In this paper, the decision rules are instead extracted from the reduced decision table using the value reduction algorithm of rough sets [32]. This algorithm can be elaborated in the following steps.
Step 1. Check the condition attributes of the reduced decision table column by column. If deleting a column produces conflicting objects, preserve the original attribute value of those conflicting objects. If it produces no conflicts but yields duplicate objects, mark this attribute value as "*" for the duplicate objects and as "?" for the other objects.

Step 2. Delete the possible duplicate objects and check every object that contains the mark "?". If the decision can be determined only by the unmarked attribute values, then "?" should be changed to "*"; otherwise it reverts to the original attribute value. If all condition attributes of a particular object are marked, then the attribute values marked with "?" revert to the original attribute values.

Step 3. Delete the objects whose condition attributes are all marked as "*", together with any duplicate objects this produces.

Step 4. If only one condition attribute value differs between two objects and that attribute value is marked as "*" in one of them, then: if that object's decision can be determined by its unmarked attribute values, delete the other object; otherwise delete this object.
Each object in the decision table value-reduced through the above steps represents a classification rule, and the number of attributes not marked as "*" in an object makes up the condition number of that rule. Moreover, this rule set provides a rational basis for the structure of the BP neural network.
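The steps above can be approximated by a simpler greedy variant: for each object, drop attribute values one at a time as long as the remaining values still determine the decision unambiguously over the whole table. This is a hedged sketch of the idea, not the paper's exact Steps 1-4 (in particular it does not reproduce the "?" bookkeeping), and all names are illustrative:

```python
def extract_rules(rows, C, d):
    """rows: list of dicts holding condition attributes C and decision d.
    Returns rules as (condition dict, decision); dropped ('*') attributes
    are simply omitted from the condition dict."""
    rules = set()
    for row in rows:
        kept = list(C)
        for a in list(C):
            trial = [b for b in kept if b != a]
            # Consistent if every row matching on `trial` has the same decision
            if all(r[d] == row[d] for r in rows
                   if all(r[b] == row[b] for b in trial)):
                kept = trial
        rules.add((tuple(sorted((b, row[b]) for b in kept)), row[d]))
    return [(dict(cond), dec) for cond, dec in rules]
```

On a table where the decision depends only on attribute `a`, the four objects collapse to the two rules IF a = 0 THEN d = 0 and IF a = 1 THEN d = 1.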
4.5. The Flowchart of the Proposed Approach. According to the above description of the classifier based on the integration of rough sets and BP neural network, the proposed approach is an iterative algorithm that can easily be coded on a computer; its flowchart is summarized in Figure 3.
5. Simulation Example
A classifier named RS-BPNN (short for rough sets with BP neural network) was set up and implemented in VC 6.0. In this section, an engineering application of shearer running status classification in a coal mine is put forward as a simulation example to verify the feasibility and effectiveness of the proposed approach. The classification capabilities of different types of neural network models are compared, and the proposed approach is shown to outperform the others.
5.1. The Construction of the Decision Table. Due to the poor working conditions of coal mining, many parameters relate to the running status of the shearer, mainly including cutting motor current, cutting motor temperature, traction motor current, traction motor temperature, traction speed, tilt angle of the shearer body, tilt angle of the rocker, and vibration frequency of the rocker transmission gearbox, marked as C1, C2, C3, C4, C5, C6, C7, and C8, respectively. According to the information database acquired from the 11070 coal face in the number 13 Mine of Pingdingshan Coal Industry Group Co., 200 groups of shearer running status datasets were rearranged as shown in Table 1. This "running status" referred to the cutting status of the shearer rocker and could be expressed by the ratio of the actual cutting load to the rated load.

Figure 3: The flowchart of the proposed approach.
Therefore, in the decision table DT, the condition attribute set and decision attribute set could be expressed as C = {C1, C2, C3, C4, C5, C6, C7, C8} and D = {d} (d denoted the shearer running status). The values of the decision attribute were discretized based on the following assumptions: for the ratio of actual cutting load to rated load of the shearer greater than 1.6, d = 4; for this ratio in the interval (1.2, 1.6], d = 3; for this ratio in the interval (0.8, 1.2], d = 2; and for this ratio less than 0.8, d = 1. The discretized values of the condition attributes could be determined through the algorithm described in Section 4.2.
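The decision-attribute discretization above is a simple threshold mapping. A minimal sketch (the function name is illustrative; the treatment of the exact boundary 0.8 is an assumption, since the text leaves it open):

```python
def decision_class(load_ratio):
    """Map the ratio of actual to rated cutting load to the decision class d."""
    if load_ratio > 1.6:
        return 4          # ratio > 1.6
    if load_ratio > 1.2:
        return 3          # ratio in (1.2, 1.6]
    if load_ratio > 0.8:
        return 2          # ratio in (0.8, 1.2]
    return 1              # ratio below 0.8 (boundary handling assumed)
```

For example, sample 1 in Table 1 has ratio 0.78 and lands in class 1, while sample 10 with ratio 1.63 lands in class 4, matching the first and tenth rows of Table 3.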
The condition attributes to be discretized were C1 to C8, so μ = 8. The other parameters of PSO were initialized as follows: M = 100, N_max = 200, γ_min = 0.9, v_max = 0.6, c1 = c2 = 1.6, r a random number from [0, 1], ω_max = 0.9, and ω_min = 0.4. As the breakpoints number h had a great influence on the dependency degree γ_C(D) of the discretized decision table, the two important parameters, dependency degree and simulation time, were compared as h was assigned various values. The comparison results are shown in Figure 4.
From Figure 4, it was observed that the dependency degree and the simulation time both showed an ascending tendency with the increase of the breakpoints number h. The dependency degree was not significantly increased, while the simulation time was obviously increased, when h ≥ 2. So h was set equal to 2, giving γ_C(D) = 0.98. The breakpoints of each attribute value are shown in Table 2. The breakpoints were applied to discretize the parameter values in Table 1: if a value was in the interval (0, Br1], the discretized attribute value was labeled as 1; if the value was in the interval (Br1, Br2), the discretized attribute value was labeled as 2; otherwise, it was labeled as 3. The discretized decision table is shown in Table 3.

Figure 4: The change curves of dependency degree and simulation time.

Figure 5: Attribution reduction.
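The labeling rule just described is a two-breakpoint lookup. A minimal sketch; the sample breakpoints in the usage line are the C1 values from Table 2, read as 48.56 and 65.25 on the assumption that the extraction dropped the decimal points:

```python
def discretize(value, br1, br2):
    """Label a continuous attribute value against breakpoints Br1 < Br2:
    (0, Br1] -> 1, (Br1, Br2) -> 2, otherwise -> 3."""
    if value <= br1:
        return 1
    if value < br2:
        return 2
    return 3
```

For example, a cutting motor current of 45.12 A discretizes to 1 and one of 89.23 A discretizes to 3 against these breakpoints.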
5.2. Relative Reduct and Rules Extraction. Through the attribute reduction algorithm presented in Section 4.3, the function "CORE(C, D)" was invoked to obtain the relative core of the decision table, CORE_D(C) = {C1, C8}, as shown in Figure 5. The minimum relative reduct of the decision table, RED_D(C) = {C1, C3, C5, C6, C8}, was determined by invoking the function "Reduct".

In the module "Rough Set for Attribution Reduction", a new decision table of 200 × 6 was established from the attributes C1, C3, C5, C6, C8 and the decision attribute d. The function "Refresh(DT)" was called to reject the repeated objects in this new decision table, and a reduced decision table containing 79 objects was finally provided, as shown in Figure 6. Subsequently, the decision rules were extracted through the function "Extract rules", which was based on the value reduction algorithm, and 25 rules were obtained, as shown in Figure 6. The extracted rules could reveal the potential rules (knowledge) in the dataset. For example, the first rule "123*11" in Figure 6 illustrates that if cutting motor current C1 ∈ (0, 48.56], traction motor current C3 ∈ (82.37, 95.42), traction speed C5 ∈ [3.24, +∞), tilt angle of shearer body C6 takes an arbitrary value, and vibration frequency of rocker transmission gearbox C8 ∈ (0, 651.28], then the running status class is 1, which accords with the engineering practice. Moreover, the RS-BPNN classifier in fact diagnoses the status of objects according to these potential rules (knowledge), and the structure of the network can be constructed reasonably based on the number of rules, so as to enhance the classification accuracy.

Figure 6: Rules extraction.

Figure 7: The structure of RS-BPNN classifier (input layer, two hidden layers, and output layer with 5, 5, 5, and 1 nodes, each layer with weights ω and thresholds b).
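Rule strings such as "123*11" concatenate the five discretized condition values (C1, C3, C5, C6, C8) with the decision class as the final character, where "*" matches any value. A small sketch of how such rules could be applied to a discretized sample; these helpers are hypothetical, not the paper's code:

```python
def parse_rule(s):
    """Split a rule string like '123*11' into (condition pattern, decision)."""
    return s[:-1], int(s[-1])

def matches(pattern, sample):
    """pattern: string over {'1','2','3','*'}; sample: discretized values."""
    return all(p == '*' or p == s for p, s in zip(pattern, sample))

def classify(rules, sample):
    """rules: list of (pattern, decision); return the first matching decision."""
    for pattern, decision in rules:
        if matches(pattern, sample):
            return decision
    return None  # no rule fired; fall back to the trained network
```

A sample discretized as "12331" fires the first rule and is assigned class 1, while "22331" fails on C1 and falls through.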
5.3. The Structure and Testing of the RS-BPNN Classifier. Because 25 rules (knowledge) were obtained from the dataset, the RS-BPNN should be constructed with double hidden layers whose numbers of neurons are 5 × 5, which can completely contain these rules (knowledge). The number of input layer nodes is s = 5 and the number of output layer nodes is m = 1. The other parameters of the network were determined as follows: ζ_max = 0.9, ζ_min = 0.4, t_max = 500; the initial connection weights were assigned somewhat random values. The structure of the RS-BPNN classifier is shown in Figure 7.
The reduced decision table, as the training set, was presented to the network several times. The testing set, composed of 40 samples extracted randomly from the information database, was not seen by the network during the training phase; it was used only to test the generalization of the neural network after training, as shown in Table 4.
Journal of Applied Mathematics
Table 1: The running status of shearer and its corresponding parameters.

| Samples | C1 (A) | C2 (°C) | C3 (A) | C4 (°C) | C5 (m/min) | C6 (°) | C7 (°) | C8 (Hz) | Running status |
|---|---|---|---|---|---|---|---|---|---|
| 1 | 45.12 | 78.74 | 65.46 | 88.16 | 3.51 | 2.52 | 45.12 | 628.85 | 0.78 |
| 2 | 48.19 | 75.22 | 70.78 | 84.28 | 3.42 | 3.62 | 53.22 | 683.46 | 0.79 |
| 3 | 50.25 | 80.19 | 85.14 | 91.25 | 2.88 | 5.36 | 40.22 | 705.76 | 0.82 |
| 4 | 45.26 | 82.43 | 62.45 | 84.15 | 3.12 | 2.85 | 49.36 | 642.14 | 0.76 |
| 5 | 64.28 | 77.16 | 90.45 | 91.41 | 2.56 | 2.26 | 58.55 | 823.29 | 1.22 |
| 6 | 46.22 | 85.24 | 68.55 | 80.46 | 3.26 | 2.86 | 58.46 | 711.24 | 0.83 |
| 7 | 72.56 | 85.12 | 102.42 | 102.36 | 2.76 | 10.26 | 38.41 | 881.25 | 1.17 |
| 8 | 62.33 | 74.24 | 92.46 | 84.46 | 3.56 | 8.26 | 60.27 | 821.46 | 1.12 |
| 9 | 44.23 | 89.19 | 63.15 | 85.78 | 3.82 | 3.22 | 46.25 | 578.26 | 0.75 |
| 10 | 89.23 | 81.24 | 120.36 | 86.14 | 2.01 | 1.05 | 50.36 | 1035.48 | 1.63 |
| … | … | … | … | … | … | … | … | … | … |
| 198 | 49.22 | 75.24 | 71.22 | 80.25 | 3.46 | 4.26 | 45.26 | 542.16 | 0.72 |
| 199 | 68.65 | 87.16 | 118.64 | 90.46 | 4.66 | 4.36 | 49.35 | 845.29 | 1.36 |
| 200 | 50.22 | 78.15 | 90.46 | 95.44 | 3.22 | 7.22 | 44.21 | 682.64 | 1.11 |
Table 2: The corresponding breakpoints of each attribute value at h = 2.

| Breakpoints | C1 (A) | C2 (°C) | C3 (A) | C4 (°C) | C5 (m/min) | C6 (°) | C7 (°) | C8 (Hz) |
|---|---|---|---|---|---|---|---|---|
| Br1 | 48.56 | 81.49 | 82.37 | 84.79 | 2.41 | 4.16 | 47.35 | 651.28 |
| Br2 | 65.25 | 85.78 | 95.42 | 90.25 | 3.24 | 7.38 | 54.25 | 847.46 |
Table 3: The discretized decision table (U: objects; C: condition attributes C1–C8; D: decision attribute d).

| U | C1 | C2 | C3 | C4 | C5 | C6 | C7 | C8 | d |
|---|---|---|---|---|---|---|---|---|---|
| 1 | 1 | 1 | 1 | 2 | 3 | 1 | 1 | 1 | 1 |
| 2 | 1 | 1 | 1 | 1 | 3 | 1 | 2 | 2 | 1 |
| 3 | 2 | 1 | 2 | 3 | 2 | 2 | 1 | 2 | 2 |
| 4 | 1 | 2 | 1 | 1 | 2 | 1 | 2 | 1 | 1 |
| 5 | 3 | 1 | 2 | 3 | 2 | 1 | 3 | 2 | 3 |
| 6 | 1 | 2 | 1 | 1 | 3 | 1 | 3 | 2 | 2 |
| 7 | 3 | 2 | 3 | 3 | 2 | 3 | 1 | 3 | 2 |
| 8 | 2 | 1 | 2 | 1 | 3 | 3 | 3 | 2 | 2 |
| 9 | 1 | 3 | 1 | 2 | 3 | 1 | 1 | 1 | 2 |
| 10 | 3 | 1 | 3 | 2 | 1 | 1 | 2 | 3 | 4 |
| … | … | … | … | … | … | … | … | … | … |
| 198 | 2 | 1 | 1 | 1 | 3 | 2 | 1 | 1 | 1 |
| 199 | 3 | 3 | 3 | 3 | 3 | 2 | 2 | 2 | 2 |
| 200 | 2 | 1 | 2 | 3 | 2 | 2 | 1 | 2 | 2 |
After the training phase, the RS-BPNN classifier was obtained. In order to test the performance of this classifier expediently, a testing interface was designed, as shown in Figure 8. The testing set was imported with the "Import samples" button and was processed with the functions "Discretize" and "Reduce attribution". The prediction results of the RS-BPNN classifier were output through the function "Classify" in Figure 8. The contrast between the prediction class and the actual class is shown clearly in Figure 9.
Figure 8: The testing interface of RS-BPNN classifier

When the classification accuracy of RS-BPNN was computed in Figure 8, the output of the classifier could not be exactly equal to the desired output. Therefore, the output was allowed to differ from the desired value: if the difference between the classifier output and the required decision value was less than some preset value, it was regarded as correct. This tolerance margin was decided during simulation evaluation and was finally set to 5% of the desired value. As seen from Figures 8 and 9, the classification accuracy was 90.00% and the average classification error was 2.02%. The present model showed higher accuracy and lower error for the classification of shearer running status. The testing results showed that the proposed classifier was satisfactory and could be used in engineering applications in the future.
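The tolerance rule used to score the classifier can be sketched as follows (a minimal illustration; the 5% margin comes from the text, while the sample outputs are invented):

```python
def classification_accuracy(outputs, desired, tolerance=0.05):
    """Count a prediction as correct when it differs from the desired
    value by at most `tolerance` (a fraction of the desired value)."""
    correct = sum(abs(o - d) <= tolerance * abs(d)
                  for o, d in zip(outputs, desired))
    return correct / len(desired)

# Invented outputs versus desired classes; 2.12 misses 2.00 by more than 5%
outputs = [1.02, 2.12, 2.95, 1.00]
desired = [1.00, 2.00, 3.00, 1.00]
acc = classification_accuracy(outputs, desired)  # 0.75
```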
5.4. Discussion. To evaluate the classification capabilities of different types of neural network models (WNN, short for wavelet neural network; RS-WNN, short for rough sets coupling with wavelet neural network; BP-NN, short for back-propagation neural network; and RS-BPNN, short for rough
Table 4: The testing samples for the classifier.

| Samples | C1 (A) | C2 (°C) | C3 (A) | C4 (°C) | C5 (m/min) | C6 (°) | C7 (°) | C8 (Hz) | d |
|---|---|---|---|---|---|---|---|---|---|
| 1 | 75.24 | 71.38 | 128.45 | 85.46 | 2.00 | 1.02 | 52.45 | 1038.44 | 1.65 |
| 2 | 89.46 | 78.25 | 110.46 | 80.57 | 2.86 | 2.15 | 49.21 | 992.46 | 1.78 |
| 3 | 55.46 | 80.19 | 81.14 | 90.25 | 2.89 | 3.36 | 56.22 | 810.76 | 1.08 |
| 4 | 75.26 | 82.43 | 62.45 | 84.15 | 3.12 | 3.55 | 52.46 | 900.47 | 1.39 |
| 5 | 56.28 | 78.16 | 100.45 | 90.41 | 2.56 | 8.26 | 58.55 | 523.29 | 0.68 |
| 6 | 49.25 | 86.24 | 68.55 | 80.46 | 3.26 | 3.06 | 60.46 | 711.24 | 0.72 |
| 7 | 60.56 | 80.12 | 112.42 | 92.36 | 2.76 | 7.48 | 48.41 | 871.25 | 0.92 |
| 8 | 70.33 | 70.24 | 96.46 | 80.46 | 3.76 | 3.26 | 60.27 | 921.46 | 1.22 |
| 9 | 74.23 | 85.19 | 113.15 | 79.78 | 2.32 | 2.25 | 56.25 | 1000.46 | 1.66 |
| 10 | 89.23 | 76.24 | 120.36 | 85.14 | 3.31 | 4.25 | 55.46 | 1035.48 | 1.63 |
| … | … | … | … | … | … | … | … | … | … |
| 38 | 58.46 | 74.24 | 90.22 | 80.25 | 2.46 | 3.26 | 55.26 | 849.16 | 1.35 |
| 39 | 65.02 | 87.16 | 118.64 | 90.46 | 5.06 | 9.24 | 48.26 | 1025.29 | 1.75 |
| 40 | 50.22 | 85.15 | 81.25 | 89.44 | 3.34 | 3.46 | 44.25 | 672.64 | 0.72 |
Table 5: Classification performance of the four models of neural networks.

| The models | Training accuracy (%) | Testing accuracy (%) | Classification error (%) | Classification time (s) |
|---|---|---|---|---|
| WNN | 90.00 | 82.50 | 3.56 | 23.22 |
| RS-WNN | 92.50 | 87.50 | 2.66 | 18.48 |
| BP-NN | 90.00 | 85.00 | 2.86 | 22.26 |
| RS-BPNN | 95.00 | 90.00 | 2.02 | 18.33 |
Table 6: Classification results of the four methods (classification accuracy, %).

| The methods | Class I | Class II | Class III |
|---|---|---|---|
| WNN | 93.33 | 86.67 | 80.00 |
| RS-WNN | 100 | 93.33 | 86.67 |
| BP-NN | 93.33 | 80.00 | 86.67 |
| RS-BPNN | 100 | 93.33 | 100 |
sets coupling with back-propagation neural network), their best architectures and effective training parameters should be found. In order to obtain the best model, both inputs and outputs should be normalized. The number of nodes in the hidden layer of WNN and RS-WNN was equal to the number of wavelet bases. If the number was too small, WNN/RS-WNN might not reflect the complex functional relationship between input data and output value. On the contrary, a large number might create such a complex network that it might lead to a very large output error caused by overfitting of the training sample set.
Therefore, a search mechanism was needed to find the optimal number of nodes in the hidden layer for the WNN and RS-WNN models. In this study, various numbers of nodes in the hidden layer were checked to find the best one; WNN and RS-WNN yielded the best results when the number of hidden nodes was 8.
In this subsection, WNN, RS-WNN, BP-NN, and RS-BPNN were applied to solve the problem of the above simulation example. The configurations of the simulation environment for the four algorithms were uniform and in common with the above simulation example. Initially, for each network, the connection weights were assigned random values. In order to avoid random error, the training set was presented to the networks 100 times and the average values were calculated. The training accuracy, testing accuracy, classification error, and classification time of the four algorithms are shown in Table 5.
From the table, it was observed that the proposed model had a better classification capability and better performance than the other neural network models in predicting the nonlinear, dynamic, and chaotic behaviors in the dataset, and the proposed model proved to outperform the others. Furthermore, the comparison results suggested that the proposed model represents a good new method for classification and decision making, and the new method can be treated as a promising tool for extracting rules from datasets in industrial fields.
In order to illustrate the superiority of the proposed method, the Iris data set [23], a benchmark data set from the UCI database, was used to verify the classifiers based on the above four types of neural networks, and the compared results are shown in Table 6. The sample data were first reduced based on the attribution reduction algorithm and then used to train the neural networks, so the networks coupled with rough sets (RS-WNN and RS-BPNN) could acquire better classification accuracy than the single networks (WNN and BP-NN). The rules were extracted through the value reduction algorithm to construct the RS-BPNN structure, so the classification accuracy of the classifier based on RS-BPNN was higher than that of the RS-WNN classifier.
Figure 9: The contrast of prediction class and actual class: (a) actual class versus prediction class of the running status for the 40 testing samples; (b) classification error (%) for the testing samples.
6. Conclusions
This paper proposed a novel classifier model based on BP neural network and rough sets theory to be applied in nonlinear systems. The decision table was constructed and discretized reasonably through the PSO algorithm. The decision rules were extracted by use of the value reduction algorithm, which provided the basis for the network structure. In order to verify its feasibility and superiority, the proposed approach was applied to a classification problem of an industrial example. The results of comparison simulations showed that the proposed approach could generate more accurate classifications and cost less classification time than other neural network approaches and the rough sets based approach. The classification performance of the proposed model demonstrated that this method could be extended to other types of classification problems, such as gear fault identification, lithology recognition, and coal-rock interface recognition.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgments
The supports of the National High Technology Research and Development Program of China (no. 2013AA06A411), the National Natural Science Foundation of China (no. 51005231), the Xuyi Mine Equipment and Materials R&D Center Innovation Fund Project (no. CXJJ201306), and the Jiangsu Province National Engineering Research Center Cultivation Base for Coal Intelligent Mining Equipment in carrying out this research are gratefully acknowledged.
References
[1] W. Zhu and F. Wang, "On three types of covering-based rough sets," IEEE Transactions on Knowledge and Data Engineering, vol. 19, no. 8, pp. 1131–1143, 2007.
[2] Z. Huang, Z. Yu, and Y. Zheng, "Simple method for predicting drag characteristics of the wells turbine," China Ocean Engineering, vol. 20, no. 3, pp. 473–482, 2006.
[3] S. G. Sklavos and A. K. Moschovakis, "Neural network simulations of the primate oculomotor system IV. A distributed bilateral stochastic model of the neural integrator of the vertical saccadic system," Biological Cybernetics, vol. 86, no. 2, pp. 97–109, 2002.
[4] I. G. Belkov, I. N. Malyshev, and S. M. Nikulin, "Creating a model of passive electronic components using a neural network approach," Automation and Remote Control, vol. 74, no. 2, pp. 275–281, 2013.
[5] J. S. Chen, "Neural network-based modelling and thermally-induced spindle errors," International Journal of Advanced Manufacturing Technology, vol. 12, no. 4, pp. 303–308, 1996.
[6] A. D. Cartwright, I. S. Curthoys, and D. P. D. Gilchrist, "Testable predictions from realistic neural network simulations of vestibular compensation: integrating the behavioural and physiological data," Biological Cybernetics, vol. 81, no. 1, pp. 73–87, 1999.
[7] B. Zou, X. Y. Liao, Y. N. Zeng, and L. X. Huang, "An improved BP neural network based on evaluating and forecasting model of water quality in Second Songhua River of China," Chinese Journal of Geochemistry, vol. 25, no. 1, pp. 167–170, 2006.
[8] Y. X. Shao, Q. Chen, and D. M. Zhang, "The application of improved BP neural network algorithm in lithology recognition," in Advances in Computation and Intelligence, vol. 5370 of Lecture Notes in Computer Science, pp. 342–349, 2008.
[9] W. H. Zhou and S. Q. Xiong, "Optimization of BP neural network classifier using genetic algorithm," in Intelligence Computation and Evolutionary Computation, vol. 180 of Advances in Intelligent Systems and Computing, pp. 599–605, 2013.
[10] W. Duch, R. Setiono, and J. M. Zurada, "Computational intelligence methods for rule-based data understanding," Proceedings of the IEEE, vol. 92, no. 5, pp. 771–805, 2004.
[11] Z. Pawlak, "Rough sets," International Journal of Computer and Information Sciences, vol. 11, no. 5, pp. 341–356, 1982.
[12] Y. Wu and S. J. Wang, "Method for improving classification performance of neural network based on fuzzy input and network inversion," Journal of Infrared and Millimeter Waves, vol. 24, no. 1, pp. 15–18, 2005.
[13] Z. Pawlak, "Some issues on rough sets," in Transactions on Rough Sets I, vol. 3100 of Lecture Notes in Computer Science, pp. 1–58, 2004.
[14] Z. Pawlak and A. Skowron, "Rough sets: some extensions," Information Sciences, vol. 177, no. 1, pp. 28–40, 2007.
[15] Y. Hassan, E. Tazaki, S. Egawa, and K. Suyama, "Decision making using hybrid rough sets and neural networks," International Journal of Neural Systems, vol. 12, no. 6, pp. 435–446, 2002.
[16] V. L. Berardi, B. E. Patuwo, and M. Y. Hu, "A principled approach for building and evaluating neural network classification models," Decision Support Systems, vol. 38, no. 2, pp. 233–246, 2004.
[17] H. Ishibuchi, K. Kwon, and H. Tanaka, "A learning algorithm of fuzzy neural networks with triangular fuzzy weights," Fuzzy Sets and Systems, vol. 71, no. 3, pp. 277–293, 1995.
[18] Y. Hayashi, J. J. Buckley, and E. Czogala, "Fuzzy neural network with fuzzy signals and weights," International Journal of Intelligent Systems, vol. 8, no. 4, pp. 527–537, 1993.
[19] C. C. Hu, D. Y. Wang, K. J. Wang, and J. Cable, "Seasonal roadway classification based on neural network," Journal of South China University of Technology (Natural Science), vol. 37, no. 11, pp. 22–26, 2009.
[20] A. Subasi, M. Yilmaz, and H. R. Ozcalik, "Classification of EMG signals using wavelet neural network," Journal of Neuroscience Methods, vol. 156, no. 1-2, pp. 360–367, 2006.
[21] D. Sahoo and G. S. Dulikravich, "Evolutionary wavelet neural network for large scale function estimation in optimization," in Proceedings of the 11th AIAA/ISSMO Multidisciplinary Analysis and Optimization Conference, pp. 658–668, Portsmouth, Va, USA, September 2006.
[22] J. R. Zhang, J. Zhang, T. M. Lok, and M. R. Lyu, "A hybrid particle swarm optimization-back-propagation algorithm for feedforward neural network training," Applied Mathematics and Computation, vol. 185, no. 2, pp. 1026–1037, 2007.
[23] W. Zhang, Y. B. Shi, L. F. Zhou, and T. Lu, "Wavelet neural network classifier based on improved PSO algorithm," Chinese Journal of Scientific Instrument, vol. 31, no. 10, pp. 2203–2209, 2010.
[24] A. Sengur, I. Turkoglu, and M. C. Ince, "Wavelet packet neural networks for texture classification," Expert Systems with Applications, vol. 32, no. 2, pp. 527–533, 2007.
[25] Y. F. Hassan, "Rough sets for adapting wavelet neural networks as a new classifier system," Applied Intelligence, vol. 35, no. 2, pp. 260–268, 2011.
[26] W. Ziarko, "The discovery, analysis, and representation of data dependencies in databases," in Knowledge Discovery in Databases, pp. 213–228, 1993.
[27] Y. Hassan and E. Tazaki, "Emergent rough set data analysis," in Transactions on Rough Sets II: Rough Sets and Fuzzy Sets, J. F. Peters, Ed., pp. 343–361, Springer, Berlin, Germany, 2005.
[28] D. B. Chen and C. X. Zhao, "Particle swarm optimization with adaptive population size and its application," Applied Soft Computing Journal, vol. 9, no. 1, pp. 39–48, 2009.
[29] K. Thangavel and A. Pethalakshmi, "Dimensionality reduction based on rough set theory: a review," Applied Soft Computing Journal, vol. 9, no. 1, pp. 1–12, 2009.
[30] J. M. Wei, "Rough set based approach to selection of node," International Journal of Computational Cognition, vol. 1, no. 2, pp. 25–40, 2003.
[31] E. Xu, L. S. Shao, Y. Z. Zhang, F. Yang, and H. Li, "Method of rule extraction based on rough set theory," Computer Science, vol. 38, pp. 232–235, 2011.
[32] L. Y. Chang, G. Y. Wang, and Y. Wu, "An approach for attribute reduction and rule generation based on rough set theory," Journal of Software, vol. 10, pp. 1206–1211, 1999.
decision rules, which are extracted by the algorithm based on value reduction, for classification problems.
The paper is structured as follows. Some related works from the literature are outlined in Section 2. The basics of rough sets theory and BP neural network are presented in Section 3. The framework and key algorithms are proposed, and the flowchart of the proposed approach is designed, in Section 4. A simulation example and some comparisons are put forward to validate the proposed approach in Section 5. Our conclusions are summarized in Section 6.
2. Literature Review
In recent years, most classifier architectures have mainly been constructed based on artificial intelligence (AI) algorithms, with the unceasing development and improvement of AI theory. In this section, we try to summarize some recent literature relevant to the construction methods of classification systems. Hassan et al. constructed a classifier based on neural networks and applied rough sets theory in attribute reduction to preprocess the training set of the neural networks [15]. Berardi et al. proposed a principled approach to build and evaluate neural network classification models for the implementation of a decision support system (DSS) and verified the optimization speed and sample classification accuracy [16]. In [17, 18], the architecture of a neural network with fuzzy input was proposed to effectively improve the classification performance of neural networks. Hu et al. presented a self-organized feature mapping neural network to reasonably determine the input parameters in pavement structure design [19]. In [20, 21], wavelet neural networks were applied in the construction of classification systems, and better prediction results were obtained. In [22, 23], a classifier in which an improved particle swarm algorithm was used to optimize the parameters of a wavelet neural network was established, and its application to nonlinear identification problems demonstrated strong generalization capability. Sengur et al. described the usage of wavelet packet neural networks for the texture classification problem and provided a wavelet packet feature extractor and a multilayer perceptron classifier [24]. Hassan presented a novel classifier architecture based on rough sets and dynamic scaling in connection with wavelet neural networks, and its effectiveness was verified by experiments [25].
Although many approaches for the establishment of classification systems have been presented in the above literature, they have some common disadvantages, summarized as follows. On one hand, the classification effect of a single neural network cannot be guaranteed when the network is applied in multidimensional and uncertain fields. On the other hand, existing combination methods of rough sets and neural networks can only deal with classification and recognition problems that possess discretized datasets and cannot process problems that contain continuous datasets.
In this paper, a novel classification system based on the integration of rough sets and BP neural network is proposed. The attribution values are discretized through PSO, and the decision table is constructed. The proposed model takes account of attribute reduction by discarding redundant attributes, and a rule set is generated from the decision table. A simulation example and comparisons with other methods are carried out, and the proposed approach is proved feasible and outperforming others.
3. Basic Theory
3.1. Rough Sets Theory. Fundamental to rough sets theory is the idea of an information system IS = (U, C), which is essentially a finite data table consisting of different columns labeled by attributes and rows labeled by objects of interest, where the entries of the table are attribute values. U is a nonempty finite set of objects called a universe, and C is a nonempty finite set of attributes, where each c ∈ C is called a condition attribute. In IS = (U, C), every B ⊆ C generates an indiscernibility relation Ind(B) on U, which can be defined as follows:

Ind(B) = {(x, y) ∈ U × U : b(x) = b(y), ∀b ∈ B}. (1)
U/Ind(B) is a partition of U by B. For all x ∈ U, the equivalence class of x in relation U/Ind(B) can be defined as follows:

[x]_Ind(B) = {y ∈ U : b(y) = b(x), ∀b ∈ B}. (2)
Accordingly, two crisp sets B̲X and B̄X, called the lower and the upper approximation of the set of objects X, can be defined as follows:

B̲X = {x ∈ U : [x]_Ind(B) ⊆ X},
B̄X = {x ∈ U : [x]_Ind(B) ∩ X ≠ ∅}, (3)
where [x]_Ind(B) denotes the equivalence class of Ind(B) containing x. B̲X is the set of all objects from U which can be certainly classified as elements of X employing the set of attributes B; B̄X is the set of objects of U which can possibly be classified as elements of X using the set of attributes B [25].
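For concreteness, the indiscernibility partition and the two approximations can be computed directly from their definitions; the toy table below is invented for illustration:

```python
def partition(U, attrs, value):
    """Equivalence classes of Ind(B): x ~ y iff they agree on every b in attrs."""
    classes = {}
    for x in U:
        classes.setdefault(tuple(value(x, b) for b in attrs), set()).add(x)
    return list(classes.values())

def lower_upper(U, attrs, value, X):
    """Lower approximation: classes contained in X; upper: classes meeting X."""
    lower, upper = set(), set()
    for eq in partition(U, attrs, value):
        if eq <= X:
            lower |= eq
        if eq & X:
            upper |= eq
    return lower, upper

# Toy information system: object -> attribute values
table = {1: {"a": 0, "b": 0}, 2: {"a": 0, "b": 0},
         3: {"a": 0, "b": 1}, 4: {"a": 1, "b": 1}}
U = set(table)
val = lambda x, b: table[x][b]
low, up = lower_upper(U, ["a", "b"], val, X={1, 3})
# objects 1 and 2 are indiscernible, so only 3 is certainly in X
```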
Decision table DT = (U, C ∪ D) is a special form of an information system whose major feature is the distinguished attribute set D, where C ∩ D = ∅ and D is called the decision attribute set. Generally speaking, there is a certain degree of dependency between the condition attributes and the decision attributes, which can be defined as follows:

γ_C(D) = |POS_C(D)| / |U| = |⋃_{X ∈ U/D} C̲X| / |U|, (4)
where POS_C(D) is referred to as the C-positive region of D.

Due to the relevance between condition attributes and decision attributes, not all condition attributes are necessary for the decision attributes, which motivates attribute reduction: finding a smaller set of attributes that can classify objects with the same discriminating capability as the original set. As is well known, the reduct is not unique [26, 27].
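The positive region and the dependency degree γ_C(D) of equation (4) can be sketched as follows, on an invented toy table containing one inconsistent pair of objects:

```python
def partition(U, attrs, value):
    """Group objects into equivalence classes of Ind(attrs)."""
    classes = {}
    for x in U:
        classes.setdefault(tuple(value(x, b) for b in attrs), set()).add(x)
    return list(classes.values())

def dependency_degree(U, C_attrs, D_attrs, value):
    """gamma_C(D) = |POS_C(D)| / |U|, where POS_C(D) is the union of
    C-classes lying entirely inside a single D-class."""
    d_classes = partition(U, D_attrs, value)
    pos = set()
    for eq in partition(U, C_attrs, value):
        if any(eq <= d for d in d_classes):
            pos |= eq
    return len(pos) / len(U)

# Toy table: objects 3 and 4 agree on c but disagree on d (inconsistent)
table = {1: {"c": 0, "d": 0}, 2: {"c": 0, "d": 0},
         3: {"c": 1, "d": 0}, 4: {"c": 1, "d": 1}}
U = set(table)
val = lambda x, a: table[x][a]
gamma = dependency_degree(U, ["c"], ["d"], val)  # only {1, 2} is consistent
```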
When we want to determine the reduct of a decision table, the significance degree of attributes is commonly used and can be defined as follows:

sig(α, B, D) = γ_{B ∪ {α}}(D) − γ_B(D), (5)
Figure 1: The topology structure of BP-NN — an input layer (x_1, x_2, …, x_s), a hidden layer (H_1, H_2, …, H_l), and an output layer (o_1, o_2, …, o_m), connected by weights ω_ij and ω_jk.
where sig(α, B, D) denotes the significance degree of attribute α with respect to attribute set B relative to the decision attribute D.
For all R ∈ C, if POS_{C−{R}}(D) = POS_C(D), then R is superfluous in C for D; otherwise, R is indispensable for D. The set composed of all attributes that are indispensable for D is called the core of C relative to D, namely, the relative core CORE_D(C). This relative core cannot be removed from the system without a loss in the knowledge that can be derived from it.
Once the relative core of the decision table has been determined, it must first be judged whether it is a reduct of C relative to D, namely, a relative reduct. The judgment rule is as follows: if the nonempty subset P of attribute set C satisfies the conditions (1) POS_P(D) = POS_C(D) and (2) ∀R ⊂ P, POS_R(D) ≠ POS_C(D), then P is called a reduct of C relative to D, denoted as RED_D(C).
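The indispensability test and the relative core follow directly from these definitions; the sketch below uses an invented toy table and a brute-force check, not an efficient reduction algorithm:

```python
def partition(U, attrs, value):
    classes = {}
    for x in U:
        classes.setdefault(tuple(value(x, b) for b in attrs), set()).add(x)
    return list(classes.values())

def pos_region(U, C_attrs, D_attrs, value):
    """POS_C(D): union of C-classes contained in a single D-class."""
    d_classes = partition(U, D_attrs, value)
    pos = set()
    for eq in partition(U, C_attrs, value):
        if any(eq <= d for d in d_classes):
            pos |= eq
    return pos

def relative_core(U, C_attrs, D_attrs, value):
    """An attribute R is indispensable iff POS_{C-{R}}(D) != POS_C(D)."""
    full = pos_region(U, C_attrs, D_attrs, value)
    return {r for r in C_attrs
            if pos_region(U, [c for c in C_attrs if c != r],
                          D_attrs, value) != full}

# Toy table: c2 is constant and thus superfluous; c1 alone decides d
table = {1: {"c1": 0, "c2": 0, "d": 0},
         2: {"c1": 1, "c2": 0, "d": 1},
         3: {"c1": 1, "c2": 0, "d": 1}}
U = set(table)
val = lambda x, a: table[x][a]
core = relative_core(U, ["c1", "c2"], ["d"], val)  # {"c1"}
```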
In addition, decision rules can be perceived as data patterns which represent the relationship between the attribute values of a classification system [11, 12]. The form of a decision rule is IF η THEN δ, where η is the conditional part of the rule, a conjunction of selectors, and δ is the decision part over attribute d (d ∈ D), which usually describes the class of the decision attribution.
3.2. BP Neural Network. A BP neural network (BP-NN) is a back-propagating neural network; it belongs to the class of networks whose learning is supervised, and the learning rules are provided by the training set to describe the network behavior. The topology structure of BP-NN is shown in Figure 1.
In Figure 1, the input vector X = {x_1, x_2, …, x_s} is furnished by the condition attribution values of the reduced decision table, and the output vector O = {o_1, o_2, …, o_m} is the prediction class of the decision attribution. The output of the hidden layer can be calculated as follows:
H_j = f(∑_{i=1}^{s} ω_ij x_i − a_j), j = 1, 2, …, l, (6)
where ω_ij is the connection weight between the input and hidden layers, l is the number of hidden layer nodes, a_j is the threshold of hidden layer node j, and f is the activation function of the hidden layer, which can be chosen as a linear function or the sigmoid function f(x) = 1/(1 + e^(−x)).
The output of the output layer can be calculated as follows:

o_k = ∑_{j=1}^{l} H_j ω_jk − b_k, k = 1, 2, …, m, (7)
where ω_jk is the connection weight between the hidden and output layers, m is the number of output layer nodes, and b_k is the threshold of output layer node k.
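Equations (6) and (7) amount to the following forward pass (a pure-Python sketch; the 2-2-1 layout and all parameter values are arbitrary illustrations):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(x, w_ih, a, w_ho, b):
    """One forward pass of the network in Figure 1:
    H_j = f(sum_i w_ih[i][j] * x[i] - a[j])   -- equation (6)
    o_k = sum_j H[j] * w_ho[j][k] - b[k]      -- equation (7)
    """
    l, m = len(a), len(b)
    H = [sigmoid(sum(w_ih[i][j] * x[i] for i in range(len(x))) - a[j])
         for j in range(l)]
    o = [sum(H[j] * w_ho[j][k] for j in range(l)) - b[k] for k in range(m)]
    return H, o

# A tiny 2-2-1 network with arbitrary parameters
x = [1.0, 0.5]
w_ih = [[0.2, -0.1], [0.4, 0.3]]   # w_ih[i][j]: input i -> hidden j
a = [0.1, 0.0]                     # hidden thresholds
w_ho = [[0.5], [-0.2]]             # w_ho[j][k]: hidden j -> output k
b = [0.05]                         # output thresholds
H, o = forward(x, w_ih, a, w_ho, b)
```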
The training of the network parameters ω_ij, ω_jk, a_j, and b_k depends on the error value, which can be calculated by the following equation:

e_k = Y_k − o_k, k = 1, 2, …, m, (8)
where o_k and Y_k are the current and desired output values of the network, respectively. The weight values and thresholds of the network can be updated as follows:
ω_ij(t + 1) = ω_ij(t) + ζ(t) H_j (1 − H_j) x_i ∑_{k=1}^{m} e_k ω_jk,

ω_jk(t + 1) = ω_jk(t) + ζ(t) H_j e_k,

a_j(t + 1) = a_j(t) + ζ(t) H_j (1 − H_j) ∑_{k=1}^{m} e_k ω_jk,

b_k(t + 1) = b_k(t) + e_k, (9)
where t is the current iteration number and ζ(t) is the learning rate, whose range is the interval [0, 1].
The learning rate has a great influence on the generalization effect of the neural network. A larger learning rate significantly modifies the weight values and thresholds and increases the learning speed, but an overlarge learning rate produces large fluctuations in the learning process, while an excessively small learning rate yields slower network convergence and makes it difficult for the weight values and thresholds to stabilize. Therefore, this paper presents a variable learning rate algorithm to solve the above problems, which can be described as follows:
ζ(t) = ζ_max − t (ζ_max − ζ_min) / t_max, (10)
where ζ_max and ζ_min are the maximum and minimum of the learning rate and t_max is the maximum number of iterations.
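Equation (10) and the ω_jk update from equation (9) can be sketched as follows (the parameter values ζ_max = 0.9, ζ_min = 0.4, and t_max = 500 are the ones used later in Section 5.3; the sample H and e values are invented):

```python
def learning_rate(t, zeta_max=0.9, zeta_min=0.4, t_max=500):
    """Linearly decaying learning rate, equation (10)."""
    return zeta_max - t * (zeta_max - zeta_min) / t_max

def update_output_weights(w_ho, H, e, zeta):
    """omega_jk(t+1) = omega_jk(t) + zeta * H_j * e_k, from equation (9)."""
    return [[w_ho[j][k] + zeta * H[j] * e[k] for k in range(len(e))]
            for j in range(len(H))]

zeta0 = learning_rate(0)        # 0.9 at the first iteration
zeta_end = learning_rate(500)   # decays to 0.4 at t_max
w_new = update_output_weights([[0.5], [-0.2]], H=[0.6, 0.4], e=[0.1],
                              zeta=zeta0)
```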
4. The Proposed Approach

This section presents a new approach that connects rough sets with BP neural network to establish a classification system. The section has five main parts, elaborated in the following subsections.
4.1. The System Construction Process. The construction process of the classification system is as follows.
(1) Preprocessing the historical dataset to obtain the sample data is the application precondition of rough sets and BP neural network. In the process of preprocessing, the historical dataset is discretized through the PSO algorithm to construct the decision table.

(2) Simplify the input of the neural network, including the input dimension and the number of training samples, through attribute reduction based on the significance degree algorithm. Deleting identical rows in the decision table simplifies the training samples, and eliminating superfluous columns (condition attributes) simplifies the network input dimension. Thus, reduce the decision table using some reduct, or the union of several calculated reducts; that is, remove from the table the attributes not belonging to the chosen reducts.

(3) BP neural network integrated with rough sets is used to build the network over the reduced set of data. Decision rules are extracted from the reduced decision table, as the basis of the network structure, through a value reduction algorithm of rough sets.

(4) Perform network learning for the parameters ω_ij, ω_jk, a_j, and b_k. Repeat this step until there is no significant change in the output of the network. The system construction process is shown in Figure 2.
4.2. The Discretization Algorithm for Decision Table. Because rough sets theory cannot directly process continuous attribute values, a decision table composed of continuous values must first be discretized. Thus, the positions of the breakpoints must be selected appropriately to discretize the attribute values, so as to obtain fewer breakpoints, a larger dependency degree between attributes, and less redundant information.
In this paper, the basic particle swarm optimization (PSO) algorithm described in [28] is used to optimize the positions of the breakpoints. In PSO, a swarm of particles is initialized randomly in the solution space, and each particle is updated by a certain rule to explore the optimal solution after several iterations. Particles are updated by tracking two "extremums" in each iteration: one is the optimal solution P_best found by the particle itself, and the other is the optimal solution G_best found by the particle swarm. The specific iteration formulas can be expressed as follows:

v_i^(n+1) = ω^n v_i^n + c_1 r (P_best − x_i^n) + c_2 r (G_best − x_i^n),
x_i^(n+1) = x_i^n + v_i^(n+1), (11)
where i = 1, 2, …, M; M is the number of particles; n is the current iteration number; c_1 and c_2 are the acceleration factors; r is a random number between 0 and 1; x_i^n is the current position of particle i; v_i^n is the current update speed of particle i; and ω^n is the current inertia weight, which can be updated by the following equation:

ω^n = (ω_max − ω_min)(N_max − n) / N_max + ω_min, (12)
where ω_max and ω_min are the maximum and minimum of the inertia weight and N_max is the maximum number of iterations.
The main discretization steps for the decision table through PSO can be described as follows.

(1) Normalize the attribute values and initialize the breakpoints number h, the maximum number of iterations N_max, the minimum dependency degree between the condition attributes and the decision attribute γ_min, the number of particles M, the initial position x_i^0, and the initial velocity v_i^0, where x_i^0 and v_i^0 are h × μ matrices (μ is the number of attributes).

(2) Discretize the attribute values with the position of particle i and calculate the dependency degree γ_i as the fitness value of each particle, so as to initialize P_best and G_best.

(3) Update x_i, v_i (with v_i ≤ v_max), and ω^n. Discretize the attribute values and calculate the fitness value of each particle, so as to update P_best and G_best.

(4) If n > N_max or γ_i ≥ γ_min, the iteration ends and G_best or x_i is output; otherwise, go to step (3).

(5) According to the needs, set a different h and return to step (1) to search for the optimal particles. Then compare the optimal particles and select the better one as the basis of discretization for the decision table. The more detailed discretization process is elaborated in Section 4.5.
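The steps above can be sketched in Python. This is a minimal illustration rather than the authors' implementation: each particle encodes an h × μ breakpoint matrix as a flat vector, the fitness is the dependency degree γ of the discretized table, and the helper names (`pso_discretize`, `dependency_degree`, `discretize`) are hypothetical.

```python
import random

def dependency_degree(cond_rows, decisions):
    # gamma_C(D): fraction of objects whose discretized condition vector
    # determines a unique decision value (positive-region size / |U|).
    groups = {}
    for c, d in zip(cond_rows, decisions):
        groups.setdefault(tuple(c), set()).add(d)
    good = sum(1 for c in cond_rows if len(groups[tuple(c)]) == 1)
    return good / len(cond_rows)

def discretize(data, cuts):
    # Replace each value by the index of the interval it falls into,
    # given one sorted breakpoint list per attribute.
    return [[sum(v > b for b in col) for v, col in zip(row, cuts)]
            for row in data]

def pso_discretize(data, decisions, h=2, M=30, N_max=100, gamma_min=0.9,
                   c1=1.6, c2=1.6, w_max=0.9, w_min=0.4, v_max=0.6, seed=1):
    rng = random.Random(seed)
    mu = len(data[0])          # number of condition attributes
    dim = mu * h               # particle encodes an h x mu breakpoint matrix
    def decode(flat):
        return [sorted(flat[a * h:(a + 1) * h]) for a in range(mu)]
    def fit(flat):
        return dependency_degree(discretize(data, decode(flat)), decisions)
    xs = [[rng.random() for _ in range(dim)] for _ in range(M)]
    vs = [[0.0] * dim for _ in range(M)]
    pbest = [(fit(x), list(x)) for x in xs]
    gbest = max(pbest)
    for n in range(N_max):
        # inertia weight, equation (12)
        w = (w_max - w_min) * (N_max - n) / N_max + w_min
        for i in range(M):
            r1, r2 = rng.random(), rng.random()
            for k in range(dim):
                # velocity and position updates, equation (11)
                vs[i][k] = (w * vs[i][k]
                            + c1 * r1 * (pbest[i][1][k] - xs[i][k])
                            + c2 * r2 * (gbest[1][k] - xs[i][k]))
                vs[i][k] = max(-v_max, min(v_max, vs[i][k]))
                xs[i][k] += vs[i][k]
            f = fit(xs[i])
            if f > pbest[i][0]:
                pbest[i] = (f, list(xs[i]))
            if f > gbest[0]:
                gbest = (f, list(xs[i]))
        if gbest[0] >= gamma_min:      # stopping rule of step (4)
            break
    return gbest[0], decode(gbest[1])
```

Running this for several values of h and keeping the particle with the best dependency degree corresponds to step (5).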
4.3. Rough Sets for the Attribution Reduction. One of the fundamental steps in the proposed method is the reduction of pattern dimensionality through feature extraction and feature selection [10]. Rough sets theory provides tools for expressing inexact dependencies within data [29]. Features may be irrelevant (having no effect on the processing performance), and attribution reduction can discard such redundant information to enable the classification of more objects with a high accuracy [30].

In this paper, an algorithm based on the attribution significance degree is applied to the attribution reduction of the decision table. The specific reduction steps can be described as follows.
(1) Construct the decision table DT = (U, C ∪ D) and calculate the relative core CORE_D(C) of the original decision table. Initialize reduct = CORE_D(C) and the redundant attribute set redundant = C − CORE_D(C) = {α_1, α_2, ..., α_τ} (τ is the number of attributes in redundant).
(2) If POS_reduct(D) = POS_C(D) and, for all R ⊂ reduct, POS_R(D) ≠ POS_C(D), then the set reduct is a relative reduct of the decision table.
(3) Otherwise, calculate the significance degree of each attribute in redundant through the following equation:

$$\operatorname{sig}\left(\alpha_j, \text{reduct}, D\right) = \gamma_{\{\alpha_j\} \cup \text{reduct}}(D) - \gamma_{\text{reduct}}(D), \quad j = 1, 2, \ldots, \tau. \tag{13}$$
Journal of Applied Mathematics 5
Figure 2: The system construction process (dataset → preprocess and discretize → construct decision table → original decision table → attribution reduction → reduced decision table → extract rules, build and train network → optimum network → classification system).
The attribute in redundant with the maximum significance degree is marked as α_max. Update reduct and redundant through the following equations:

$$\text{reduct} = \text{reduct} \cup \{\alpha_{\max}\},$$
$$\text{redundant} = \text{redundant} - \{\alpha_{\max}\}. \tag{14}$$
(4) Go to step (2) until POS_reduct(D) satisfies the conditions; then output RED_D(C) = reduct.
Therefore, the irrelevant features are neglected and the DT is reduced. Thus, a reduced decision table is constructed, which can be regarded as the training set to optimize the structure of the RS-BPNN classifier.
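Steps (1)-(4) can be sketched as follows. This is a minimal illustration under the usual rough-set definitions (positive region, dependency degree γ, relative core), with hypothetical helper names, and it omits the final minimality check of step (2); attributes are column indices into a flat table.

```python
def positive_region(table, B, d_col):
    # POS_B(D): objects whose B-equivalence class has a unique decision value.
    groups = {}
    for idx, row in enumerate(table):
        groups.setdefault(tuple(row[a] for a in B), []).append(idx)
    pos = set()
    for members in groups.values():
        if len({table[i][d_col] for i in members}) == 1:
            pos.update(members)
    return pos

def gamma(table, B, d_col):
    # Dependency degree gamma_B(D) = |POS_B(D)| / |U|.
    return len(positive_region(table, B, d_col)) / len(table)

def relative_core(table, C, d_col):
    # An attribute is in CORE_D(C) if removing it shrinks the positive region.
    full = positive_region(table, C, d_col)
    return [a for a in C
            if positive_region(table, [b for b in C if b != a], d_col) != full]

def reduce_attributes(table, C, d_col):
    # Greedy reduction: start from the relative core and repeatedly add the
    # attribute with the maximum significance degree, equation (13).
    reduct = relative_core(table, C, d_col)
    redundant = [a for a in C if a not in reduct]
    target = gamma(table, C, d_col)
    while gamma(table, reduct, d_col) < target:
        best = max(redundant, key=lambda a: gamma(table, reduct + [a], d_col))
        reduct.append(best)
        redundant.remove(best)
    return sorted(reduct)
```

On a toy table where the decision depends only on the first and third condition attributes, the sketch returns exactly those two columns.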
4.4. Rough Sets for the Rules Extraction. The knowledge in a trained network is encoded in its connection weights and is distributed numerically throughout the whole structure of the neural network. For this knowledge to be useful in the context of a classification system, it has to be articulated in a symbolic form, usually in the form of IF-THEN rules. Since a neural network is a "black box" for users, the rules implicit in the connection weights are difficult to understand, and extracting rules from a neural network is extremely tough because of the nonlinear and complicated nature of the data transformation conducted in the multiple hidden layers [31]. In this paper, the decision rules are instead extracted from the reduced decision table using the value reduction algorithm of rough sets [32]. This algorithm can be elaborated in the following steps.
Step 1. Check the condition attributes of the reduced decision table column by column. If deleting a column brings conflicting objects, preserve the original attribute value of these conflicting objects. If it brings no conflicts but produces duplicate objects, mark this attribute value of the duplicate objects as "∗", and mark this attribute value as "?" for the other objects.
Step 2. Delete the possible duplicate objects and check every object that contains the mark "?". If the decision can be determined by the unmarked attribute values alone, then "?" should be modified to "∗"; otherwise, it is restored to the original attribute value. If all condition attributes of a particular object are marked, the attribute values marked with "?" should be restored to the original attribute values.
Step 3. Delete the objects whose condition attributes are all marked as "∗", as well as the possible duplicate objects.
Step 4. If only one condition attribute value differs between two objects and this attribute value of one object is marked as "∗", then, if the decision of this object can be determined by its unmarked attribute values, delete the other object; otherwise, delete this object.
Each object in the decision table value-reduced through the above steps represents a classification rule, and the number of attributes not marked as "∗" in an object makes up the condition number of that rule. Moreover, this rule set provides a rational basis for the structure of the BP neural network.
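A value-reduced object can thus be read as an IF-THEN rule in which "∗" acts as a wildcard over a reduced-away attribute. A small sketch (the rule encoding and helper names are hypothetical) of how such rules classify a discretized sample:

```python
def matches(rule_cond, sample):
    # '*' marks an attribute removed by value reduction: it matches any value.
    return all(r == '*' or r == s for r, s in zip(rule_cond, sample))

def classify(rules, sample):
    # rules: list of (condition tuple, decision class); first matching rule fires.
    for cond, decision in rules:
        if matches(cond, sample):
            return decision
    return None  # no rule covers the sample

# Example rules shaped like the paper's extracted rule "1 2 3 * 1 -> class 1".
rules = [((1, 2, 3, '*', 1), 1),
         ((2, '*', 2, 3, 2), 2)]
```

Here the condition number of the first rule is 4, since four of its five attribute values are unmarked.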
4.5. The Flowchart of the Proposed Approach. According to the above description of the classifier based on the integration of rough sets and BP neural network, the proposed approach is an iterative algorithm that can easily be coded on a computer; the flowchart is summarized in Figure 3.
5. Simulation Example
A classifier named RS-BPNN (short for rough sets with BP neural network) was set up and implemented in VC++ 6.0. In this section, an engineering application of shearer running status classification in a coal mine was put forward as a simulation example to verify the feasibility and effectiveness of the proposed approach. The classification capabilities of different types of neural network models were compared, and the proposed approach was shown to outperform the others.
5.1. The Construction of Decision Table. Due to the poor working conditions of coal mining, many parameters relate to the running status of the shearer, mainly including cutting motor current, cutting motor temperature, traction motor current, traction motor temperature, traction speed, tilt angle of the shearer body, tilt angle of the rocker, and vibration frequency of the rocker transmission gearbox, marked as C1, C2, C3, C4, C5, C6, C7, and C8, respectively. According to the information database acquired from the 11070 coal face in the no. 13 Mine of Pingdingshan Coal Industry Group Co., 200 groups of datasets of shearer running status were rearranged, as shown in Table 1. This "running status" referred to the cutting status of the shearer rocker and could be expressed by the ratio of actual cutting load to rated load.

Figure 3: The flowchart of the proposed approach (construct DT = (U, C ∪ D); discretize the decision table through PSO; perform the attribution reduction and reduce the decision table; extract rules and determine s, l, m; train the RS-BPNN; output the classification system).
Therefore, in decision table DT, the condition attribute set and the decision attribute set could be expressed as C = {C1, C2, C3, C4, C5, C6, C7, C8} and D = {d} (d denoted the shearer running status). The values of the decision attribute were discretized based on the following assumptions: for the ratio of actual cutting load to rated load of the shearer being greater than 1.6, d = 4; for this ratio being in the interval (1.2, 1.6], d = 3; for this ratio being in the interval (0.8, 1.2], d = 2; and for this ratio being less than 0.8, d = 1. The discretized values of the condition attributes could be determined through the algorithm described in Section 4.2.
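The decision-attribute coding above can be written directly as a small function (the name `decision_class` is illustrative; the boundary at exactly 0.8 is assumed to fall into class 1, which the text leaves implicit):

```python
def decision_class(load_ratio):
    # Map the ratio of actual cutting load to rated load to the class d.
    if load_ratio > 1.6:
        return 4
    if load_ratio > 1.2:
        return 3
    if load_ratio > 0.8:
        return 2
    return 1
```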
The condition attributes to be discretized were C1 ∼ C8, so μ = 8. The other parameters of PSO were initialized as follows: M = 100, N_max = 200, γ_min = 0.9, v_max = 0.6, c_1 = c_2 = 1.6, r is a random number from [0, 1], ω_max = 0.9, and ω_min = 0.4. As the breakpoints number h had a great influence on the dependency degree γ_C(D) of the discretized decision table, two important indicators, the dependency degree and the simulation time, were compared when h was assigned various values. The comparison results are shown in Figure 4.
From Figure 4, it was observed that the dependency degree and the simulation time both showed an ascending tendency with the increase of the breakpoints number h. The dependency degree was not significantly increased, while the simulation time increased obviously, when h ≥ 2. So h was set equal to 2, giving γ_C(D) = 0.98. The breakpoints of each attribute value are shown in Table 2. The breakpoints were applied to discretize the parameter values in Table 1: if a value was in the interval (0, Br1], the discretized attribute value was labeled as 1; if the value was in the interval (Br1, Br2), it was labeled as 2; otherwise, it was labeled as 3. The discretized decision table is shown in Table 3.

Figure 4: The change curves of dependency degree and simulation time versus the parameter h.

Figure 5: Attribution reduction.
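The labeling rule just described can be sketched as a one-line mapping per attribute (the function name `label` and the C1 breakpoints used in the example are taken from Table 2; decimal placement in that table is reconstructed):

```python
def label(value, br1, br2):
    # (0, Br1] -> 1, (Br1, Br2) -> 2, otherwise -> 3
    if value <= br1:
        return 1
    if value < br2:
        return 2
    return 3
```

For C1 with Br1 = 48.56 and Br2 = 65.25, a current of 45.12 A is labeled 1, 50.25 A is labeled 2, and 89.23 A is labeled 3, consistent with Tables 1 and 3.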
5.2. Relative Reduct and Rules Extraction. Through the attribution reduction algorithm presented in Section 4.3, the function "CORE(C, D)" was invoked to obtain the relative core of the decision table, CORE_D(C) = {C1, C8}, as shown in Figure 5. The minimum relative reduct of the decision table, RED_D(C) = {C1, C3, C5, C6, C8}, was determined by invoking the function "Reduct".
In the module "Rough Set for Attribution Reduction", a new decision table of 200 × 6 was established from the attributes C1, C3, C5, C6, C8 and the decision attribute d. The function "Refresh(DT)" was called to reject the repeated objects in this new decision table, and a reduced decision table containing 79 objects was finally provided, as shown in Figure 6. Subsequently, the decision rules were extracted through the function "Extract rules", which was based on the value reduction algorithm, and 25 rules were obtained, as shown in Figure 6. The extracted rules could reveal the potential rules (knowledge) in the dataset. For example, the first rule "1 2 3 ∗ 1 → 1" in Figure 6 illustrated that if cutting motor current C1 ∈ (0, 48.56], traction motor current C3 ∈ (82.37, 95.42), traction speed C5 ∈ [3.24, +∞), tilt angle of the shearer body C6 was an arbitrary value, and vibration frequency of the rocker transmission gearbox C8 ∈ (0, 651.28], then the running status class was 1, which was in accordance with the engineering practice. Moreover, the RS-BPNN classifier in fact diagnoses the status of objects according to these potential rules (knowledge), and the structure of the network could be constructed reasonably based on the number of rules so as to enhance the classification accuracy.

Figure 6: Rules extraction.

Figure 7: The structure of RS-BPNN classifier (five input nodes, two hidden layers of five neurons each, and one output node, with weights ω and biases b).
5.3. The Structure and Testing of RS-BPNN Classifier. Because 25 rules (knowledge) were obtained from the dataset, the RS-BPNN was constructed with double hidden layers, and the number of neurons was 5 × 5, which could completely contain these rules (knowledge). The number of input layer nodes was s = 5 and the number of output layer nodes was m = 1. The other parameters of the network were determined as follows: ζ_max = 0.9, ζ_min = 0.4, t_max = 500; the initial connection weights were assigned somewhat random values. The structure of the RS-BPNN classifier is shown in Figure 7.
The reduced decision table, as the training set, was presented to the network several times. The testing set, composed of 40 samples extracted randomly from the information database, was not seen by the network during the training phase; it was only used for testing the generalization of the neural network after training, as shown in Table 4.
Table 1: The running status of shearer and its corresponding parameters.

Samples | C1 (A) | C2 (°C) | C3 (A) | C4 (°C) | C5 (m/min) | C6 (°) | C7 (°) | C8 (Hz) | Running status
1 | 45.12 | 78.74 | 65.46 | 88.16 | 3.51 | 2.52 | 45.12 | 628.85 | 0.78
2 | 48.19 | 75.22 | 70.78 | 84.28 | 3.42 | 3.62 | 53.22 | 683.46 | 0.79
3 | 50.25 | 80.19 | 85.14 | 91.25 | 2.88 | 5.36 | 40.22 | 705.76 | 0.82
4 | 45.26 | 82.43 | 62.45 | 84.15 | 3.12 | 2.85 | 49.36 | 642.14 | 0.76
5 | 64.28 | 77.16 | 90.45 | 91.41 | 2.56 | 2.26 | 58.55 | 823.29 | 1.22
6 | 46.22 | 85.24 | 68.55 | 80.46 | 3.26 | 2.86 | 58.46 | 711.24 | 0.83
7 | 72.56 | 85.12 | 102.42 | 102.36 | 2.76 | 10.26 | 38.41 | 881.25 | 1.17
8 | 62.33 | 74.24 | 92.46 | 84.46 | 3.56 | 8.26 | 60.27 | 821.46 | 1.12
9 | 44.23 | 89.19 | 63.15 | 85.78 | 3.82 | 3.22 | 46.25 | 578.26 | 0.75
10 | 89.23 | 81.24 | 120.36 | 86.14 | 2.01 | 1.05 | 50.36 | 1035.48 | 1.63
⋮
198 | 49.22 | 75.24 | 71.22 | 80.25 | 3.46 | 4.26 | 45.26 | 542.16 | 0.72
199 | 68.65 | 87.16 | 118.64 | 90.46 | 4.66 | 4.36 | 49.35 | 845.29 | 1.36
200 | 50.22 | 78.15 | 90.46 | 95.44 | 3.22 | 7.22 | 44.21 | 682.64 | 1.11
Table 2: The corresponding breakpoints of each attribute value at h = 2.

Breakpoints | C1 (A) | C2 (°C) | C3 (A) | C4 (°C) | C5 (m/min) | C6 (°) | C7 (°) | C8 (Hz)
Br1 | 48.56 | 81.49 | 82.37 | 84.79 | 2.41 | 4.16 | 47.35 | 651.28
Br2 | 65.25 | 85.78 | 95.42 | 90.25 | 3.24 | 7.38 | 54.25 | 847.46
Table 3: The discretized decision table.

U | C1 | C2 | C3 | C4 | C5 | C6 | C7 | C8 | d
1 | 1 | 1 | 1 | 2 | 3 | 1 | 1 | 1 | 1
2 | 1 | 1 | 1 | 1 | 3 | 1 | 2 | 2 | 1
3 | 2 | 1 | 2 | 3 | 2 | 2 | 1 | 2 | 2
4 | 1 | 2 | 1 | 1 | 2 | 1 | 2 | 1 | 1
5 | 3 | 1 | 2 | 3 | 2 | 1 | 3 | 2 | 3
6 | 1 | 2 | 1 | 1 | 3 | 1 | 3 | 2 | 2
7 | 3 | 2 | 3 | 3 | 2 | 3 | 1 | 3 | 2
8 | 2 | 1 | 2 | 1 | 3 | 3 | 3 | 2 | 2
9 | 1 | 3 | 1 | 2 | 3 | 1 | 1 | 1 | 2
10 | 3 | 1 | 3 | 2 | 1 | 1 | 2 | 3 | 4
⋮
198 | 2 | 1 | 1 | 1 | 3 | 2 | 1 | 1 | 1
199 | 3 | 3 | 3 | 3 | 3 | 2 | 2 | 2 | 2
200 | 2 | 1 | 2 | 3 | 2 | 2 | 1 | 2 | 2
After the training phase, the RS-BPNN classifier was obtained. In order to test the performance of this classifier expediently, a testing interface was designed, as shown in Figure 8. The testing set was imported with the "Import samples" button and was processed with the functions "Discretize" and "Reduce attribution". The prediction results of the RS-BPNN classifier were output through the function "Classify" in Figure 8. The contrast between the prediction class and the actual class is shown clearly in Figure 9.
When the classification accuracy of RS-BPNN was computed in Figure 8, the output of the classifier could not be exactly equal to the desired output. Therefore, the output was allowed to differ from the desired value: if the difference between the classifier output and the desired value of the decision was less than some preset value, the result was regarded as a correct one. This tolerance margin was decided during the simulation evaluation and was finally set to 5% of the desired value. As seen from Figures 8 and 9, the classification accuracy was 90.00% and the average classification error was 2.02%. The present model showed higher accuracy and lower error for the classification of shearer running status. The testing results showed that the proposed classifier was satisfactory and could be used in engineering applications in the future.

Figure 8: The testing interface of RS-BPNN classifier.
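The tolerance-margin scoring described above can be sketched as follows (a minimal illustration; the function name is hypothetical, and the 5% margin is taken relative to the desired value as the text states):

```python
def accuracy_with_tolerance(predicted, desired, tol=0.05):
    # A prediction counts as correct if it lies within tol (here 5%)
    # of the desired decision value.
    correct = sum(1 for p, y in zip(predicted, desired)
                  if abs(p - y) <= tol * abs(y))
    return correct / len(desired)
```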
Table 4: The testing samples for the classifier.

Samples | C1 (A) | C2 (°C) | C3 (A) | C4 (°C) | C5 (m/min) | C6 (°) | C7 (°) | C8 (Hz) | d
1 | 75.24 | 71.38 | 128.45 | 85.46 | 2.00 | 1.02 | 52.45 | 1038.44 | 1.65
2 | 89.46 | 78.25 | 110.46 | 80.57 | 2.86 | 2.15 | 49.21 | 992.46 | 1.78
3 | 55.46 | 80.19 | 81.14 | 90.25 | 2.89 | 3.36 | 56.22 | 810.76 | 1.08
4 | 75.26 | 82.43 | 62.45 | 84.15 | 3.12 | 3.55 | 52.46 | 900.47 | 1.39
5 | 56.28 | 78.16 | 100.45 | 90.41 | 2.56 | 8.26 | 58.55 | 523.29 | 0.68
6 | 49.25 | 86.24 | 68.55 | 80.46 | 3.26 | 3.06 | 60.46 | 711.24 | 0.72
7 | 60.56 | 80.12 | 112.42 | 92.36 | 2.76 | 7.48 | 48.41 | 871.25 | 0.92
8 | 70.33 | 70.24 | 96.46 | 80.46 | 3.76 | 3.26 | 60.27 | 921.46 | 1.22
9 | 74.23 | 85.19 | 113.15 | 79.78 | 2.32 | 2.25 | 56.25 | 1000.46 | 1.66
10 | 89.23 | 76.24 | 120.36 | 85.14 | 3.31 | 4.25 | 55.46 | 1035.48 | 1.63
⋮
38 | 58.46 | 74.24 | 90.22 | 80.25 | 2.46 | 3.26 | 55.26 | 849.16 | 1.35
39 | 65.02 | 87.16 | 118.64 | 90.46 | 5.06 | 9.24 | 48.26 | 1025.29 | 1.75
40 | 50.22 | 85.15 | 81.25 | 89.44 | 3.34 | 3.46 | 44.25 | 672.64 | 0.72

Table 5: Classification performance of the four models of neural networks.

The models | Training accuracy (%) | Testing accuracy (%) | Classification error (%) | Classification time (s)
WNN | 90.00 | 82.50 | 3.56 | 23.22
RS-WNN | 92.50 | 87.50 | 2.66 | 18.48
BP-NN | 90.00 | 85.00 | 2.86 | 22.26
RS-BPNN | 95.00 | 90.00 | 2.02 | 18.33

Table 6: Classification results of the four methods.

The methods | Class I (%) | Class II (%) | Class III (%)
WNN | 93.33 | 86.67 | 80.00
RS-WNN | 100 | 93.33 | 86.67
BP-NN | 93.33 | 80.00 | 86.67
RS-BPNN | 100 | 93.33 | 100

5.4. Discussion. To evaluate the classification capabilities of different types of neural network models (WNN, short for wavelet neural network; RS-WNN, short for rough sets coupled with wavelet neural network; BP-NN, short for back-propagation neural network; and RS-BPNN, short for rough sets coupled with back-propagation neural network), their best architectures and effective training parameters had to be found. In order to obtain the best model, both inputs and outputs were normalized. The number of nodes in the hidden layer of WNN and RS-WNN was equal to the number of wavelet bases. If this number was too small, WNN/RS-WNN might not reflect the complex functional relationship between the input data and the output value; on the contrary, a large number might create such a complex network that it could lead to a very large output error caused by overfitting of the training sample set.
Therefore, a search mechanism was needed to find the optimal number of nodes in the hidden layer for the WNN and RS-WNN models. In this study, various numbers of nodes in the hidden layer were checked to find the best one; WNN and RS-WNN yielded the best results when the number of hidden nodes was 8.
In this subsection, WNN, RS-WNN, BP-NN, and RS-BPNN were applied to solve the problem of the above simulation example. The configurations of the simulation environment for the four algorithms were uniform and in common with the above simulation example. Initially, for each network, the connection weights were assigned somewhat random values. In order to avoid random error, the training input set was presented to the networks 100 times and the average values were calculated. The training accuracy, testing accuracy, classification error, and classification time of the four algorithms are shown in Table 5.
From the table, it was observed that the proposed model had a better classification capability and better performance than the other neural network models in predicting the nonlinear, dynamic, and chaotic behaviors in the dataset, and the proposed model was proved to outperform the others. Furthermore, the comparison results suggest that the proposed model represents a good new method for classification and decision making, and that it can be treated as a promising tool for extracting rules from datasets in industrial fields.
In order to illustrate the superiority of the proposed method, the Iris data set [23], a benchmark data set from the UCI database, was used to verify the classifiers based on the above four types of neural networks, and the compared results are shown in Table 6. The sample data were first reduced based on the attribution reduction algorithm and then used to train the neural networks, so the networks coupled with rough sets (RS-WNN and RS-BPNN) could acquire better classification accuracy than the single networks (WNN and BP-NN). The rules were extracted through the value reduction algorithm to construct the RS-BPNN structure, so the classification accuracy of the classifier based on RS-BPNN was higher than that of the RS-WNN classifier.
Figure 9: The contrast of prediction class and actual class: (a) actual class and prediction class of the 40 testing samples; (b) classification error (%).
6. Conclusions

This paper proposed a novel classifier model based on BP neural network and rough sets theory to be applied in nonlinear systems. The decision table was constructed and discretized reasonably through the PSO algorithm. The decision rules were extracted by the use of the value reduction algorithm, which provided the basis for the network structure. In order to verify its feasibility and superiority, the proposed approach was applied to a classification problem of an industrial example. The results of the comparison simulations showed that the proposed approach could generate more accurate classifications and cost less classification time than other neural network approaches and the rough-sets-based approach. The classification performance of the proposed model demonstrated that this method could be extended to other types of classification problems, such as gear fault identification, lithology recognition, and coal-rock interface recognition.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgments
The supports of the National High Technology Research and Development Program of China (no. 2013AA06A411), the National Natural Science Foundation of China (no. 51005231), the Xuyi Mine Equipment and Materials R&D Center Innovation Fund Project (no. CXJJ201306), and the Jiangsu Province National Engineering Research Center Cultivation Base for Coal Intelligent Mining Equipment in carrying out this research are gratefully acknowledged.
References
[1] W. Zhu and F. Wang, "On three types of covering-based rough sets," IEEE Transactions on Knowledge and Data Engineering, vol. 19, no. 8, pp. 1131-1143, 2007.
[2] Z. Huang, Z. Yu, and Y. Zheng, "Simple method for predicting drag characteristics of the wells turbine," China Ocean Engineering, vol. 20, no. 3, pp. 473-482, 2006.
[3] S. G. Sklavos and A. K. Moschovakis, "Neural network simulations of the primate oculomotor system IV: a distributed bilateral stochastic model of the neural integrator of the vertical saccadic system," Biological Cybernetics, vol. 86, no. 2, pp. 97-109, 2002.
[4] I. G. Belkov, I. N. Malyshev, and S. M. Nikulin, "Creating a model of passive electronic components using a neural network approach," Automation and Remote Control, vol. 74, no. 2, pp. 275-281, 2013.
[5] J. S. Chen, "Neural network-based modelling and thermally-induced spindle errors," International Journal of Advanced Manufacturing Technology, vol. 12, no. 4, pp. 303-308, 1996.
[6] A. D. Cartwright, I. S. Curthoys, and D. P. D. Gilchrist, "Testable predictions from realistic neural network simulations of vestibular compensation: integrating the behavioural and physiological data," Biological Cybernetics, vol. 81, no. 1, pp. 73-87, 1999.
[7] B. Zou, X. Y. Liao, Y. N. Zeng, and L. X. Huang, "An improved BP neural network based on evaluating and forecasting model of water quality in Second Songhua River of China," Chinese Journal of Geochemistry, vol. 25, no. 1, pp. 167-170, 2006.
[8] Y. X. Shao, Q. Chen, and D. M. Zhang, "The application of improved BP neural network algorithm in lithology recognition," in Advances in Computation and Intelligence, vol. 5370 of Lecture Notes in Computer Science, pp. 342-349, 2008.
[9] W. H. Zhou and S. Q. Xiong, "Optimization of BP neural network classifier using genetic algorithm," in Intelligence Computation and Evolutionary Computation, vol. 180 of Advances in Intelligent Systems and Computing, pp. 599-605, 2013.
[10] W. Duch, R. Setiono, and J. M. Zurada, "Computational intelligence methods for rule-based data understanding," Proceedings of the IEEE, vol. 92, no. 5, pp. 771-805, 2004.
[11] Z. Pawlak, "Rough sets," International Journal of Computer and Information Sciences, vol. 11, no. 5, pp. 341-356, 1982.
[12] Y. Wu and S. J. Wang, "Method for improving classification performance of neural network based on fuzzy input and network inversion," Journal of Infrared and Millimeter Waves, vol. 24, no. 1, pp. 15-18, 2005.
[13] Z. Pawlak, "Some issues on rough sets," in Transactions on Rough Sets I, vol. 3100 of Lecture Notes in Computer Science, pp. 1-58, 2004.
[14] Z. Pawlak and A. Skowron, "Rough sets: some extensions," Information Sciences, vol. 177, no. 1, pp. 28-40, 2007.
[15] Y. Hassan, E. Tazaki, S. Egawa, and K. Suyama, "Decision making using hybrid rough sets and neural networks," International Journal of Neural Systems, vol. 12, no. 6, pp. 435-446, 2002.
[16] V. L. Berardi, B. E. Patuwo, and M. Y. Hu, "A principled approach for building and evaluating neural network classification models," Decision Support Systems, vol. 38, no. 2, pp. 233-246, 2004.
[17] H. Ishibuchi, K. Kwon, and H. Tanaka, "A learning algorithm of fuzzy neural networks with triangular fuzzy weights," Fuzzy Sets and Systems, vol. 71, no. 3, pp. 277-293, 1995.
[18] Y. Hayashi, J. J. Buckley, and E. Czogala, "Fuzzy neural network with fuzzy signals and weights," International Journal of Intelligent Systems, vol. 8, no. 4, pp. 527-537, 1993.
[19] C. C. Hu, D. Y. Wang, K. J. Wang, and J. Cable, "Seasonal roadway classification based on neural network," Journal of South China University of Technology (Natural Science), vol. 37, no. 11, pp. 22-26, 2009.
[20] A. Subasi, M. Yilmaz, and H. R. Ozcalik, "Classification of EMG signals using wavelet neural network," Journal of Neuroscience Methods, vol. 156, no. 1-2, pp. 360-367, 2006.
[21] D. Sahoo and G. S. Dulikravich, "Evolutionary wavelet neural network for large scale function estimation in optimization," in Proceedings of the 11th AIAA/ISSMO Multidisciplinary Analysis and Optimization Conference, pp. 658-668, Portsmouth, Va, USA, September 2006.
[22] J. R. Zhang, J. Zhang, T. M. Lok, and M. R. Lyu, "A hybrid particle swarm optimization-back-propagation algorithm for feedforward neural network training," Applied Mathematics and Computation, vol. 185, no. 2, pp. 1026-1037, 2007.
[23] W. Zhang, Y. B. Shi, L. F. Zhou, and T. Lu, "Wavelet neural network classifier based on improved PSO algorithm," Chinese Journal of Scientific Instrument, vol. 31, no. 10, pp. 2203-2209, 2010.
[24] A. Sengur, I. Turkoglu, and M. C. Ince, "Wavelet packet neural networks for texture classification," Expert Systems with Applications, vol. 32, no. 2, pp. 527-533, 2007.
[25] Y. F. Hassan, "Rough sets for adapting wavelet neural networks as a new classifier system," Applied Intelligence, vol. 35, no. 2, pp. 260-268, 2011.
[26] W. Ziarko, "The discovery, analysis, and representation of data dependencies in databases," in Knowledge Discovery in Databases, pp. 213-228, 1993.
[27] Y. Hassan and E. Tazaki, "Emergent rough set data analysis," in Transactions on Rough Sets II: Rough Sets and Fuzzy Sets, J. F. Peters, Ed., pp. 343-361, Springer, Berlin, Germany, 2005.
[28] D. B. Chen and C. X. Zhao, "Particle swarm optimization with adaptive population size and its application," Applied Soft Computing Journal, vol. 9, no. 1, pp. 39-48, 2009.
[29] K. Thangavel and A. Pethalakshmi, "Dimensionality reduction based on rough set theory: a review," Applied Soft Computing Journal, vol. 9, no. 1, pp. 1-12, 2009.
[30] J. M. Wei, "Rough set based approach to selection of node," International Journal of Computational Cognition, vol. 1, no. 2, pp. 25-40, 2003.
[31] E. Xu, L. S. Shao, Y. Z. Zhang, F. Yang, and H. Li, "Method of rule extraction based on rough set theory," Computer Science, vol. 38, pp. 232-235, 2011.
[32] L. Y. Chang, G. Y. Wang, and Y. Wu, "An approach for attribute reduction and rule generation based on rough set theory," Journal of Software, vol. 10, pp. 1206-1211, 1999.
Figure 1: The topology structure of BP-NN (input layer x_1, x_2, ..., x_s; hidden layer H_1, H_2, ..., H_l; output layer o_1, o_2, ..., o_m; connection weights ω_ij and ω_jk).
where sig(α, B, D) denotes the significance degree of attribute α to attribute B relative to the decision attribute D.
For all R ∈ C, if POS_{C−{R}}(D) = POS_C(D), then R in C is superfluous for D; otherwise, R in C is indispensable for D. The set composed of all attributes that are indispensable for D is called the core of C relative to D, namely, the relative core CORE_D(C). This relative core cannot be removed from the system without a loss in the knowledge that can be derived from it.
Once determined, the relative core of the decision table must first be judged as to whether it is a reduct of C relative to D, namely, a relative reduct. The judgment rules are as follows: if the nonempty subset P of attribute set C satisfies the conditions (1) POS_P(D) = POS_C(D) and (2) for all R ⊂ P, POS_R(D) ≠ POS_C(D), then P is called a reduct of C relative to D, denoted as RED_D(C).
In addition, decision rules can be perceived as data patterns which represent the relationship between the attribute values of a classification system [11, 12]. The form of a decision rule can be given as IF η THEN δ, where η is the conditional part of the rule, a conjunction of selectors, and δ is the decision part over attribute d (d ∈ D), which usually describes the class of the decision attribute.
3.2. BP Neural Network. BP neural network (BP-NN) is a back-propagation neural network; it belongs to the class of networks whose learning is supervised, and the learning rules are provided by the training set to describe the network behavior. The topology structure of BP-NN is shown in Figure 1.
In Figure 1, the input vector X = {x_1, x_2, ..., x_s} is furnished by the condition attribute values of the reduced decision table, and the output vector O = {o_1, o_2, ..., o_m} is the prediction class of the decision attribute. The output of the hidden layer can be calculated as follows:

$$H_j = f\left(\sum_{i=1}^{s} \omega_{ij} x_i - a_j\right), \quad j = 1, 2, \ldots, l, \tag{6}$$

where ω_ij is the connection weight between the input and hidden layers, l is the number of hidden layer nodes, a_j is the threshold of hidden layer node j, and f is the activation function of the hidden layer, which can be chosen as a linear function or the sigmoid function f(x) = 1/(1 + e^{−x}).
The output of the output layer is calculated as follows:

o_k = ∑_{j=1}^{l} H_j ω_jk − b_k,  k = 1, 2, …, m (7)
where ω_jk is the connection weight between the hidden and output layers, m is the number of output layer nodes, and b_k is the threshold of output layer node k.
The training of the network parameters ω_ij, ω_jk, a_j, and b_k depends on the error value, which is calculated by the following equation:

e_k = Y_k − o_k,  k = 1, 2, …, m (8)
where o_k and Y_k are the current and desired output values of the network, respectively. The weight values and thresholds of the network are updated as follows:
ω_ij(t+1) = ω_ij(t) + ζ(t) H_j (1 − H_j) x_i ∑_{k=1}^{m} e_k ω_jk
ω_jk(t+1) = ω_jk(t) + ζ(t) H_j e_k
a_j(t+1) = a_j(t) + ζ(t) H_j (1 − H_j) ∑_{k=1}^{m} e_k ω_jk
b_k(t+1) = b_k(t) + e_k
(9)
where t is the current iteration number and ζ(t) is the learning rate, which lies in the interval [0, 1].

The learning rate has a great influence on the generalization ability of the neural network. A larger learning rate modifies the weight values and thresholds more strongly and increases the learning speed, but an overlarge learning rate produces large fluctuations in the learning process, while an excessively small one leads to slow convergence and makes it difficult for the weights and thresholds to stabilize. Therefore, this paper presents a variable learning rate algorithm to address these problems, described as follows:
ζ(t) = ζ_max − t (ζ_max − ζ_min) / t_max (10)

where ζ_max and ζ_min are the maximum and minimum learning rates and t_max is the maximum number of iterations.
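A minimal Python sketch (our illustration, not the authors' implementation) of one training step that transcribes equations (6)–(10) as printed; helper names such as `train_step` and `zeta` are our own:

```python
# Hedged sketch of one BP training step with the variable learning rate.
# The update formulas follow the paper's equations (9) as stated, not a
# re-derived gradient; convergence behavior is therefore not guaranteed.
import math

def zeta(t, t_max, z_min=0.4, z_max=0.9):
    # equation (10): learning rate decays linearly with iteration t
    return z_max - t * (z_max - z_min) / t_max

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def train_step(x, y, w_ih, w_ho, a, b, lr):
    s, l, m = len(x), len(a), len(b)
    # hidden layer, equation (6)
    H = [sigmoid(sum(w_ih[i][j] * x[i] for i in range(s)) - a[j])
         for j in range(l)]
    # output layer, equation (7)
    o = [sum(H[j] * w_ho[j][k] for j in range(l)) - b[k] for k in range(m)]
    # error, equation (8)
    e = [y[k] - o[k] for k in range(m)]
    # parameter updates, equations (9)
    for j in range(l):
        back = H[j] * (1.0 - H[j]) * sum(e[k] * w_ho[j][k] for k in range(m))
        for i in range(s):
            w_ih[i][j] += lr * back * x[i]
        a[j] += lr * back
        for k in range(m):
            w_ho[j][k] += lr * H[j] * e[k]
    for k in range(m):
        b[k] += e[k]
    return sum(err * err for err in e)
```

Called in a loop over t = 0 … t_max with `lr = zeta(t, t_max)`, this reproduces the variable-learning-rate schedule described above.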
4 The Proposed Approach
This section presents a new approach that connects rough sets with a BP neural network to establish a classification system. The section has five main parts, elaborated in the following subsections.
4.1. The System Construction Process. The construction process of the classification system is as follows.
4 Journal of Applied Mathematics
(1) Preprocessing the historical dataset to obtain sample data is the precondition for applying rough sets and BP neural network. In preprocessing, the historical dataset is discretized through the PSO algorithm to construct the decision table.
(2) Simplify the input of the neural network, including the input dimension and the number of training samples, through attribute reduction based on the significance degree algorithm. Deleting identical rows in the decision table simplifies the training samples, and eliminating superfluous columns (condition attributes) reduces the network input dimension. Thus the decision table is reduced using some reduct, or the union of several computed reducts; that is, attributes not belonging to the chosen reducts are removed from the table.
(3) The BP neural network, integrated with rough sets, is built over the reduced set of data. Decision rules are extracted from the reduced decision table through a value reduction algorithm of rough sets and serve as the basis of the network structure.
(4) Perform network learning for the parameters ω_ij, ω_jk, a_j, and b_k. Repeat this step until there is no significant change in the network output. The system construction process is shown in Figure 2.
4.2. The Discretization Algorithm for the Decision Table. Because rough sets theory cannot directly process continuous attribute values, a decision table composed of continuous values must first be discretized. The positions of the breakpoints must therefore be selected appropriately, so as to obtain fewer breakpoints, a larger dependency degree between attributes, and less redundant information.
In this paper, the basic particle swarm optimization (PSO) algorithm described in [28] is used to optimize the positions of the breakpoints. In PSO, a swarm of particles is initialized randomly in the solution space, and each particle is updated by a certain rule so as to explore the optimal solution after several iterations. Particles are updated by tracking two "extremums" in each iteration: one is the optimal solution P_best found by the particle itself, and the other is the optimal solution G_best found by the whole swarm. The iteration formulas are expressed as follows:
v_i^(n+1) = ω^n v_i^n + c_1 r (P_best − x_i^n) + c_2 r (G_best − x_i^n)
x_i^(n+1) = x_i^n + v_i^(n+1)
(11)
where i = 1, 2, …, M; M is the number of particles; n is the current iteration number; c_1 and c_2 are the acceleration factors; r is a random number between 0 and 1; x_i^n is the current position of particle i; v_i^n is the current update velocity of particle i; and ω^n is the current inertia weight, which is updated by the following equation:
ω^n = (ω_max − ω_min)(N_max − n) / N_max + ω_min (12)

where ω_max and ω_min are the maximum and minimum inertia weights and N_max is the maximum number of iterations.
The main steps of discretizing the decision table through PSO are as follows:

(1) Normalize the attribute values and initialize the breakpoints number h, the maximum iterations N_max, the minimum dependency degree between condition attributes and decision attribute γ_min, the number of particles M, and the initial positions x_i^0 and initial velocities v_i^0, which are h × μ matrices (μ is the number of attributes).
(2) Discretize the attribute values with the position of particle i and calculate the dependency degree γ_i as the fitness value of each particle, so as to initialize P_best and G_best.
(3) Update x_i, v_i (|v_i| ≤ v_max), and ω^n. Discretize the attribute values and calculate the fitness value of each particle, so as to update P_best and G_best.
(4) If n > N_max or γ_i ≥ γ_min, the iteration ends and G_best or x_i is output. Otherwise, go to step (3).
(5) According to the needs, set a different h and return to step (1) to search for the optimal particles. Then compare the optimal particles and select the better one as the basis of discretization for the decision table. The more detailed discretization process is elaborated in Section 4.5.
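The steps above can be sketched in Python. This is our hedged illustration under simplifying assumptions (the fitness is the rough-set dependency degree γ of the discretized table; names such as `pso_discretize` and `dependency_degree` are hypothetical), not the authors' code:

```python
# Sketch of PSO-based discretization: particles encode breakpoint positions
# (flattened h x mu matrix on normalized data), fitness is gamma_C(D).
import random
from collections import defaultdict

def dependency_degree(disc_rows, labels):
    # gamma_C(D) = |POS_C(D)| / |U|: fraction of objects whose
    # condition-equivalence class has a single decision value
    decisions, counts = defaultdict(set), defaultdict(int)
    for row, d in zip(disc_rows, labels):
        key = tuple(row)
        decisions[key].add(d)
        counts[key] += 1
    pos = sum(n for k, n in counts.items() if len(decisions[k]) == 1)
    return pos / len(disc_rows)

def discretize(data, cuts):
    # label each value by the interval its attribute's sorted cuts define
    return [[sum(v > c for c in sorted(col_cuts)) + 1
             for v, col_cuts in zip(row, cuts)]
            for row in data]

def pso_discretize(data, labels, h=2, M=20, n_max=40, gamma_min=0.98,
                   c1=1.6, c2=1.6, w_max=0.9, w_min=0.4, v_max=0.6):
    mu = len(data[0])
    dims = mu * h
    fitness = lambda flat: dependency_degree(
        discretize(data, [flat[a * h:(a + 1) * h] for a in range(mu)]), labels)
    X = [[random.random() for _ in range(dims)] for _ in range(M)]
    V = [[0.0] * dims for _ in range(M)]
    pbest = [(fitness(x), list(x)) for x in X]
    gbest = max(pbest)
    for n in range(n_max):
        w = (w_max - w_min) * (n_max - n) / n_max + w_min  # equation (12)
        for i in range(M):
            for j in range(dims):
                r1, r2 = random.random(), random.random()
                V[i][j] = (w * V[i][j]
                           + c1 * r1 * (pbest[i][1][j] - X[i][j])
                           + c2 * r2 * (gbest[1][j] - X[i][j]))
                V[i][j] = max(-v_max, min(v_max, V[i][j]))  # |v| <= v_max
                X[i][j] += V[i][j]
            f = fitness(X[i])
            if f > pbest[i][0]:
                pbest[i] = (f, list(X[i]))
                if f > gbest[0]:
                    gbest = pbest[i]
        if gbest[0] >= gamma_min:  # early stop, step (4)
            break
    cuts = [gbest[1][a * h:(a + 1) * h] for a in range(mu)]
    return cuts, gbest[0]
```

Step (5) then amounts to calling `pso_discretize` for several values of h and keeping the breakpoint set with the best trade-off.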
4.3. Rough Sets for Attribute Reduction. One of the fundamental steps in the proposed method is the reduction of pattern dimensionality through feature extraction and feature selection [10]. Rough sets theory provides tools for expressing inexact dependencies within data [29]. Features may be irrelevant (having no effect on processing performance), and attribute reduction can discard the redundant information, enabling the classification of more objects with high accuracy [30].
In this paper, an algorithm based on attribute significance degree is applied to the attribute reduction of the decision table. The specific reduction steps are as follows.
(1) Construct the decision table DT = (U, C ∪ D) and calculate the relative core CORE_D(C) of the original decision table. Initialize reduct = CORE_D(C) and the redundant attribute set redundant = C − CORE_D(C) = {α_1, α_2, …, α_τ} (τ is the number of attributes in redundant).
(2) If POS_reduct(D) = POS_C(D) and, for all R ⊂ reduct, POS_R(D) ≠ POS_C(D), then the set reduct is a relative reduct of the decision table.
(3) Otherwise, calculate the significance degree of each attribute in redundant through the following equation:

sig(α_j, reduct, D) = γ_{reduct ∪ {α_j}}(D) − γ_reduct(D),  j = 1, 2, …, τ (13)
Figure 2: The system construction process (dataset → preprocess and discretize → original decision table → attribute reduction → reduced decision table → extract rules → build network → train network → optimum network → classification system).
The attribute in redundant with the maximum significance degree is marked as α_max. Update reduct and redundant through the following equations:

reduct = reduct ∪ {α_max}
redundant = redundant − {α_max}
(14)
(4) Go to step (2); once POS_reduct(D) satisfies the conditions, output RED_D(C) = reduct.
Therefore, the irrelevant features are discarded and the DT is reduced. The reduced decision table thus constructed is regarded as the training set used to optimize the structure of the RS-BPNN classifier.
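Steps (1)–(4) can be sketched as a greedy search in Python. This is our illustration (function names such as `relative_core` and `significance_reduct` are assumptions), with the dependency degree γ computed from condition-equivalence classes:

```python
# Sketch of significance-degree attribute reduction: start from the
# relative core and greedily add the most significant redundant attribute
# until the reduct preserves the full dependency degree.
from collections import defaultdict

def gamma(rows, labels, attrs):
    """Dependency degree of D on the attribute subset `attrs`."""
    decisions, counts = defaultdict(set), defaultdict(int)
    for row, d in zip(rows, labels):
        key = tuple(row[a] for a in attrs)
        decisions[key].add(d)
        counts[key] += 1
    pos = sum(n for k, n in counts.items() if len(decisions[k]) == 1)
    return pos / len(rows)

def relative_core(rows, labels):
    """Attributes whose removal lowers the dependency degree."""
    all_attrs = list(range(len(rows[0])))
    full = gamma(rows, labels, all_attrs)
    return [a for a in all_attrs
            if gamma(rows, labels, [b for b in all_attrs if b != a]) < full]

def significance_reduct(rows, labels):
    all_attrs = list(range(len(rows[0])))
    target = gamma(rows, labels, all_attrs)
    reduct = relative_core(rows, labels)          # step (1)
    redundant = [a for a in all_attrs if a not in reduct]
    while gamma(rows, labels, reduct) < target and redundant:  # step (2)
        base = gamma(rows, labels, reduct)
        # equation (13): sig(a) = gamma(reduct + {a}) - gamma(reduct)
        a_max = max(redundant,
                    key=lambda a: gamma(rows, labels, reduct + [a]) - base)
        reduct.append(a_max)                      # equations (14)
        redundant.remove(a_max)
    return sorted(reduct)
```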
4.4. Rough Sets for Rule Extraction. The knowledge in a trained network is encoded in its connection weights and is distributed numerically throughout the whole structure of the neural network. For this knowledge to be useful in the context of a classification system, it has to be articulated in a symbolic form, usually as IF-THEN rules. Since a neural network is a "black box" for users, the rules implicit in the connection weights are difficult to understand, and extracting rules from a neural network is extremely tough because of the nonlinear and complicated data transformations conducted in the multiple hidden layers [31]. In this paper, the decision rules are instead extracted from the reduced decision table using the value reduction algorithm of rough sets [32]. This algorithm can be elaborated in the following steps.
Step 1. Check the condition attributes of the reduced decision table column by column. If deleting a column brings conflicting objects, preserve the original attribute values of those conflicting objects. If it brings no conflicts but produces duplicate objects, mark this attribute value as "*" for the duplicate objects and as "?" for the other objects.

Step 2. Delete any duplicate objects and check every object containing the mark "?". If the decision can be determined by the unmarked attribute values alone, modify "?" to "*"; otherwise restore the original attribute value. If all condition attributes of a particular object are marked, restore the attribute values marked "?" to their original values.

Step 3. Delete the objects whose condition attributes are all marked "*", along with any duplicate objects.

Step 4. If only one condition attribute value differs between two objects and that value is marked "*" for one of them, then, if the decision of that object can be determined by its unmarked attribute values, delete the other object; otherwise delete this object.
Each object in the value-reduced decision table represents a classification rule, and the number of attributes not marked "*" in an object makes up the condition number of that rule. Moreover, this rule set provides a rational basis for the structure of the BP neural network.
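The marking procedure above is intricate; the following simplified Python sketch is our heuristic variant of its effect, not a line-by-line transcription (the name `value_reduce` is hypothetical). For each object, it replaces an attribute value by "*" whenever the remaining values still determine the decision uniquely over the table, then deduplicates the resulting rules:

```python
# Simplified value-reduction sketch (our assumption-laden variant):
# drop per-object attribute values whose removal creates no conflict
# with an object of a different class.
def value_reduce(rows, labels):
    n_attr = len(rows[0])
    rules = []
    for row, d in zip(rows, labels):
        keep = list(range(n_attr))
        for a in range(n_attr):
            if a not in keep:
                continue
            trial = [b for b in keep if b != a]
            # would dropping attribute a make this object indistinguishable
            # from an object of a different class?
            conflict = any(all(other[b] == row[b] for b in trial) and od != d
                           for other, od in zip(rows, labels))
            if not conflict:
                keep = trial
        rule = tuple(row[a] if a in keep else "*" for a in range(n_attr))
        if (rule, d) not in rules:  # drop duplicate rules
            rules.append((rule, d))
    return rules
```

Applied to a reduced decision table, this produces rules of the same shape as the "123*11" patterns discussed in Section 5.2.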
4.5. The Flowchart of the Proposed Approach. According to the above description of the classifier based on the integrated rough sets and BP neural network algorithm, the proposed approach is iterative and can easily be coded on a computer. The flowchart is summarized in Figure 3.
5 Simulation Example
A classifier named RS-BPNN (short for rough sets with BP neural network) was set up and implemented in VC++ 6.0. In this section, an engineering application of shearer running status classification in a coal mine is put forward as a simulation example to verify the feasibility and effectiveness of the proposed approach. The classification capabilities of different types of neural network models were compared, and the proposed approach was shown to outperform the others.
Figure 3: The flowchart of the proposed approach (construct DT = (U, C ∪ D); discretize the decision table through the PSO loop with initialization of x_i^0, v_i^0, h, N_max, γ_min, M, c_1, c_2, r, v_max, ω_max, and ω_min and updates of P_best and G_best; perform attribute reduction by initializing and updating reduct and redundant until POS_reduct(D) = POS_C(D) and outputting RED_D(C); extract rules; determine s, l, m; train the RS-BPNN; output the classification system).

5.1. The Construction of the Decision Table. Due to the poor working conditions of coal mining, many parameters relate to the running status of the shearer, mainly including cutting motor current, cutting motor temperature, traction motor current, traction motor temperature, traction speed, tilt angle of the shearer body, tilt angle of the rocker, and vibration frequency of the rocker transmission gearbox, marked as C1, C2, C3, C4, C5, C6, C7, and C8, respectively. According to the information database acquired from the 11070 coal face in the no. 13 Mine of Pingdingshan Coal Industry Group Co., 200 groups of shearer running status datasets were rearranged as shown in Table 1. This "running status" referred to the cutting status of the shearer rocker and could be expressed by the ratio of the actual cutting load to the rated load.
Therefore, in decision table DT, the condition attribute set and decision attribute set could be expressed as C = {C1, C2, C3, C4, C5, C6, C7, C8} and D = {d} (d denotes the shearer running status). The values of the decision attribute were discretized based on the following assumptions: if the ratio of actual cutting load to rated load of the shearer was greater than 1.6, d = 4; if the ratio was in the interval (1.2, 1.6], d = 3; if it was in the interval (0.8, 1.2], d = 2; and if it was less than or equal to 0.8, d = 1. The discretized values of the condition attributes could be determined through the algorithm described in Section 4.2.
The condition attributes to be discretized were C1–C8, so μ = 8. The other parameters of PSO were initialized as follows: M = 100, N_max = 200, γ_min = 0.9, v_max = 0.6, c_1 = c_2 = 1.6, r is a random number from [0, 1], ω_max = 0.9, and ω_min = 0.4. As the breakpoints number h had a great influence on the dependency degree γ_C(D) of the discretized decision table, the dependency degree and the simulation time were compared for various values of h. The comparison results are shown in Figure 4.
From Figure 4 it was observed that the dependency degree and the simulation time both showed an ascending tendency as the breakpoints number h increased. The dependency degree did not increase significantly, while the simulation time increased obviously, when h ≥ 2. So h was set equal to 2, and γ_C(D) = 0.98. The breakpoints of each attribute value are shown in Table 2. The breakpoints were applied to discretize the parameter values in Table 1: if a value was in the interval (0, Br1], the discretized attribute value was labeled 1; if the value was in (Br1, Br2), the discretized attribute value was labeled 2; otherwise it was labeled 3. The discretized decision table is shown in Table 3.

Figure 4: The change curves of dependency degree and simulation time (dependency degree rises from about 0.82 toward 1 and simulation time from 0 to about 200 s as h goes from 1 to 5).

Figure 5: Attribute reduction.
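The two-breakpoint labeling rule can be stated as a small function; the name `label` is our own, and the breakpoints used in the example are the C1 values from Table 2:

```python
# Sketch of the interval labeling: (0, Br1] -> 1, (Br1, Br2) -> 2, else 3.
def label(value: float, br1: float, br2: float) -> int:
    if value <= br1:
        return 1
    if value < br2:
        return 2
    return 3

# C1 breakpoints from Table 2: Br1 = 48.56, Br2 = 65.25
assert label(45.12, 48.56, 65.25) == 1   # sample 1 in Table 3
assert label(50.25, 48.56, 65.25) == 2   # sample 3
assert label(89.23, 48.56, 65.25) == 3   # sample 10
```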
5.2. Relative Reduct and Rules Extraction. Through the attribute reduction algorithm presented in Section 4.3, the function "CORE(C, D)" was invoked to obtain the relative core of the decision table, CORE_D(C) = {C1, C8}, as shown in Figure 5. The minimum relative reduct of the decision table, RED_D(C) = {C1, C3, C5, C6, C8}, was then determined by invoking the function "Reduct".
In the module "Rough Set for Attribution Reduction", a new decision table of size 200 × 6 was established from the attributes C1, C3, C5, C6, C8 and the decision attribute d. The function "Refresh(DT)" was called to reject the repeated objects in this new decision table, and a reduced decision table containing 79 objects was finally obtained, as shown in Figure 6. Subsequently, the decision rules were extracted through the function "Extract rules", which was based on the value reduction algorithm, and 25 rules were obtained, as shown in Figure 6. The extracted rules reveal the potential rules (knowledge) in the dataset. For example, the first rule "123*11" in Figure 6 illustrates that if the cutting motor current C1 ∈ (0, 48.56], the traction motor current C3 ∈ (82.37, 95.42), the traction speed C5 ∈ [3.24, +∞), the tilt angle of the shearer body C6 is an arbitrary value, and the vibration frequency of the rocker transmission gearbox C8 ∈ (0, 651.28], then the running status class is 1, which is in accordance with the engineering practice. Moreover, the RS-BPNN classifier in fact diagnoses the status of objects according to these potential rules (knowledge), and the structure of the network can be constructed reasonably based on the number of rules, so as to enhance the classification accuracy.

Figure 6: Rules extraction.

Figure 7: The structure of the RS-BPNN classifier (an input layer with 5 nodes, two hidden layers with 5 nodes each, and an output layer with 1 node, each layer with weights ω and thresholds b).
5.3. The Structure and Testing of the RS-BPNN Classifier. Because 25 rules (knowledge) were obtained from the dataset, the RS-BPNN was constructed with two hidden layers whose numbers of neurons were 5 × 5, which could completely contain these rules (knowledge). The number of input layer nodes was s = 5 and the number of output layer nodes was m = 1. The other network parameters were determined as follows: ζ_max = 0.9, ζ_min = 0.4, t_max = 500; the initial connection weights were assigned somewhat random values. The structure of the RS-BPNN classifier is shown in Figure 7.
The reduced decision table, as the training set, was presented to the network several times. The testing set, composed of 40 samples extracted randomly from the information database, was not seen by the network during the training phase; it was used only to test the generalization of the trained neural network, as shown in Table 4.
Table 1: The running status of the shearer and its corresponding parameters.

Samples | C1 (A) | C2 (°C) | C3 (A) | C4 (°C) | C5 (m/min) | C6 (°) | C7 (°) | C8 (Hz) | Running status
1 | 45.12 | 78.74 | 65.46 | 88.16 | 3.51 | 2.52 | 45.12 | 628.85 | 0.78
2 | 48.19 | 75.22 | 70.78 | 84.28 | 3.42 | 3.62 | 53.22 | 683.46 | 0.79
3 | 50.25 | 80.19 | 85.14 | 91.25 | 2.88 | 5.36 | 40.22 | 705.76 | 0.82
4 | 45.26 | 82.43 | 62.45 | 84.15 | 3.12 | 2.85 | 49.36 | 642.14 | 0.76
5 | 64.28 | 77.16 | 90.45 | 91.41 | 2.56 | 2.26 | 58.55 | 823.29 | 1.22
6 | 46.22 | 85.24 | 68.55 | 80.46 | 3.26 | 2.86 | 58.46 | 711.24 | 0.83
7 | 72.56 | 85.12 | 102.42 | 102.36 | 2.76 | 10.26 | 38.41 | 881.25 | 1.17
8 | 62.33 | 74.24 | 92.46 | 84.46 | 3.56 | 8.26 | 60.27 | 821.46 | 1.12
9 | 44.23 | 89.19 | 63.15 | 85.78 | 3.82 | 3.22 | 46.25 | 578.26 | 0.75
10 | 89.23 | 81.24 | 120.36 | 86.14 | 2.01 | 1.05 | 50.36 | 1035.48 | 1.63
… | … | … | … | … | … | … | … | … | …
198 | 49.22 | 75.24 | 71.22 | 80.25 | 3.46 | 4.26 | 45.26 | 542.16 | 0.72
199 | 68.65 | 87.16 | 118.64 | 90.46 | 4.66 | 4.36 | 49.35 | 845.29 | 1.36
200 | 50.22 | 78.15 | 90.46 | 95.44 | 3.22 | 7.22 | 44.21 | 682.64 | 1.11
Table 2: The corresponding breakpoints of each attribute value at h = 2.

Breakpoints | C1 (A) | C2 (°C) | C3 (A) | C4 (°C) | C5 (m/min) | C6 (°) | C7 (°) | C8 (Hz)
Br1 | 48.56 | 81.49 | 82.37 | 84.79 | 2.41 | 4.16 | 47.35 | 651.28
Br2 | 65.25 | 85.78 | 95.42 | 90.25 | 3.24 | 7.38 | 54.25 | 847.46
Table 3: The discretized decision table.

U | C1 | C2 | C3 | C4 | C5 | C6 | C7 | C8 | d
1 | 1 | 1 | 1 | 2 | 3 | 1 | 1 | 1 | 1
2 | 1 | 1 | 1 | 1 | 3 | 1 | 2 | 2 | 1
3 | 2 | 1 | 2 | 3 | 2 | 2 | 1 | 2 | 2
4 | 1 | 2 | 1 | 1 | 2 | 1 | 2 | 1 | 1
5 | 3 | 1 | 2 | 3 | 2 | 1 | 3 | 2 | 3
6 | 1 | 2 | 1 | 1 | 3 | 1 | 3 | 2 | 2
7 | 3 | 2 | 3 | 3 | 2 | 3 | 1 | 3 | 2
8 | 2 | 1 | 2 | 1 | 3 | 3 | 3 | 2 | 2
9 | 1 | 3 | 1 | 2 | 3 | 1 | 1 | 1 | 2
10 | 3 | 1 | 3 | 2 | 1 | 1 | 2 | 3 | 4
… | … | … | … | … | … | … | … | … | …
198 | 2 | 1 | 1 | 1 | 3 | 2 | 1 | 1 | 1
199 | 3 | 3 | 3 | 3 | 3 | 2 | 2 | 2 | 2
200 | 2 | 1 | 2 | 3 | 2 | 2 | 1 | 2 | 2
After the training phase, the RS-BPNN classifier was obtained. In order to test the performance of this classifier conveniently, a testing interface was designed, as shown in Figure 8. The testing set was imported with the "Import samples" button and processed with the "Discretize" and "Reduce attribution" functions. The prediction results of the RS-BPNN classifier were output through the "Classify" function in Figure 8. The contrast between predicted class and actual class is shown clearly in Figure 9.
Figure 8: The testing interface of the RS-BPNN classifier.

When the classification accuracy of RS-BPNN was computed in Figure 8, the output of the classifier could not be exactly equal to the desired output. Therefore, the output was allowed to differ from the desired value: if the difference between the classifier output and the required decision value was less than some preset value, it was regarded as a correct one. This tolerance margin was decided during the simulation evaluation and was finally set to 5% of the desired value. As seen from Figures 8 and 9, the classification accuracy was 90.00% and the average classification error was 2.02%. The present model showed higher accuracy and lower error for the classification of shearer running status. The testing results showed that the proposed classifier was satisfactory and could be used in engineering applications in the future.
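The 5% tolerance rule used to score the classifier can be sketched as follows (the function name `accuracy` is our own):

```python
# Sketch of the tolerance-based scoring: an output counts as correct when
# it differs from the desired value by less than 5% of that value.
def accuracy(outputs, desired, tol=0.05):
    correct = sum(abs(o - y) < tol * abs(y)
                  for o, y in zip(outputs, desired))
    return correct / len(desired)

# e.g. 1.02 vs 1.0 is within 5%, but 2.5 vs 2.0 is not
assert accuracy([1.02, 2.5], [1.0, 2.0]) == 0.5
```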
5.4. Discussion. To evaluate the classification capabilities of different types of neural network models (WNN, short for wavelet neural network; RS-WNN, short for rough sets coupled with wavelet neural network; BP-NN, short for back-propagation neural network; and RS-BPNN, short for rough
Table 4: The testing samples for the classifier.

Samples | C1 (A) | C2 (°C) | C3 (A) | C4 (°C) | C5 (m/min) | C6 (°) | C7 (°) | C8 (Hz) | d
1 | 75.24 | 71.38 | 128.45 | 85.46 | 2.00 | 1.02 | 52.45 | 1038.44 | 1.65
2 | 89.46 | 78.25 | 110.46 | 80.57 | 2.86 | 2.15 | 49.21 | 992.46 | 1.78
3 | 55.46 | 80.19 | 81.14 | 90.25 | 2.89 | 3.36 | 56.22 | 810.76 | 1.08
4 | 75.26 | 82.43 | 62.45 | 84.15 | 3.12 | 3.55 | 52.46 | 900.47 | 1.39
5 | 56.28 | 78.16 | 100.45 | 90.41 | 2.56 | 8.26 | 58.55 | 523.29 | 0.68
6 | 49.25 | 86.24 | 68.55 | 80.46 | 3.26 | 3.06 | 60.46 | 711.24 | 0.72
7 | 60.56 | 80.12 | 112.42 | 92.36 | 2.76 | 7.48 | 48.41 | 871.25 | 0.92
8 | 70.33 | 70.24 | 96.46 | 80.46 | 3.76 | 3.26 | 60.27 | 921.46 | 1.22
9 | 74.23 | 85.19 | 113.15 | 79.78 | 2.32 | 2.25 | 56.25 | 1000.46 | 1.66
10 | 89.23 | 76.24 | 120.36 | 85.14 | 3.31 | 4.25 | 55.46 | 1035.48 | 1.63
… | … | … | … | … | … | … | … | … | …
38 | 58.46 | 74.24 | 90.22 | 80.25 | 2.46 | 3.26 | 55.26 | 849.16 | 1.35
39 | 65.02 | 87.16 | 118.64 | 90.46 | 5.06 | 9.24 | 48.26 | 1025.29 | 1.75
40 | 50.22 | 85.15 | 81.25 | 89.44 | 3.34 | 3.46 | 44.25 | 672.64 | 0.72
Table 5: Classification performance of the four models of neural networks.

Model | Training accuracy (%) | Testing accuracy (%) | Classification error (%) | Classification time (s)
WNN | 90.00 | 82.50 | 3.56 | 23.22
RS-WNN | 92.50 | 87.50 | 2.66 | 18.48
BP-NN | 90.00 | 85.00 | 2.86 | 22.26
RS-BPNN | 95.00 | 90.00 | 2.02 | 18.33
Table 6: Classification results of the four methods.

Method | Classification accuracy (%), Class I | Class II | Class III
WNN | 93.33 | 86.67 | 80.00
RS-WNN | 100 | 93.33 | 86.67
BP-NN | 93.33 | 80.00 | 86.67
RS-BPNN | 100 | 93.33 | 100
sets coupled with back-propagation neural network), their best architectures and effective training parameters should be found. In order to obtain the best model, both inputs and outputs should be normalized. The number of nodes in the hidden layer of WNN and RS-WNN was equal to the number of wavelet bases. If the number was too small, WNN/RS-WNN might not reflect the complex functional relationship between the input data and the output values; on the contrary, too large a number might create such a complex network that it could lead to very large output errors caused by overfitting of the training sample set.
Therefore, a search mechanism was needed to find the optimal number of hidden layer nodes for the WNN and RS-WNN models. In this study, various numbers of hidden nodes were checked to find the best one; WNN and RS-WNN yielded the best results when the number of hidden nodes was 8.
In this subsection, WNN, RS-WNN, BP-NN, and RS-BPNN were applied to the above simulation example. The configurations of the simulation environment for the four algorithms were uniform and in common with the above simulation example. Initially, the connection weights of each network were assigned somewhat random values. In order to avoid random error, the training input set was presented to the networks 100 times and the average values were calculated. The training accuracy, testing accuracy, classification error, and classification time of the four algorithms are shown in Table 5.
From the table it was observed that the proposed model had better classification capability and performance than the other neural network models in predicting the nonlinear, dynamic, and chaotic behaviors in the dataset, and the proposed model was proved to outperform the others. Furthermore, the comparison results suggest that the proposed model represents a good new method for classification and decision making, and that the new method can be treated as a promising tool for extracting rules from datasets in industrial fields.
In order to illustrate the superiority of the proposed method, the Iris data set [23], a benchmark data set from the UCI database, was used to verify the classifiers based on the above four types of neural networks; the compared results are shown in Table 6. The sample data were first reduced based on the attribute reduction algorithm and then used to train the neural networks, so the networks coupled with rough sets (RS-WNN and RS-BPNN) could acquire better classification accuracy than the single networks (WNN and BP-NN). The rules were extracted through the value reduction algorithm to construct the RS-BPNN structure, so the classification accuracy of the classifier based on RS-BPNN was higher than that of the RS-WNN classifier.
Figure 9: The contrast of prediction class and actual class: (a) actual and predicted running status classes for the 40 testing samples; (b) classification error (%).
6 Conclusions
This paper proposed a novel classifier model based on BP neural network and rough sets theory to be applied to nonlinear systems. The decision table was constructed and discretized reasonably through the PSO algorithm. The decision rules were extracted by the use of a value reduction algorithm, which provided the basis for the network structure. In order to verify its feasibility and superiority, the proposed approach was applied to a classification problem from an industrial example. The results of comparison simulations showed that the proposed approach could generate more accurate classifications at less classification time than other neural network approaches and the rough sets based approach. The classification performance of the proposed model demonstrated that this method could be extended to other types of classification problems, such as gear fault identification, lithology recognition, and coal-rock interface recognition.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgments
The supports of the National High Technology Research and Development Program of China (no. 2013AA06A411), the National Natural Science Foundation of China (no. 51005231), the Xuyi Mine Equipment and Materials R&D Center Innovation Fund Project (no. CXJJ201306), and the Jiangsu Province National Engineering Research Center Cultivation Base for Coal Intelligent Mining Equipment in carrying out this research are gratefully acknowledged.
References
[1] W Zhu and F Wang ldquoOn three types of covering-based roughsetsrdquo IEEE Transactions on Knowledge and Data Engineeringvol 19 no 8 pp 1131ndash1143 2007
[2] Z Huang Z Yu and Y Zheng ldquoSimple method for predictingdrag characteristics of the wells turbinerdquoChinaOcean Engineer-ing vol 20 no 3 pp 473ndash482 2006
[3] S G Sklavos and A K Moschovakis ldquoNeural network sim-ulations of the primate oculomotor system IV A distributedbilateral stochastic model of the neural integrator of the verticalsaccadic systemrdquo Biological Cybernetics vol 86 no 2 pp 97ndash109 2002
[4] I G Belkov I N Malyshev and S M Nikulin ldquoCreating amodel of passive electronic components using a neural networkapproachrdquo Automation and Remote Control vol 74 no 2 pp275ndash281 2013
[5] J S Chen ldquoNeural network-based modelling and thermally-induced spindle errorsrdquo International Journal of AdvancedManufacturing Technology vol 12 no 4 pp 303ndash308 1996
[6] A D Cartwright I S Curthoys and D P D GilchristldquoTestable predictions from realistic neural network simulationsof vestibular compensation integrating the behavioural andphysiological datardquo Biological Cybernetics vol 81 no 1 pp 73ndash87 1999
[7] B Zou X Y Liao Y N Zeng and L X Huang ldquoAn improvedBP neural network based on evaluating and forecasting modelof water quality in Second Songhua River of Chinardquo ChineseJournal of Geochemistry vol 25 no 1 pp 167ndash170 2006
[8] Y X Shao Q Chen and D M Zhang ldquoThe application ofimproved BP neural network algorithm in lithology recogni-tionrdquo in Advances in Computation and Intelligence vol 5370 ofLecture Notes in Computer Science pp 342ndash349 2008
[9] W H Zhou and S Q Xiong ldquoOptimization of BP neuralnetwork classifier using genetic algorithmrdquo in Intelligence Com-putation and Evolutionary Computation vol 180 of Advances inIntelligent Systems and Computing pp 599ndash605 2013
Journal of Applied Mathematics 11
[27] Y. Hassan and E. Tazaki, "Emergent rough set data analysis," in Transactions on Rough Sets II: Rough Sets and Fuzzy Sets, J. F. Peters, Ed., pp. 343-361, Springer, Berlin, Germany, 2005.
[28] D. B. Chen and C. X. Zhao, "Particle swarm optimization with adaptive population size and its application," Applied Soft Computing Journal, vol. 9, no. 1, pp. 39-48, 2009.
[29] K. Thangavel and A. Pethalakshmi, "Dimensionality reduction based on rough set theory: a review," Applied Soft Computing Journal, vol. 9, no. 1, pp. 1-12, 2009.
[30] J. M. Wei, "Rough set based approach to selection of node," International Journal of Computational Cognition, vol. 1, no. 2, pp. 25-40, 2003.
[31] E. Xu, L. S. Shao, Y. Z. Zhang, F. Yang, and H. Li, "Method of rule extraction based on rough set theory," Computer Science (Chongqing), vol. 38, pp. 232-235, 2011.
[32] L. Y. Chang, G. Y. Wang, and Y. Wu, "An approach for attribute reduction and rule generation based on rough set theory," Journal of Software (Beijing), vol. 10, pp. 1206-1211, 1999.
4 Journal of Applied Mathematics
(1) Preprocess the historical dataset to obtain the sample data, which is the precondition for applying rough sets and the BP neural network. During preprocessing, the historical dataset is discretized through the PSO algorithm to construct the decision table.
(2) Simplify the input of the neural network, including the input dimension and the number of training samples, through the attribute reduction algorithm based on significance degree. Deleting identical rows in the decision table simplifies the training samples, and eliminating superfluous columns (condition attributes) reduces the network input dimension. The decision table is thus reduced using some reduct, or the union of several calculated reducts; that is, the attributes not belonging to the chosen reducts are removed from the table.
(3) The BP neural network integrated with rough sets is used to build the network over the reduced set of data. Decision rules are extracted from the reduced decision table through a value reduction algorithm of rough sets and serve as the basis of the network structure.
(4) Perform network learning for the parameters ω_ij, ω_jk, a_j, and b_k. Repeat this step until there is no significant change in the output of the network. The system construction process is shown in Figure 2.
4.2. The Discretization Algorithm for Decision Table. Because rough sets theory cannot directly process continuous attribute values, the decision table composed of continuous values must first be discretized. The positions of the breakpoints must therefore be selected appropriately, so that discretizing the attribute values yields fewer breakpoints, a larger dependency degree between attributes, and less redundant information.
In this paper, the basic particle swarm optimization (PSO) algorithm described in [28] is employed to optimize the positions of the breakpoints. In PSO, a swarm of particles is initialized randomly in the solution space, and each particle is updated by a certain rule to explore the optimal solution after several iterations. Particles are updated by tracking two "extremums" in each iteration: one is the optimal solution P_best found by the particle itself, and the other is the optimal solution G_best found by the whole swarm. The specific iteration formulas can be expressed as follows:
v_i^(n+1) = ω^n v_i^n + c_1 r (P_best − x_i^n) + c_2 r (G_best − x_i^n),

x_i^(n+1) = x_i^n + v_i^(n+1),     (11)
where i = 1, 2, ..., M; M is the number of particles; n is the current iteration number; c_1 and c_2 are the acceleration factors; r is a random number between 0 and 1; x_i^n is the current position of particle i; v_i^n is the current update speed of particle i; and ω^n is the current inertia weight, which can be updated by the following equation:
ω^n = (ω_max − ω_min)(N_max − n) / N_max + ω_min,     (12)
where ω_max and ω_min are the maximum and minimum of the inertia weight and N_max is the maximum number of iterations.
The main discretization steps for the decision table through PSO can be described as follows.
(1) Normalize the attribute values and initialize the breakpoints number h, the maximum number of iterations N_max, the minimum dependency degree between condition attributes and decision attribute γ_min, the number of particles M, and the initial position x_i^0 and initial velocity v_i^0, which are h × μ matrices (μ is the number of attributes).
(2) Discretize the attribute values with the position of particle i and calculate the dependency degree γ_i as the fitness value of each particle to initialize P_best and G_best.
(3) Update x_i, v_i (v_i ≤ v_max), and ω^n. Discretize the attribute values and calculate the fitness value of each particle so as to update P_best and G_best.
(4) If n > N_max or γ_i ≥ γ_min, the iteration ends and G_best or x_i is output; otherwise, go to step (3).
(5) Set different values of h as needed and return to step (1) to search for the optimal particles; then compare the optimal particles and select the better one as the basis of discretization for the decision table. The more detailed discretization process is elaborated in Section 4.5.
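The PSO update rules (11)-(12) and the search loop of steps (1)-(5) can be sketched as follows. This is a minimal sketch, not the paper's implementation: the fitness callable stands in for the dependency degree γ_i computed from a real decision table, and all names and parameter defaults are illustrative.

```python
import random

def pso_optimize(fitness, dim, m=20, n_max=150, c1=1.6, c2=1.6,
                 w_max=0.9, w_min=0.4, v_max=0.6, target=None):
    """Basic PSO over [0, 1]^dim (attribute values assumed normalized, as
    in step (1)); maximizes `fitness` and stops early once `target`
    (standing in for the minimum dependency degree) is reached."""
    x = [[random.random() for _ in range(dim)] for _ in range(m)]
    v = [[0.0] * dim for _ in range(m)]
    pbest = [xi[:] for xi in x]
    pfit = [fitness(xi) for xi in x]
    g = max(range(m), key=lambda i: pfit[i])
    gbest, gfit = pbest[g][:], pfit[g]
    for n in range(n_max):
        # inertia weight decay, Eq. (12)
        w = (w_max - w_min) * (n_max - n) / n_max + w_min
        for i in range(m):
            r1, r2 = random.random(), random.random()
            for d in range(dim):
                # velocity and position update, Eq. (11), with clamping
                v[i][d] = (w * v[i][d]
                           + c1 * r1 * (pbest[i][d] - x[i][d])
                           + c2 * r2 * (gbest[d] - x[i][d]))
                v[i][d] = max(-v_max, min(v_max, v[i][d]))
                x[i][d] = min(1.0, max(0.0, x[i][d] + v[i][d]))
            f = fitness(x[i])
            if f > pfit[i]:
                pfit[i], pbest[i] = f, x[i][:]
                if f > gfit:
                    gfit, gbest = f, x[i][:]
        if target is not None and gfit >= target:
            break  # early stop, analogous to gamma_i >= gamma_min
    return gbest, gfit
```

With the h × μ breakpoint matrix flattened into `dim`, `fitness` would discretize the decision table at the particle's breakpoints and return the dependency degree; any smooth function works as a stand-in for testing.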
4.3. Rough Sets for the Attribution Reduction. One of the fundamental steps in the proposed method is the reduction of pattern dimensionality through feature extraction and feature selection [10]. Rough sets theory provides tools for expressing inexact dependencies within data [29]. Features may be irrelevant (having no effect on the processing performance), and attribution reduction can discard the redundant information so as to classify more objects with high accuracy [30].
In this paper, the algorithm based on attribution significance degree is applied to the attribution reduction of the decision table. The specific reduction steps can be described as follows.
(1) Construct the decision table DT = (U, C ∪ D) and calculate the relative core CORE_D(C) of the original decision table. Initialize reduct = CORE_D(C) and the redundant attribute set redundant = C − CORE_D(C) = {α_1, α_2, ..., α_τ} (τ is the number of attributes in redundant).
(2) If POS_reduct(D) = POS_C(D) and, for every proper subset R ⊂ reduct, POS_R(D) ≠ POS_C(D), then the set reduct is a relative reduct of the decision table.
(3) Otherwise, calculate the significance degree of each attribute in redundant through the following equation:

sig(α_j, reduct, D) = γ_(reduct ∪ {α_j})(D) − γ_reduct(D),  j = 1, 2, ..., τ.     (13)
Figure 2: The system construction process (dataset → preprocess and discretize → construct original decision table → attribution reduction → reduced decision table → extract rules → build network → train network → optimum network → classification system).
The attribute in redundant with the maximum significance degree is marked as α_max. Update reduct and redundant through the following equations:

reduct = reduct ∪ {α_max},
redundant = redundant − {α_max}.     (14)
(4) Go to step (2); when POS_reduct(D) satisfies the conditions, output RED_D(C) = reduct.
Therefore, the irrelevant features are discarded and the DT is reduced. A reduced decision table is thus constructed and can be regarded as the training set used to optimize the structure of the RS-BPNN classifier.
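As a worked sketch, the dependency degree γ and the greedy selection by significance degree of Eq. (13) can be implemented on a toy decision table. This is an illustrative sketch only: for brevity it starts the reduct from the empty set rather than from the relative core CORE_D(C), and all function names are assumptions.

```python
def partition(table, attrs):
    """Indiscernibility classes: group object indices by their values on attrs."""
    blocks = {}
    for i, row in enumerate(table):
        blocks.setdefault(tuple(row[a] for a in attrs), []).append(i)
    return list(blocks.values())

def dependency(table, cond, dec):
    """gamma_cond(D): fraction of objects in the positive region, i.e. whose
    cond-equivalence class is consistent on the decision attribute."""
    pos = sum(len(b) for b in partition(table, cond)
              if len({table[i][dec] for i in b}) == 1)
    return pos / len(table)

def reduce_attributes(table, cond, dec):
    """Greedy reduction: repeatedly add the attribute with the maximum
    significance degree (Eq. (13)) until the full dependency is restored."""
    full = dependency(table, cond, dec)
    reduct, redundant = [], list(cond)
    while dependency(table, reduct, dec) < full:
        best = max(redundant,
                   key=lambda a: dependency(table, reduct + [a], dec))
        reduct.append(best)
        redundant.remove(best)
    return reduct
```

On a toy table whose decision is determined by the first condition attribute alone, the reduction returns just that attribute.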
4.4. Rough Sets for the Rules Extraction. The knowledge in a trained network is encoded in its connection weights and is distributed numerically throughout the whole structure of the neural network. For this knowledge to be useful in the context of a classification system, it has to be articulated in a symbolic form, usually as IF-THEN rules. Since a neural network is a "black box" for users, the rules implicit in the connection weights are difficult to understand, and extracting rules from a neural network is extremely tough because of the nonlinear and complicated data transformations conducted in the multiple hidden layers [31]. In this paper, the decision rules are instead extracted from the reduced decision table using the value reduction algorithm of rough sets [32]. This algorithm can be elaborated in the following steps.
Step 1. Check the condition attributes of the reduced decision table column by column. If deleting a column produces conflicting objects, preserve the original attribute value of those conflicting objects. If it produces no conflicts but yields duplicate objects, mark this attribute value as "*" for the duplicate objects and as "?" for the other objects.

Step 2. Delete any duplicate objects and check every object containing the mark "?". If the decision can be determined by the unmarked attribute values alone, change "?" to "*"; otherwise restore the original attribute value. If all condition attributes of a particular object are marked, restore the attribute values marked "?" to their original values.

Step 3. Delete the objects whose condition attributes are all marked "*", and delete any duplicate objects.

Step 4. If only one condition attribute value differs between two objects and this attribute value of one object is marked "*": if that object's decision can be determined by its unmarked attribute values, delete the other object; otherwise delete this object.
Each object in the value-reduced decision table represents one classification rule, and the attributes of an object that are not marked "*" make up the conditions of that rule. Moreover, this rule set provides the rational basis for the structure of the BP neural network.
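The steps above can be approximated by a greedy value-reduction pass: for each object, each condition value is tentatively replaced with the wildcard "*" and kept only if the generalized rule still covers no object of a different class. This condensed sketch reproduces the spirit, not every marking detail, of Steps 1-4; the function names and the toy table in the test are illustrative.

```python
def matches(rule_cond, obj_cond):
    """A rule condition matches an object when every non-wildcard value agrees."""
    return all(c == '*' or c == v for c, v in zip(rule_cond, obj_cond))

def extract_rules(table):
    """Greedy value reduction: wildcard each condition value of each object
    if the generalized rule stays consistent with the whole table."""
    rules = []
    for *cond, dec in table:          # last column is the decision attribute
        for j in range(len(cond)):
            saved, cond[j] = cond[j], '*'
            # revert if the generalized rule now covers a conflicting object
            if any(matches(cond, row[:-1]) and row[-1] != dec for row in table):
                cond[j] = saved
        rule = (tuple(cond), dec)
        if rule not in rules:         # drop duplicate rules
            rules.append(rule)
    return rules
```

Each returned pair (condition pattern, decision) corresponds to one IF-THEN rule; wildcard positions are the attributes marked "*" in the text.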
4.5. The Flowchart of the Proposed Approach. According to the above description of the classifier based on the integration of rough sets and BP neural network, the proposed approach is an iterative algorithm that can easily be coded on a computer; the flowchart is summarized in Figure 3.
5. Simulation Example
A classifier named RS-BPNN (short for rough sets with BP neural network) was set up and implemented in VC++ 6.0. In this section, an engineering application of shearer running status classification in a coal mine was put forward as a simulation example to verify the feasibility and effectiveness of the proposed approach. The classification capabilities of different types of neural network models were compared, and the proposed approach was shown to outperform the others.
5.1. The Construction of Decision Table. Because of the poor working conditions of coal mining, many parameters relate to the running status of the shearer, mainly including cutting motor current, cutting motor temperature, traction motor current, traction motor temperature, traction speed, tilt angle of the shearer body, tilt angle of the rocker, and vibration frequency of the rocker transmission gearbox, marked as C1, C2, C3, C4, C5, C6, C7, and C8, respectively. According to the information database acquired from the 11070 coal face in the No. 13 Mine of Pingdingshan Coal Industry Group Co., 200 groups of shearer running status data were rearranged as shown in Table 1. The "running status" referred to the cutting status of the shearer rocker and could be expressed by the ratio of actual cutting load to rated load.

Figure 3: The flowchart of the proposed approach.
Therefore, in the decision table DT, the condition attribute set and decision attribute set could be expressed as C = {C1, C2, C3, C4, C5, C6, C7, C8} and D = {d} (d denotes the shearer running status). The values of the decision attribute were discretized based on the following assumptions: if the ratio of actual cutting load to rated load of the shearer was greater than 1.6, then d = 4; if this ratio was in the interval (1.2, 1.6], then d = 3; if it was in (0.8, 1.2], then d = 2; and if it was less than 0.8, then d = 1. The discretized values of the condition attributes could be determined through the algorithm described in Section 4.2.
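The load-ratio thresholds for the decision attribute translate directly into code; a minimal sketch (the function name is illustrative, and a ratio of exactly 0.8 is treated as class 1):

```python
def decision_class(ratio):
    """Map the ratio of actual cutting load to rated load onto d in {1, 2, 3, 4}."""
    if ratio > 1.6:
        return 4
    if ratio > 1.2:   # interval (1.2, 1.6]
        return 3
    if ratio > 0.8:   # interval (0.8, 1.2]
        return 2
    return 1          # ratio below 0.8
```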
The condition attributes to be discretized were C1-C8, so μ = 8. The other parameters of PSO were initialized as follows: M = 100, N_max = 200, γ_min = 0.9, v_max = 0.6, c_1 = c_2 = 1.6, r is a random number from [0, 1], ω_max = 0.9, and ω_min = 0.4. As the breakpoints number h had a great influence on the dependency degree γ_C(D) of the discretized decision table, the dependency degree and the simulation time were compared when h was assigned various values. The comparison results are shown in Figure 4.
From Figure 4, it was observed that the dependency degree and the simulation time both showed an ascending tendency with the increase of the breakpoints number h. The dependency degree was not significantly increased, while the simulation time was obviously increased, when h ≥ 2. So h was set equal to 2, giving γ_C(D) = 0.98. The breakpoints of each attribute value were shown in Table 2. The breakpoints were applied to discretize the parameter values in Table 1: if a value was in the interval (0, Br1], the discretized attribute value was labeled 1; if the value was in the interval (Br1, Br2), the discretized attribute value was labeled 2; otherwise it was labeled 3. The discretized decision table was shown in Table 3.

Figure 4: The change curves of dependency degree and simulation time.

Figure 5: Attribution reduction.
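The interval labeling against the two breakpoints of Table 2 is a one-line decision per value; the helper name below is illustrative, and the rule follows the text: (0, Br1] → 1, (Br1, Br2) → 2, otherwise → 3.

```python
def discretize(value, br1, br2):
    """Label a continuous attribute value against its two breakpoints:
    (0, Br1] -> 1, (Br1, Br2) -> 2, [Br2, +inf) -> 3."""
    if value <= br1:
        return 1
    if value < br2:
        return 2
    return 3
```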
5.2. Relative Reduct and Rules Extraction. Through the attribution reduction algorithm presented in Section 4.3, the function "CORE(C, D)" was invoked to obtain the relative core of the decision table, CORE_D(C) = {C1, C8}, as shown in Figure 5. The minimum relative reduct of the decision table, RED_D(C) = {C1, C3, C5, C6, C8}, was then determined by invoking the function "Reduct".

In the module "Rough Set for Attribution Reduction", a new decision table of 200 × 6 was established from the attributes C1, C3, C5, C6, C8 and the decision attribute d. The function "Refresh (DT)" was called to reject the repeated objects in this new decision table, and a reduced decision table containing 79 objects was finally obtained, as shown in Figure 6. Subsequently, the decision rules were extracted through the function "Extract rules", which was based on the value reduction algorithm, and 25 rules were obtained, as shown in Figure 6. The extracted rules could reveal the potential rules (knowledge) in the dataset. For example, the first rule "1 2 3 * 1 → 1" in Figure 6 illustrates that if the cutting motor current C1 ∈ (0, 48.56], the traction motor current C3 ∈ (82.37, 95.42), the traction speed C5 ∈ [3.24, +∞), the tilt angle of the shearer body C6 is an arbitrary value, and the vibration frequency of the rocker transmission gearbox C8 ∈ (0, 651.28], then the running status class is 1, which is in accordance with the engineering practice. Moreover, the RS-BPNN classifier in fact diagnoses the status of objects according to these potential rules (knowledge), and the structure of the network can be constructed reasonably based on the number of rules so as to enhance the classification accuracy.

Figure 6: Rules extraction.

Figure 7: The structure of RS-BPNN classifier (input layer, two hidden layers of five neurons each, and one output).
5.3. The Structure and Testing of RS-BPNN Classifier. Because 25 rules (knowledge) were obtained from the dataset, the RS-BPNN was constructed with double hidden layers, and the number of neurons was 5 × 5, which could completely contain these rules (knowledge). The number of input layer nodes was s = 5 and the number of output layer nodes was m = 1. The other parameters of the network were determined as follows: ζ_max = 0.9, ζ_min = 0.4, t_max = 500; the initial connection weights were assigned somewhat random values. The structure of the RS-BPNN classifier is shown in Figure 7.
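The 5-5-5-1 topology of Figure 7 can be sketched as a plain feedforward pass. This is a sketch under stated assumptions: sigmoid activations on every layer, random initial weights standing in for the trained values, and an illustrative class name; the BP training loop is omitted.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class RSBPNN:
    """Forward pass for the 5-5-5-1 network of Figure 7; weights here are
    random placeholders, not the trained values from the paper."""
    def __init__(self, sizes=(5, 5, 5, 1), seed=0):
        rng = np.random.default_rng(seed)
        # one (fan_out, fan_in) weight matrix and one bias vector per layer
        self.weights = [rng.standard_normal((m, n)) * 0.5
                        for n, m in zip(sizes[:-1], sizes[1:])]
        self.biases = [rng.standard_normal(m) * 0.5 for m in sizes[1:]]

    def forward(self, x):
        a = np.asarray(x, dtype=float)
        for w, b in zip(self.weights, self.biases):
            a = sigmoid(w @ a + b)   # sigmoid assumed for every layer
        return a
```

A 5-dimensional input (one value per reduced attribute C1, C3, C5, C6, C8) yields a single output in (0, 1), which training would shape toward the desired status class.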
The reduced decision table, as the training set, was presented to the network several times. The testing set, composed of 40 samples extracted randomly from the information database, was not seen by the network during the training phase; it was used only for testing the generalization of the neural network after training, as shown in Table 4.
Table 1: The running status of shearer and its corresponding parameters.

Samples  C1 (A)  C2 (°C)  C3 (A)   C4 (°C)  C5 (m/min)  C6 (°)  C7 (°)  C8 (Hz)   Running status
1        45.12   78.74    65.46    88.16    3.51        2.52    45.12   628.85    0.78
2        48.19   75.22    70.78    84.28    3.42        3.62    53.22   683.46    0.79
3        50.25   80.19    85.14    91.25    2.88        5.36    40.22   705.76    0.82
4        45.26   82.43    62.45    84.15    3.12        2.85    49.36   642.14    0.76
5        64.28   77.16    90.45    91.41    2.56        2.26    58.55   823.29    1.22
6        46.22   85.24    68.55    80.46    3.26        2.86    58.46   711.24    0.83
7        72.56   85.12    102.42   102.36   2.76        10.26   38.41   881.25    1.17
8        62.33   74.24    92.46    84.46    3.56        8.26    60.27   821.46    1.12
9        44.23   89.19    63.15    85.78    3.82        3.22    46.25   578.26    0.75
10       89.23   81.24    120.36   86.14    2.01        1.05    50.36   1035.48   1.63
...
198      49.22   75.24    71.22    80.25    3.46        4.26    45.26   542.16    0.72
199      68.65   87.16    118.64   90.46    4.66        4.36    49.35   845.29    1.36
200      50.22   78.15    90.46    95.44    3.22        7.22    44.21   682.64    1.11
Table 2: The corresponding breakpoints of each attribute value at h = 2.

Breakpoints  C1 (A)  C2 (°C)  C3 (A)  C4 (°C)  C5 (m/min)  C6 (°)  C7 (°)  C8 (Hz)
Br1          48.56   81.49    82.37   84.79    2.41        4.16    47.35   651.28
Br2          65.25   85.78    95.42   90.25    3.24        7.38    54.25   847.46
Table 3: The discretized decision table.

U    C1  C2  C3  C4  C5  C6  C7  C8  d
1    1   1   1   2   3   1   1   1   1
2    1   1   1   1   3   1   2   2   1
3    2   1   2   3   2   2   1   2   2
4    1   2   1   1   2   1   2   1   1
5    3   1   2   3   2   1   3   2   3
6    1   2   1   1   3   1   3   2   2
7    3   2   3   3   2   3   1   3   2
8    2   1   2   1   3   3   3   2   2
9    1   3   1   2   3   1   1   1   2
10   3   1   3   2   1   1   2   3   4
...
198  2   1   1   1   3   2   1   1   1
199  3   3   3   3   3   2   2   2   2
200  2   1   2   3   2   2   1   2   2
After the training phase, the RS-BPNN classifier was obtained. In order to test the performance of this classifier conveniently, a testing interface was designed, as shown in Figure 8. The testing set was imported with the "Import samples" button and processed with the "Discretize" and "Reduce attribution" functions. The prediction results of the RS-BPNN classifier were output through the "Classify" function in Figure 8. The contrast between the prediction class and the actual class is shown clearly in Figure 9.

When the classification accuracy of RS-BPNN was computed in Figure 8, the output of the classifier could not be exactly equal to the desired output. Therefore, the output was allowed to differ from the desired value: if the difference between the classifier output and the required decision value was less than some preset value, the result was regarded as correct. This tolerance margin was decided during simulation evaluation, and finally it was set to 5% of the desired value. As seen from Figures 8 and 9, the classification accuracy was 90.00% and the average classification error was 2.02%. The present model showed high accuracy and low error for the classification of shearer running status. The testing results showed that the proposed classifier was satisfactory and could be used in engineering applications in the future.

Figure 8: The testing interface of RS-BPNN classifier.
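The 5% tolerance rule used to score the classifier can be stated as a two-line helper (a sketch; the function name is illustrative):

```python
def tolerance_accuracy(pred, actual, tol=0.05):
    """A prediction counts as correct when it lies within `tol` (5% of the
    desired value, as in Section 5.3) of the actual class value."""
    hits = sum(abs(p - a) <= tol * a for p, a in zip(pred, actual))
    return hits / len(actual)
```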
5.4. Discussion. To evaluate the classification capabilities of the different types of neural network models (WNN, short for wavelet neural network; RS-WNN, short for rough sets coupled with wavelet neural network; BP-NN, short for back-propagation neural network; and RS-BPNN, short for rough sets coupled with back-propagation neural network), their best architectures and effective training parameters should be found. In order to obtain the best model, both inputs and outputs should be normalized. The number of nodes in the hidden layer of WNN and RS-WNN was equal to the number of wavelet bases. If this number is too small, WNN/RS-WNN may not reflect the complex function relationship between the input data and the output value. On the contrary, a large number may create such a complex network that it might lead to a very large output error caused by overfitting of the training sample set.

Table 4: The testing samples for the classifier.

Samples  C1 (A)  C2 (°C)  C3 (A)   C4 (°C)  C5 (m/min)  C6 (°)  C7 (°)  C8 (Hz)   d
1        75.24   71.38    128.45   85.46    2.00        1.02    52.45   1038.44   1.65
2        89.46   78.25    110.46   80.57    2.86        2.15    49.21   992.46    1.78
3        55.46   80.19    81.14    90.25    2.89        3.36    56.22   810.76    1.08
4        75.26   82.43    62.45    84.15    3.12        3.55    52.46   900.47    1.39
5        56.28   78.16    100.45   90.41    2.56        8.26    58.55   523.29    0.68
6        49.25   86.24    68.55    80.46    3.26        3.06    60.46   711.24    0.72
7        60.56   80.12    112.42   92.36    2.76        7.48    48.41   871.25    0.92
8        70.33   70.24    96.46    80.46    3.76        3.26    60.27   921.46    1.22
9        74.23   85.19    113.15   79.78    2.32        2.25    56.25   1000.46   1.66
10       89.23   76.24    120.36   85.14    3.31        4.25    55.46   1035.48   1.63
...
38       58.46   74.24    90.22    80.25    2.46        3.26    55.26   849.16    1.35
39       65.02   87.16    118.64   90.46    5.06        9.24    48.26   1025.29   1.75
40       50.22   85.15    81.25    89.44    3.34        3.46    44.25   672.64    0.72

Table 5: Classification performance of the four models of neural networks.

The models  Training accuracy (%)  Testing accuracy (%)  Classification error (%)  Classification time (s)
WNN         90.00                  82.50                 3.56                      23.22
RS-WNN      92.50                  87.50                 2.66                      18.48
BP-NN       90.00                  85.00                 2.86                      22.26
RS-BPNN     95.00                  90.00                 2.02                      18.33

Table 6: Classification results of the four methods (classification accuracy, %).

The methods  Class I  Class II  Class III
WNN          93.33    86.67     80.00
RS-WNN       100      93.33     86.67
BP-NN        93.33    80.00     86.67
RS-BPNN      100      93.33     100
Therefore, a search mechanism was needed to find the optimal number of hidden nodes for the WNN and RS-WNN models. In this study, various numbers of hidden nodes were checked to find the best one; WNN and RS-WNN yielded the best results when the number of hidden nodes was 8.
In this subsection, WNN, RS-WNN, BP-NN, and RS-BPNN were applied to the problem of the above simulation example. The configurations of the simulation environment for the four algorithms were uniform and in common with the above simulation example. Initially, the connection weights of each network were assigned somewhat random values. In order to avoid random error, the training input set was presented to the networks 100 times and the average values were calculated. The training accuracy, testing accuracy, classification error, and classification time of the four algorithms are shown in Table 5.
From the table, it was observed that the proposed model had a better classification capability and better performance than the other neural network models in predicting the nonlinear, dynamic, and chaotic behaviors in the dataset, and the proposed model was proved to outperform the others. Furthermore, the comparison results suggest that the proposed model represents a good new method for classification and decision making, and that it can be treated as a promising tool for extracting rules from datasets in industrial fields.
In order to illustrate the superiority of the proposed method, the Iris data set [23], a benchmark data set from the UCI database, was used to verify the classifiers based on the above four types of neural networks; the compared results are shown in Table 6. The sample data were first reduced based on the attribution reduction algorithm and then used to train the neural networks, so the networks coupled with rough sets (RS-WNN and RS-BPNN) could acquire better classification accuracy than the single networks (WNN and BP-NN). The rules were extracted through the value reduction algorithm to construct the RS-BPNN structure, so the classification accuracy of the classifier based on RS-BPNN was higher than that of the RS-WNN classifier.
Figure 9: The contrast of prediction class and actual class: (a) actual class and prediction class for the 40 testing samples; (b) classification error (%).
6. Conclusions

This paper proposed a novel classifier model based on BP neural network and rough sets theory to be applied in nonlinear systems. The decision table was constructed and discretized reasonably through the PSO algorithm. The decision rules were extracted by the value reduction algorithm, which provided the basis for the network structure. In order to verify its feasibility and superiority, the proposed approach was applied to a classification problem from an industrial example. The comparison simulations showed that the proposed approach could generate more accurate classifications and cost less classification time than other neural network approaches and the rough-sets-based approach. The classification performance of the proposed model demonstrated that this method could be extended to other types of classification problems, such as gear fault identification, lithology recognition, and coal-rock interface recognition.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgments
The supports of the National High Technology Research and Development Program of China (no. 2013AA06A411), the National Natural Science Foundation of China (no. 51005231), the Xuyi Mine Equipment and Materials R&D Center Innovation Fund Project (no. CXJJ201306), and the Jiangsu Province National Engineering Research Center Cultivation Base for Coal Intelligent Mining Equipment in carrying out this research are gratefully acknowledged.
References
[1] W. Zhu and F. Wang, "On three types of covering-based rough sets," IEEE Transactions on Knowledge and Data Engineering, vol. 19, no. 8, pp. 1131-1143, 2007.
[2] Z. Huang, Z. Yu, and Y. Zheng, "Simple method for predicting drag characteristics of the wells turbine," China Ocean Engineering, vol. 20, no. 3, pp. 473-482, 2006.
[3] S. G. Sklavos and A. K. Moschovakis, "Neural network simulations of the primate oculomotor system IV: a distributed bilateral stochastic model of the neural integrator of the vertical saccadic system," Biological Cybernetics, vol. 86, no. 2, pp. 97-109, 2002.
[4] I. G. Belkov, I. N. Malyshev, and S. M. Nikulin, "Creating a model of passive electronic components using a neural network approach," Automation and Remote Control, vol. 74, no. 2, pp. 275-281, 2013.
[5] J. S. Chen, "Neural network-based modelling and thermally-induced spindle errors," International Journal of Advanced Manufacturing Technology, vol. 12, no. 4, pp. 303-308, 1996.
[6] A. D. Cartwright, I. S. Curthoys, and D. P. D. Gilchrist, "Testable predictions from realistic neural network simulations of vestibular compensation: integrating the behavioural and physiological data," Biological Cybernetics, vol. 81, no. 1, pp. 73-87, 1999.
[7] B. Zou, X. Y. Liao, Y. N. Zeng, and L. X. Huang, "An improved BP neural network based on evaluating and forecasting model of water quality in Second Songhua River of China," Chinese Journal of Geochemistry, vol. 25, no. 1, pp. 167-170, 2006.
[8] Y. X. Shao, Q. Chen, and D. M. Zhang, "The application of improved BP neural network algorithm in lithology recognition," in Advances in Computation and Intelligence, vol. 5370 of Lecture Notes in Computer Science, pp. 342-349, 2008.
[9] W. H. Zhou and S. Q. Xiong, "Optimization of BP neural network classifier using genetic algorithm," in Intelligence Computation and Evolutionary Computation, vol. 180 of Advances in Intelligent Systems and Computing, pp. 599-605, 2013.
[10] W. Duch, R. Setiono, and J. M. Zurada, "Computational intelligence methods for rule-based data understanding," Proceedings of the IEEE, vol. 92, no. 5, pp. 771-805, 2004.
[11] Z. Pawlak, "Rough sets," International Journal of Computer and Information Sciences, vol. 11, no. 5, pp. 341-356, 1982.
[12] Y. Wu and S. J. Wang, "Method for improving classification performance of neural network based on fuzzy input and network inversion," Journal of Infrared and Millimeter Waves, vol. 24, no. 1, pp. 15-18, 2005.
[13] Z. Pawlak, "Some issues on rough sets," in Transactions on Rough Sets I, vol. 3100 of Lecture Notes in Computer Science, pp. 1-58, 2004.
[14] Z. Pawlak and A. Skowron, "Rough sets: some extensions," Information Sciences, vol. 177, no. 1, pp. 28-40, 2007.
[15] Y. Hassan, E. Tazaki, S. Egawa, and K. Suyama, "Decision making using hybrid rough sets and neural networks," International Journal of Neural Systems, vol. 12, no. 6, pp. 435-446, 2002.
[16] V. L. Berardi, B. E. Patuwo, and M. Y. Hu, "A principled approach for building and evaluating neural network classification models," Decision Support Systems, vol. 38, no. 2, pp. 233-246, 2004.
[17] H. Ishibuchi, K. Kwon, and H. Tanaka, "A learning algorithm of fuzzy neural networks with triangular fuzzy weights," Fuzzy Sets and Systems, vol. 71, no. 3, pp. 277-293, 1995.
[18] Y. Hayashi, J. J. Buckley, and E. Czogala, "Fuzzy neural network with fuzzy signals and weights," International Journal of Intelligent Systems, vol. 8, no. 4, pp. 527-537, 1993.
[19] C. C. Hu, D. Y. Wang, K. J. Wang, and J. Cable, "Seasonal roadway classification based on neural network," Journal of South China University of Technology (Natural Science), vol. 37, no. 11, pp. 22-26, 2009.
[20] A. Subasi, M. Yilmaz, and H. R. Ozcalik, "Classification of EMG signals using wavelet neural network," Journal of Neuroscience Methods, vol. 156, no. 1-2, pp. 360-367, 2006.
[21] D. Sahoo and G. S. Dulikravich, "Evolutionary wavelet neural network for large scale function estimation in optimization," in Proceedings of the 11th AIAA/ISSMO Multidisciplinary Analysis and Optimization Conference, pp. 658-668, Portsmouth, Va, USA, September 2006.
[22] J. R. Zhang, J. Zhang, T. M. Lok, and M. R. Lyu, "A hybrid particle swarm optimization-back-propagation algorithm for feedforward neural network training," Applied Mathematics and Computation, vol. 185, no. 2, pp. 1026-1037, 2007.
[23] W. Zhang, Y. B. Shi, L. F. Zhou, and T. Lu, "Wavelet neural network classifier based on improved PSO algorithm," Chinese Journal of Scientific Instrument, vol. 31, no. 10, pp. 2203-2209, 2010.
[24] A. Sengur, I. Turkoglu, and M. C. Ince, "Wavelet packet neural networks for texture classification," Expert Systems with Applications, vol. 32, no. 2, pp. 527-533, 2007.
[25] Y. F. Hassan, "Rough sets for adapting wavelet neural networks as a new classifier system," Applied Intelligence, vol. 35, no. 2, pp. 260-268, 2011.
[26] W. Ziarko, "The discovery, analysis, and representation of data dependencies in databases," in Knowledge Discovery in Databases, pp. 213-228, 1993.
[27] Y Hassan and E Tazaki ldquoEmergent rough set data analysisrdquo inTransactions on Rough Sets II Rough Sets and Fuzzy Sets J FPeters Ed pp 343ndash361 Springer Berlin Germany 2005
[28] D B Chen and C X Zhao ldquoParticle swarm optimizationwith adaptive population size and its applicationrdquo Applied SoftComputing Journal vol 9 no 1 pp 39ndash48 2009
[29] K Thangavel and A Pethalakshmi ldquoDimensionality reductionbased on rough set theory a reviewrdquo Applied Soft ComputingJournal vol 9 no 1 pp 1ndash12 2009
[30] J M Wei ldquoRough set based approach to selection of noderdquoInternational Journal of Computational Cognition vol 1 no 2pp 25ndash40 2003
[31] E Xu L S Shao Y Z Zhang F Yang andH Li ldquoMethod of ruleextraction based on rough set theoryrdquo Chong Qing ComputerScience vol 38 pp 232ndash235 2011
[32] L Y Chang G Y Wang and Y Wu ldquoAn approach for attributereduction and rule generation based on rough set theoryrdquo BeiJing Journal of Software vol 10 pp 1206ndash1211 1999
Journal of Applied Mathematics 5
Figure 2: The system construction process (flowchart: dataset → preprocess and discretize → construct decision table → original decision table → attribution reduction → reduced decision table → extract rules / build network → train network → optimum network → classification system).
The attribute in redundant with the maximum significance degree is marked as α_max. Update reduct and redundant through the following equations:

reduct = reduct ∪ {α_max},
redundant = redundant − {α_max}. (14)

(4) Go to step (2); when POS_reduct(D) satisfies the conditions, output RED_D(C) = reduct.
Therefore the irrelevant features are neglected and the DT is reduced. Thus a reduced decision table can be constructed and regarded as the training set to optimize the structure of the RS-BPNN classifier.
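The reduction loop of steps (2)–(4) can be sketched as a greedy forward selection: repeatedly add the attribute with the maximum significance degree until the dependency degree of the reduct matches that of the full attribute set. This is a minimal illustration (starting from an empty reduct rather than the core, for brevity); the helper names and the toy table are not the authors' code.

```python
from collections import defaultdict

def partition(rows, attrs):
    """Group object indices into equivalence classes by their values on attrs."""
    blocks = defaultdict(list)
    for i, row in enumerate(rows):
        blocks[tuple(row[a] for a in attrs)].append(i)
    return list(blocks.values())

def positive_region(rows, decisions, attrs):
    """Objects whose equivalence class under attrs is decision-consistent."""
    pos = set()
    for block in partition(rows, attrs):
        if len({decisions[i] for i in block}) == 1:
            pos.update(block)
    return pos

def dependency(rows, decisions, attrs):
    """gamma_attrs(D) = |POS_attrs(D)| / |U|."""
    return len(positive_region(rows, decisions, attrs)) / len(rows)

def greedy_reduct(rows, decisions):
    all_attrs = list(range(len(rows[0])))
    target = dependency(rows, decisions, all_attrs)
    reduct, redundant = [], list(all_attrs)
    while dependency(rows, decisions, reduct) < target:
        # attribute in redundant with the maximum significance degree
        best = max(redundant, key=lambda a: dependency(rows, decisions, reduct + [a]))
        reduct.append(best)      # reduct = reduct ∪ {α_max}
        redundant.remove(best)   # redundant = redundant − {α_max}
    return reduct

# toy decision table: attribute 0 alone determines the decision
toy_rows = [[1, 1], [1, 2], [2, 1], [2, 2]]
toy_decisions = [0, 0, 1, 1]
toy_reduct = greedy_reduct(toy_rows, toy_decisions)
```

On the toy table the loop selects attribute 0 and stops, since its positive region already covers all objects.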
4.4. Rough Sets for the Rules Extraction. The knowledge in a trained network is encoded in its connection weights and is distributed numerically throughout the whole structure of the neural network. For this knowledge to be useful in the context of a classification system, it has to be articulated in a symbolic form, usually as IF-THEN rules. Since a neural network is a "black box" for users, the rules implicit in the connection weights are difficult to understand, and extracting rules from a neural network is extremely tough because of the nonlinear and complicated data transformations conducted in the multiple hidden layers [31]. In this paper the decision rules are instead extracted indirectly from the reduced decision table, using the value reduction algorithm of rough sets [32]. This algorithm can be elaborated in the following steps.
Step 1. Check the condition attributes of the reduced decision table column by column. If deleting a column produces conflicting objects, preserve the original attribute value of those conflicting objects; if it produces no conflicts but leaves duplicate objects, mark this attribute value of the duplicate objects as "*"; mark this attribute value as "?" for the other objects.

Step 2. Delete any duplicate objects and check every object that contains the mark "?". If the decision can be determined by the unmarked attribute values alone, change "?" to "*"; otherwise restore the original attribute value. If all condition attributes of a particular object are marked, restore the attribute values marked with "?" to their original values.

Step 3. Delete the objects whose condition attributes are all marked as "*", as well as any duplicate objects.

Step 4. If only one condition attribute value differs between two objects and this attribute value is marked as "*" in one of them, then, if that object's decision can be determined by its unmarked attribute values, delete the other object; otherwise delete this object.
After these steps, each object of the value-reduced decision table represents a classification rule, and the number of attributes not marked as "*" in an object makes up the condition number of that rule. Moreover, this rule set provides a rational basis for the structure of the BP neural network.
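A simplified sketch of value reduction: for each object, every attribute value that can be wildcarded without letting the remaining values match an object of a different class is replaced by "*", and duplicate rules are merged. This compresses Steps 1–4 into one consistency heuristic and is an assumption, not the authors' exact procedure.

```python
def value_reduce(rows, decisions):
    """Wildcard each droppable attribute value, then merge duplicate rules."""
    n_attr = len(rows[0])
    rules = []
    for i, row in enumerate(rows):
        rule = list(row)
        for a in range(n_attr):
            kept = [k for k in range(n_attr) if k != a and rule[k] != '*']
            # consistent iff no object agreeing on the kept values
            # carries a different decision
            consistent = all(
                decisions[j] == decisions[i]
                or any(rows[j][k] != row[k] for k in kept)
                for j in range(len(rows)))
            if consistent:
                rule[a] = '*'   # this value is not needed for the decision
        rules.append((tuple(rule), decisions[i]))
    return sorted(set(rules), key=str)

# toy reduced decision table: attribute 0 determines the class
toy_rules = value_reduce([[1, 1], [1, 2], [2, 1], [2, 2]], [0, 0, 1, 1])
```

On the toy table the four objects collapse into two rules, (1, *) → 0 and (2, *) → 1, mirroring how a value-reduced table yields one rule per remaining object.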
4.5. The Flowchart of the Proposed Approach. According to the above description of the classifier based on the integration of rough sets and BP neural network, the proposed approach is an iterative algorithm that can easily be coded on a computer; its flowchart is summarized in Figure 3.
5 Simulation Example
A classifier named RS-BPNN (short for rough sets with BP neural network) was set up and implemented in Visual C++ 6.0. In this section, an engineering application, classifying the running status of a shearer in a coal mine, is put forward as a simulation example to verify the feasibility and effectiveness of the proposed approach. The classification capabilities of different types of neural network models were compared, and the proposed approach proved to outperform the others.
5.1. The Construction of Decision Table. Due to the poor working conditions of coal mining, many parameters relate to the running status of the shearer, mainly including cutting motor current, cutting motor temperature, traction motor current, traction motor temperature, traction speed, tilt angle of the shearer body, tilt angle of the rocker, and vibration frequency of the rocker transmission gearbox, marked as C1, C2, C3, C4, C5, C6, C7, and C8, respectively. According to the information database acquired from the 11070 coal face in the number 13 Mine of Pingdingshan Coal Industry Group Co., 200 data sets of shearer running status were rearranged as shown in Table 1. This "running status" referred to the cutting status of the shearer rocker and could be expressed by the ratio of actual cutting load to rated load.

Figure 3: The flowchart of the proposed approach (PSO-based discretization: initialize x_i, v_i, h, Nmax, γmin and M, c1, c2, r, Vmax, ωmax, ωmin; calculate γ_i, Pbest, and Gbest; update ω, x_i, and v_i while n ≤ Nmax or γ_i < γmin; when n > Nmax and γ_i ≥ γmin, output Gbest. Attribution reduction: construct DT = (U, C ∪ D), calculate CORE_D(C), initialize reduct and redundant, search α_max and update them until POS_reduct(D) = POS_C(D), then output RED_D(C). Finally extract rules, determine s, l, m, train the RS-BPNN, and obtain the classification system.)
Therefore, in decision table DT, the condition attribute set and decision attribute set could be expressed as C = {C1, C2, C3, C4, C5, C6, C7, C8} and D = {d} (d denoted the shearer running status). The values of the decision attribution were discretized based on the following assumptions: for the ratio of actual cutting load to rated load of the shearer being greater than 1.6, d = 4; for this ratio being in the interval (1.2, 1.6], d = 3; for this ratio being in the interval (0.8, 1.2], d = 2; for this ratio being 0.8 or less, d = 1. The discretized values of the condition attributes could be determined through the algorithm described in Section 4.2.
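The mapping from load ratio to the decision attribute d is a simple threshold function; a minimal sketch (the function name is illustrative):

```python
def discretize_decision(load_ratio):
    """Map the ratio of actual cutting load to rated load to d (Section 5.1)."""
    if load_ratio > 1.6:
        return 4
    if load_ratio > 1.2:   # (1.2, 1.6]
        return 3
    if load_ratio > 0.8:   # (0.8, 1.2]
        return 2
    return 1               # 0.8 or less

# sample load ratios taken from Table 1 (samples 1, 200, 199, 10)
classes = [discretize_decision(r) for r in (0.78, 1.11, 1.36, 1.63)]
```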
The condition attributes to be discretized were C1–C8, so μ = 8. The other parameters of PSO were initialized as follows: M = 100, Nmax = 200, γmin = 0.9, Vmax = 0.6, c1 = c2 = 1.6, r a random number from [0, 1], ωmax = 0.9, and ωmin = 0.4. As the breakpoints number h had a great influence on the dependency degree γ_C(D) of the discretized decision table, two important indicators, dependency degree and simulation time, were compared when h was assigned various values. The comparison results are shown in Figure 4.
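The PSO update implied by this parameter list (inertia weight ω decaying from ωmax to ωmin, acceleration coefficients c1 = c2 = 1.6, velocities clamped to Vmax) can be sketched as follows. The fitness function here is a stand-in for the dependency degree γ_C(D) of the discretized table, and the population size is reduced from 100 for brevity; both are assumptions for illustration.

```python
import random

random.seed(0)

M, N_MAX = 20, 200                 # particles, iterations (M reduced from 100)
C1, C2 = 1.6, 1.6                  # acceleration coefficients
W_MAX, W_MIN, V_MAX = 0.9, 0.4, 0.6

def fitness(x):
    # stand-in objective: the real one evaluates gamma_C(D) for the
    # decision table discretized with breakpoint vector x
    return -sum((xi - 0.3) ** 2 for xi in x)

DIM = 8                            # one breakpoint per condition attribute C1..C8
xs = [[random.random() for _ in range(DIM)] for _ in range(M)]
vs = [[0.0] * DIM for _ in range(M)]
pbest = [list(x) for x in xs]
gbest = max(pbest, key=fitness)
f_start = fitness(gbest)

for n in range(N_MAX):
    w = W_MAX - (W_MAX - W_MIN) * n / N_MAX   # linearly decreasing inertia
    for i in range(M):
        for d in range(DIM):
            r1, r2 = random.random(), random.random()
            vs[i][d] = (w * vs[i][d]
                        + C1 * r1 * (pbest[i][d] - xs[i][d])
                        + C2 * r2 * (gbest[d] - xs[i][d]))
            vs[i][d] = max(-V_MAX, min(V_MAX, vs[i][d]))  # clamp to Vmax
            xs[i][d] += vs[i][d]
        if fitness(xs[i]) > fitness(pbest[i]):
            pbest[i] = list(xs[i])
    gbest = max(pbest, key=fitness)
```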
From Figure 4 it was observed that the dependency degree and the simulation time both showed an ascending tendency as the breakpoints number h increased.

Figure 4: The change curves of dependency degree (0.82–1) and simulation time (0–200 s) against parameter h (1–5).

Figure 5: Attribution reduction.

The dependency degree did not increase significantly, while the simulation time increased obviously, when h ≥ 2; so h was set equal to 2, giving γ_C(D) = 0.98. The breakpoints of each attribute value are shown in Table 2. The breakpoints were applied to discretize the parameter values in Table 1: if a value was in the interval (0, Br1], the discretized attribution value was labeled as 1; if it was in the interval (Br1, Br2), it was labeled as 2; otherwise it was labeled as 3. The discretized decision table is shown in Table 3.
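The interval labeling against the two breakpoints is mechanical; a minimal sketch using values from Tables 1 and 2 (the function name is illustrative):

```python
def discretize_condition(value, br1, br2):
    """(0, Br1] -> 1, (Br1, Br2) -> 2, [Br2, +inf) -> 3 (Table 2 scheme)."""
    if value <= br1:
        return 1
    if value < br2:
        return 2
    return 3

# sample 1 of Table 1 against the Table 2 breakpoints for C1, C4, C5
codes = [discretize_condition(45.12, 48.56, 65.25),   # C1
         discretize_condition(88.16, 84.79, 90.25),   # C4
         discretize_condition(3.51, 2.41, 3.24)]      # C5
```

These codes (1, 2, 3) match row 1 of the discretized decision table (Table 3).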
5.2. Relative Reduct and Rules Extraction. Through the attribution reduction algorithm presented in Section 4.3, the function "CORE(C, D)" was invoked to obtain the relative core of the decision table, CORE_D(C) = {C1, C8}, as shown in Figure 5. The minimum relative reduct of the decision table, RED_D(C) = {C1, C3, C5, C6, C8}, was then determined by invoking the function "Reduct".

In the module "Rough Set for Attribution Reduction", a new decision table of 200 × 6 was established from the attributions C1, C3, C5, C6, C8 and the decision attribution d. The function "Refresh (DT)" was called to reject the repeated objects in this new decision table, and a reduced decision table containing 79 objects was finally provided, as shown in Figure 6. Subsequently the decision rules were extracted
Figure 6: Rules extraction.

Figure 7: The structure of RS-BPNN classifier (input layer of five nodes; two hidden layers of five neurons each, with weights ω and biases b; one output neuron).
through the function "Extract rules", which was based on the value reduction algorithm, and 25 rules were obtained, as shown in Figure 6. The extracted rules could reveal the potential rules (knowledge) in the dataset. For example, the first rule in Figure 6, "1 2 3 * 1 → 1", illustrates that if cutting motor current C1 ∈ (0, 48.56], traction motor current C3 ∈ (82.37, 95.42), traction speed C5 ∈ [3.24, +∞), tilt angle of shearer body C6 is an arbitrary value, and vibration frequency of rocker transmission gearbox C8 ∈ (0, 651.28], then the running status class is 1, which accords with the engineering practice. Moreover, the RS-BPNN classifier in fact diagnoses the status of objects according to these potential rules (knowledge), and the structure of the network can be constructed reasonably based on the number of rules, so as to enhance the classification accuracy.
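Rules of this shape, with "*" as a wildcard over the reduced attributes (C1, C3, C5, C6, C8), can be applied to a discretized sample as follows; this is a sketch, and the helper names are illustrative:

```python
def rule_matches(conditions, sample):
    """'*' matches any discretized value; other entries must agree."""
    return all(c == '*' or c == s for c, s in zip(conditions, sample))

def classify(rules, sample):
    for conditions, decision in rules:
        if rule_matches(conditions, sample):
            return decision
    return None   # no rule fires

# the first extracted rule from Figure 6: "1 2 3 * 1 -> class 1"
rules = [((1, 2, 3, '*', 1), 1)]
```

For instance, a sample discretized as (1, 2, 3, 2, 1) fires this rule regardless of its C6 code, while (2, 2, 3, 2, 1) does not.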
5.3. The Structure and Testing of RS-BPNN Classifier. Because the dataset yielded 25 rules (knowledge), the RS-BPNN was constructed with double hidden layers, and the number of neurons was 5 × 5, which could completely contain these rules (knowledge). The number of input layer nodes was s = 5 and the number of output layer nodes was m = 1. The other parameters of the network were determined as follows: ζmax = 0.9, ζmin = 0.4, tmax = 500; the initial connection weights were assigned somewhat random values. The structure of the RS-BPNN classifier is shown in Figure 7.
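A minimal pure-Python sketch of such a 5-5-5-1 back-propagation network (two sigmoid hidden layers of five neurons, one output) follows. The toy training pairs and the linearly decaying learning rate from ζmax to ζmin over tmax epochs are illustrative assumptions, not the authors' VC++ implementation.

```python
import math
import random

random.seed(1)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

class BPNN:
    """Fully connected feedforward network; sizes=(5, 5, 5, 1) mirrors Figure 7."""

    def __init__(self, sizes=(5, 5, 5, 1)):
        self.weights = [[[random.uniform(-0.5, 0.5) for _ in range(sizes[l])]
                         for _ in range(sizes[l + 1])]
                        for l in range(len(sizes) - 1)]
        self.biases = [[random.uniform(-0.5, 0.5) for _ in range(sizes[l + 1])]
                       for l in range(len(sizes) - 1)]

    def forward(self, x):
        acts = [list(x)]
        for W, b in zip(self.weights, self.biases):
            acts.append([sigmoid(sum(w * a for w, a in zip(row, acts[-1])) + bj)
                         for row, bj in zip(W, b)])
        return acts

    def train_step(self, x, target, lr):
        acts = self.forward(x)
        # delta for a sigmoid output under squared-error loss
        deltas = [[(a - t) * a * (1 - a) for a, t in zip(acts[-1], target)]]
        for l in range(len(self.weights) - 1, 0, -1):
            deltas.insert(0, [
                sum(self.weights[l][j][i] * deltas[0][j]
                    for j in range(len(deltas[0]))) * a * (1 - a)
                for i, a in enumerate(acts[l])])
        for l in range(len(self.weights)):
            for j in range(len(self.weights[l])):
                for i in range(len(self.weights[l][j])):
                    self.weights[l][j][i] -= lr * deltas[l][j] * acts[l][i]
                self.biases[l][j] -= lr * deltas[l][j]
        return sum((a - t) ** 2 for a, t in zip(acts[-1], target))

# toy training pairs: 5 discretized inputs scaled to [0, 1] -> normalized class
data = [([0.0, 0.5, 1.0, 0.5, 0.0], [0.25]),
        ([1.0, 0.5, 0.0, 0.5, 1.0], [0.75]),
        ([0.0, 0.0, 0.5, 1.0, 1.0], [0.50]),
        ([1.0, 1.0, 0.5, 0.0, 0.0], [1.00])]
net = BPNN()
T_MAX, LR_MAX, LR_MIN = 500, 0.9, 0.4      # t_max, zeta_max, zeta_min
losses = []
for t in range(T_MAX):
    lr = LR_MAX - (LR_MAX - LR_MIN) * t / T_MAX   # decaying learning rate
    losses.append(sum(net.train_step(x, y, lr) for x, y in data))
```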
The reduced decision table, as the training set, was presented to the network several times. The testing set, composed of 40 samples extracted randomly from the information database, was not seen by the network during the training phase; it was only used to test the generalization of the neural network after training, as shown in Table 4.
Table 1: The running status of shearer and its corresponding parameters.

Samples | C1 (A) | C2 (°C) | C3 (A) | C4 (°C) | C5 (m/min) | C6 (°) | C7 (°) | C8 (Hz) | Running status
1 | 45.12 | 78.74 | 65.46 | 88.16 | 3.51 | 2.52 | 45.12 | 628.85 | 0.78
2 | 48.19 | 75.22 | 70.78 | 84.28 | 3.42 | 3.62 | 53.22 | 683.46 | 0.79
3 | 50.25 | 80.19 | 85.14 | 91.25 | 2.88 | 5.36 | 40.22 | 705.76 | 0.82
4 | 45.26 | 82.43 | 62.45 | 84.15 | 3.12 | 2.85 | 49.36 | 642.14 | 0.76
5 | 64.28 | 77.16 | 90.45 | 91.41 | 2.56 | 2.26 | 58.55 | 823.29 | 1.22
6 | 46.22 | 85.24 | 68.55 | 80.46 | 3.26 | 2.86 | 58.46 | 711.24 | 0.83
7 | 72.56 | 85.12 | 102.42 | 102.36 | 2.76 | 10.26 | 38.41 | 881.25 | 1.17
8 | 62.33 | 74.24 | 92.46 | 84.46 | 3.56 | 8.26 | 60.27 | 821.46 | 1.12
9 | 44.23 | 89.19 | 63.15 | 85.78 | 3.82 | 3.22 | 46.25 | 578.26 | 0.75
10 | 89.23 | 81.24 | 120.36 | 86.14 | 2.01 | 1.05 | 50.36 | 1035.48 | 1.63
… | … | … | … | … | … | … | … | … | …
198 | 49.22 | 75.24 | 71.22 | 80.25 | 3.46 | 4.26 | 45.26 | 542.16 | 0.72
199 | 68.65 | 87.16 | 118.64 | 90.46 | 4.66 | 4.36 | 49.35 | 845.29 | 1.36
200 | 50.22 | 78.15 | 90.46 | 95.44 | 3.22 | 7.22 | 44.21 | 682.64 | 1.11
Table 2: The corresponding breakpoints of each attribute value at h = 2.

Breakpoints | C1 (A) | C2 (°C) | C3 (A) | C4 (°C) | C5 (m/min) | C6 (°) | C7 (°) | C8 (Hz)
Br1 | 48.56 | 81.49 | 82.37 | 84.79 | 2.41 | 4.16 | 47.35 | 651.28
Br2 | 65.25 | 85.78 | 95.42 | 90.25 | 3.24 | 7.38 | 54.25 | 847.46
Table 3: The discretized decision table.

U | C1 | C2 | C3 | C4 | C5 | C6 | C7 | C8 | d
1 | 1 | 1 | 1 | 2 | 3 | 1 | 1 | 1 | 1
2 | 1 | 1 | 1 | 1 | 3 | 1 | 2 | 2 | 1
3 | 2 | 1 | 2 | 3 | 2 | 2 | 1 | 2 | 2
4 | 1 | 2 | 1 | 1 | 2 | 1 | 2 | 1 | 1
5 | 3 | 1 | 2 | 3 | 2 | 1 | 3 | 2 | 3
6 | 1 | 2 | 1 | 1 | 3 | 1 | 3 | 2 | 2
7 | 3 | 2 | 3 | 3 | 2 | 3 | 1 | 3 | 2
8 | 2 | 1 | 2 | 1 | 3 | 3 | 3 | 2 | 2
9 | 1 | 3 | 1 | 2 | 3 | 1 | 1 | 1 | 2
10 | 3 | 1 | 3 | 2 | 1 | 1 | 2 | 3 | 4
… | … | … | … | … | … | … | … | … | …
198 | 2 | 1 | 1 | 1 | 3 | 2 | 1 | 1 | 1
199 | 3 | 3 | 3 | 3 | 3 | 2 | 2 | 2 | 2
200 | 2 | 1 | 2 | 3 | 2 | 2 | 1 | 2 | 2
After the training phase, the RS-BPNN classifier was obtained. In order to test the performance of this classifier expediently, a testing interface was designed, as shown in Figure 8. The testing set was imported with the "Import samples" button and processed with the functions "Discretize" and "Reduce attribution". The prediction results of the RS-BPNN classifier were output through the function "Classify" in Figure 8. The contrast between the prediction class and the actual class is shown brightly in Figure 9.
When the classification accuracy of RS-BPNN was computed in Figure 8, the output of the classifier could not be exactly equal to the desired output, so the output was allowed to differ from the desired value: if the difference between the classifier output and the required decision value was less than some preset value, the result was regarded as a correct one. This tolerance margin was decided during simulation evaluation, and it was finally set to 5% of the desired value.

Figure 8: The testing interface of RS-BPNN classifier.

As seen from Figures 8 and 9, the classification accuracy was 90.00% and the average classification error was 2.02%. The present model showed higher accuracy and lower error for the classification of shearer running status. The testing results showed that the proposed classifier was satisfactory and could be used in engineering applications in the future.
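The tolerance rule can be stated directly in code; a sketch in which the sample outputs and the helper name are hypothetical:

```python
def evaluate(outputs, desired, tolerance=0.05):
    """Count a prediction as correct when it lies within 5% of the
    desired decision value (the paper's tolerance margin)."""
    n = len(desired)
    correct = sum(abs(o - d) < tolerance * d for o, d in zip(outputs, desired))
    accuracy = 100.0 * correct / n
    avg_error = 100.0 * sum(abs(o - d) / d for o, d in zip(outputs, desired)) / n
    return accuracy, avg_error

# hypothetical classifier outputs against desired classes 1, 2, 3
acc, err = evaluate([1.02, 2.20, 3.05], [1, 2, 3])
```

Here 1.02 and 3.05 fall inside their 5% margins while 2.20 does not, so two of three predictions count as correct.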
5.4. Discussion. To evaluate the classification capabilities of different types of neural network models (WNN, short for wavelet neural network; RS-WNN, short for rough sets coupling with wavelet neural network; BP-NN, short for back-propagation neural network; and RS-BPNN, short for rough sets coupling with back-propagation neural network), their best architectures and effective training parameters should be found. In order to obtain the best model, both inputs and outputs should be normalized. The number of nodes in the hidden layer of WNN and RS-WNN was equal to the number of wavelet bases. If this number is too small, WNN/RS-WNN may not reflect the complex functional relationship between input data and output values; on the contrary, too large a number may create such a complex network that it leads to a very large output error caused by overfitting of the training sample set.

Table 4: The testing samples for the classifier.

Samples | C1 (A) | C2 (°C) | C3 (A) | C4 (°C) | C5 (m/min) | C6 (°) | C7 (°) | C8 (Hz) | d
1 | 75.24 | 71.38 | 128.45 | 85.46 | 2.00 | 1.02 | 52.45 | 1038.44 | 1.65
2 | 89.46 | 78.25 | 110.46 | 80.57 | 2.86 | 2.15 | 49.21 | 992.46 | 1.78
3 | 55.46 | 80.19 | 81.14 | 90.25 | 2.89 | 3.36 | 56.22 | 810.76 | 1.08
4 | 75.26 | 82.43 | 62.45 | 84.15 | 3.12 | 3.55 | 52.46 | 900.47 | 1.39
5 | 56.28 | 78.16 | 100.45 | 90.41 | 2.56 | 8.26 | 58.55 | 523.29 | 0.68
6 | 49.25 | 86.24 | 68.55 | 80.46 | 3.26 | 3.06 | 60.46 | 711.24 | 0.72
7 | 60.56 | 80.12 | 112.42 | 92.36 | 2.76 | 7.48 | 48.41 | 871.25 | 0.92
8 | 70.33 | 70.24 | 96.46 | 80.46 | 3.76 | 3.26 | 60.27 | 921.46 | 1.22
9 | 74.23 | 85.19 | 113.15 | 79.78 | 2.32 | 2.25 | 56.25 | 1000.46 | 1.66
10 | 89.23 | 76.24 | 120.36 | 85.14 | 3.31 | 4.25 | 55.46 | 1035.48 | 1.63
… | … | … | … | … | … | … | … | … | …
38 | 58.46 | 74.24 | 90.22 | 80.25 | 2.46 | 3.26 | 55.26 | 849.16 | 1.35
39 | 65.02 | 87.16 | 118.64 | 90.46 | 5.06 | 9.24 | 48.26 | 1025.29 | 1.75
40 | 50.22 | 85.15 | 81.25 | 89.44 | 3.34 | 3.46 | 44.25 | 672.64 | 0.72

Table 5: Classification performance of the four models of neural networks.

The models | Training accuracy (%) | Testing accuracy (%) | Classification error (%) | Classification time (s)
WNN | 90.00 | 82.50 | 3.56 | 23.22
RS-WNN | 92.50 | 87.50 | 2.66 | 18.48
BP-NN | 90.00 | 85.00 | 2.86 | 22.26
RS-BPNN | 95.00 | 90.00 | 2.02 | 18.33

Table 6: Classification results of the four methods.

The methods | Class I (%) | Class II (%) | Class III (%)
WNN | 93.33 | 86.67 | 80.00
RS-WNN | 100 | 93.33 | 86.67
BP-NN | 93.33 | 80.00 | 86.67
RS-BPNN | 100 | 93.33 | 100
Therefore, a search mechanism was needed to find the optimal number of hidden nodes for the WNN and RS-WNN models. In this study, various numbers of hidden nodes were checked to find the best one; WNN and RS-WNN yielded the best results when the number of hidden nodes was 8.
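Such a search mechanism is ordinary model selection over the hidden-node count; a sketch with a hypothetical validation-accuracy curve (small counts underfit, large counts overfit — the actual scores would come from training and validating each candidate network):

```python
def select_hidden_nodes(candidates, score):
    """Grid-search the hidden-node count and keep the best-scoring one."""
    return max(candidates, key=score)

# hypothetical validation accuracies per candidate node count;
# the paper's search found 8 hidden nodes best for WNN/RS-WNN
validation_accuracy = {4: 0.80, 6: 0.85, 8: 0.90, 10: 0.87, 12: 0.83}
best_nodes = select_hidden_nodes(validation_accuracy, validation_accuracy.get)
```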
In this subsection, WNN, RS-WNN, BP-NN, and RS-BPNN were applied to the problem of the above simulation example. The configurations of the simulation environment for the four algorithms were uniform and in common with the above simulation example. Initially, for each network, the connection weights were assigned somewhat random values. In order to avoid random error, the training input set was presented to the networks 100 times and the average values were calculated. The training accuracy, testing accuracy, classification error, and classification time of the four algorithms are shown in Table 5.
From the table, it was observed that the proposed model had better classification capability and performance than the other neural network models in predicting the nonlinear, dynamic, and chaotic behaviors in the dataset, and the proposed model proved to outperform the others. Furthermore, the comparison results suggest that the proposed model represents a good new method for classification and decision making, and that it can be treated as a promising tool for extracting rules from datasets in industrial fields.
In order to illustrate the superiority of the proposed method, the Iris data set [23], a benchmark data set from the UCI database, was used to verify the classifiers based on the above four types of neural networks; the compared results are shown in Table 6. The sample data were first reduced based on the attribution reduction algorithm and then used to train the neural networks, so the networks coupled with rough sets (RS-WNN and RS-BPNN) could acquire better classification accuracy than the single networks (WNN and BP-NN). The rules were extracted through the value reduction algorithm to construct the RS-BPNN structure, so the classification accuracy of the RS-BPNN classifier was higher than that of the RS-WNN classifier.
Figure 9: The contrast of prediction class and actual class: (a) actual class and prediction class (running status class, 0.5–5) for the 40 testing samples; (b) classification error (%, 0–6) for each testing sample.
6 Conclusions
This paper proposed a novel classifier model based on BP neural network and rough sets theory, to be applied in nonlinear systems. The decision table was constructed and discretized reasonably through the PSO algorithm. The decision rules were extracted by the value reduction algorithm, which provided the basis for the network structure. In order to verify its feasibility and superiority, the proposed approach was applied to a classification problem from an industrial example. The results of the comparison simulations showed that the proposed approach could generate more accurate classification and cost less classification time than other neural network approaches and the rough-sets-based approach. The classification performance of the proposed model demonstrated that this method could be extended to other types of classification problems, such as gear fault identification, lithology recognition, and coal-rock interface recognition.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgments
The supports of the National High Technology Research and Development Program of China (no. 2013AA06A411), the National Natural Science Foundation of China (no. 51005231), the Xuyi Mine Equipment and Materials R&D Center Innovation Fund Project (no. CXJJ201306), and the Jiangsu Province National Engineering Research Center Cultivation Base for Coal Intelligent Mining Equipment in carrying out this research are gratefully acknowledged.
References
[1] W. Zhu and F. Wang, "On three types of covering-based rough sets," IEEE Transactions on Knowledge and Data Engineering, vol. 19, no. 8, pp. 1131–1143, 2007.
[2] Z. Huang, Z. Yu, and Y. Zheng, "Simple method for predicting drag characteristics of the wells turbine," China Ocean Engineering, vol. 20, no. 3, pp. 473–482, 2006.
[3] S. G. Sklavos and A. K. Moschovakis, "Neural network simulations of the primate oculomotor system IV: a distributed bilateral stochastic model of the neural integrator of the vertical saccadic system," Biological Cybernetics, vol. 86, no. 2, pp. 97–109, 2002.
[4] I. G. Belkov, I. N. Malyshev, and S. M. Nikulin, "Creating a model of passive electronic components using a neural network approach," Automation and Remote Control, vol. 74, no. 2, pp. 275–281, 2013.
[5] J. S. Chen, "Neural network-based modelling and thermally-induced spindle errors," International Journal of Advanced Manufacturing Technology, vol. 12, no. 4, pp. 303–308, 1996.
[6] A. D. Cartwright, I. S. Curthoys, and D. P. D. Gilchrist, "Testable predictions from realistic neural network simulations of vestibular compensation: integrating the behavioural and physiological data," Biological Cybernetics, vol. 81, no. 1, pp. 73–87, 1999.
[7] B. Zou, X. Y. Liao, Y. N. Zeng, and L. X. Huang, "An improved BP neural network based on evaluating and forecasting model of water quality in Second Songhua River of China," Chinese Journal of Geochemistry, vol. 25, no. 1, pp. 167–170, 2006.
[8] Y. X. Shao, Q. Chen, and D. M. Zhang, "The application of improved BP neural network algorithm in lithology recognition," in Advances in Computation and Intelligence, vol. 5370 of Lecture Notes in Computer Science, pp. 342–349, 2008.
[9] W. H. Zhou and S. Q. Xiong, "Optimization of BP neural network classifier using genetic algorithm," in Intelligence Computation and Evolutionary Computation, vol. 180 of Advances in Intelligent Systems and Computing, pp. 599–605, 2013.
[10] W. Duch, R. Setiono, and J. M. Zurada, "Computational intelligence methods for rule-based data understanding," Proceedings of the IEEE, vol. 92, no. 5, pp. 771–805, 2004.
[11] Z. Pawlak, "Rough sets," International Journal of Computer and Information Sciences, vol. 11, no. 5, pp. 341–356, 1982.
[12] Y. Wu and S. J. Wang, "Method for improving classification performance of neural network based on fuzzy input and network inversion," Journal of Infrared and Millimeter Waves, vol. 24, no. 1, pp. 15–18, 2005.
[13] Z. Pawlak, "Some issues on rough sets," in Transactions on Rough Sets I, vol. 3100 of Lecture Notes in Computer Science, pp. 1–58, 2004.
[14] Z. Pawlak and A. Skowron, "Rough sets: some extensions," Information Sciences, vol. 177, no. 1, pp. 28–40, 2007.
[15] Y. Hassan, E. Tazaki, S. Egawa, and K. Suyama, "Decision making using hybrid rough sets and neural networks," International Journal of Neural Systems, vol. 12, no. 6, pp. 435–446, 2002.
[16] V. L. Berardi, B. E. Patuwo, and M. Y. Hu, "A principled approach for building and evaluating neural network classification models," Decision Support Systems, vol. 38, no. 2, pp. 233–246, 2004.
[17] H. Ishibuchi, K. Kwon, and H. Tanaka, "A learning algorithm of fuzzy neural networks with triangular fuzzy weights," Fuzzy Sets and Systems, vol. 71, no. 3, pp. 277–293, 1995.
[18] Y. Hayashi, J. J. Buckley, and E. Czogala, "Fuzzy neural network with fuzzy signals and weights," International Journal of Intelligent Systems, vol. 8, no. 4, pp. 527–537, 1993.
[19] C. C. Hu, D. Y. Wang, K. J. Wang, and J. Cable, "Seasonal roadway classification based on neural network," Journal of South China University of Technology (Natural Science), vol. 37, no. 11, pp. 22–26, 2009.
[20] A. Subasi, M. Yilmaz, and H. R. Ozcalik, "Classification of EMG signals using wavelet neural network," Journal of Neuroscience Methods, vol. 156, no. 1-2, pp. 360–367, 2006.
[21] D. Sahoo and G. S. Dulikravich, "Evolutionary wavelet neural network for large scale function estimation in optimization," in Proceedings of the 11th AIAA/ISSMO Multidisciplinary Analysis and Optimization Conference, pp. 658–668, Portsmouth, Va, USA, September 2006.
[22] J. R. Zhang, J. Zhang, T. M. Lok, and M. R. Lyu, "A hybrid particle swarm optimization-back-propagation algorithm for feedforward neural network training," Applied Mathematics and Computation, vol. 185, no. 2, pp. 1026–1037, 2007.
[23] W. Zhang, Y. B. Shi, L. F. Zhou, and T. Lu, "Wavelet neural network classifier based on improved PSO algorithm," Chinese Journal of Scientific Instrument, vol. 31, no. 10, pp. 2203–2209, 2010.
[24] A. Sengur, I. Turkoglu, and M. C. Ince, "Wavelet packet neural networks for texture classification," Expert Systems with Applications, vol. 32, no. 2, pp. 527–533, 2007.
[25] Y. F. Hassan, "Rough sets for adapting wavelet neural networks as a new classifier system," Applied Intelligence, vol. 35, no. 2, pp. 260–268, 2011.
[26] W. Ziarko, "The discovery, analysis, and representation of data dependencies in databases," in Knowledge Discovery in Databases, pp. 213–228, 1993.
[27] Y. Hassan and E. Tazaki, "Emergent rough set data analysis," in Transactions on Rough Sets II: Rough Sets and Fuzzy Sets, J. F. Peters, Ed., pp. 343–361, Springer, Berlin, Germany, 2005.
[28] D. B. Chen and C. X. Zhao, "Particle swarm optimization with adaptive population size and its application," Applied Soft Computing Journal, vol. 9, no. 1, pp. 39–48, 2009.
[29] K. Thangavel and A. Pethalakshmi, "Dimensionality reduction based on rough set theory: a review," Applied Soft Computing Journal, vol. 9, no. 1, pp. 1–12, 2009.
[30] J. M. Wei, "Rough set based approach to selection of node," International Journal of Computational Cognition, vol. 1, no. 2, pp. 25–40, 2003.
[31] E. Xu, L. S. Shao, Y. Z. Zhang, F. Yang, and H. Li, "Method of rule extraction based on rough set theory," Computer Science, vol. 38, pp. 232–235, 2011.
[32] L. Y. Chang, G. Y. Wang, and Y. Wu, "An approach for attribute reduction and rule generation based on rough set theory," Journal of Software, vol. 10, pp. 1206–1211, 1999.
6 Journal of Applied Mathematics
Figure 3: The flowchart of the proposed approach (construct the decision table DT = (U, C ∪ D) from the dataset; discretize the decision table by PSO, iterating until n > Nmax or γi ≥ γmin and outputting Gbest; perform attribute reduction by calculating CORE_D(C) and outputting RED_D(C); extract rules; determine s, l, m and build the RS-BPNN classification system).
to the cutting status of the shearer rocker and could be expressed by the ratio of actual cutting load to rated load.
Therefore, in the decision table DT, the condition attribute set and the decision attribute set could be expressed as C = {C1, C2, C3, C4, C5, C6, C7, C8} and D = {d} (d denoted the shearer running status). The values of the decision attribute were discretized based on the following assumptions: if the ratio of actual cutting load to rated load was greater than 1.6, then d = 4; if the ratio was in the interval (1.2, 1.6], then d = 3; if it was in (0.8, 1.2], then d = 2; and if it was less than 0.8, then d = 1. The discretized values of the condition attributes could be determined through the algorithm described in Section 4.2.
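The decision-attribute mapping above amounts to a simple piecewise rule; a minimal sketch in Python (the function name is illustrative, and the behaviour at exactly 0.8 is an assumption, since the paper leaves that boundary open):

```python
def decision_class(load_ratio):
    """Map the ratio of actual cutting load to rated load to class d."""
    if load_ratio > 1.6:
        return 4
    if load_ratio > 1.2:   # (1.2, 1.6]
        return 3
    if load_ratio > 0.8:   # (0.8, 1.2]
        return 2
    return 1               # below 0.8
```

For instance, sample 1 in Table 1 has a load ratio of 0.78 and thus falls into class 1.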
The condition attributes to be discretized were C1–C8, so μ = 8. The other parameters of PSO were initialized as follows: M = 100, Nmax = 200, γmin = 0.9, Vmax = 0.6, c1 = c2 = 1.6, r was a random number from [0, 1], ωmax = 0.9, and ωmin = 0.4. As the number of breakpoints h had a great influence on the dependency degree γC(D) of the discretized decision table, the dependency degree and the simulation time were compared while h was assigned various values. The comparison results were shown in Figure 4.
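The dependency degree γC(D) used as the PSO fitness can be computed from the equivalence classes induced by the condition attributes; a sketch under the standard rough-set definition (the function name and table layout are illustrative, not the paper's code):

```python
from collections import defaultdict

def dependency_degree(table, cond_idx, dec_idx):
    """gamma_C(D): fraction of objects in the positive region, i.e. objects
    whose condition-attribute equivalence class maps to a single decision."""
    classes = defaultdict(set)
    for row in table:
        key = tuple(row[i] for i in cond_idx)
        classes[key].add(row[dec_idx])
    # An equivalence class is consistent if all its objects share one decision.
    consistent = {k for k, decs in classes.items() if len(decs) == 1}
    pos = sum(1 for row in table
              if tuple(row[i] for i in cond_idx) in consistent)
    return pos / len(table)
```

A fully consistent table yields γC(D) = 1; the value 0.98 reported below means a small fraction of objects remained inconsistent after discretization.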
From Figure 4 it was observed that the dependency degree and the simulation time both showed an ascending tendency as the breakpoint number h increased. The dependency
Figure 4: The change curves of dependency degree and simulation time versus the breakpoint number h.
Figure 5: Attribute reduction.
degree was not significantly increased, while the simulation time was obviously increased, when h ≥ 2. So h was set equal to 2, giving γC(D) = 0.98. The breakpoints of each attribute value were shown in Table 2. The breakpoints were applied to discretizing the parameter values in Table 1: if the value was in the interval (0, Br1], the discretized attribute value was labeled as 1; if the value was in the interval (Br1, Br2), the discretized attribute value was labeled as 2; otherwise it was labeled as 3. The discretized decision table was shown in Table 3.
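The three-way labeling against the breakpoints Br1 and Br2 can be sketched as follows (assigning values at or above Br2 to label 3, consistent with the open interval (Br1, Br2) for label 2):

```python
def discretize(value, br1, br2):
    """Label a continuous attribute value: 1 on (0, Br1], 2 on (Br1, Br2), else 3."""
    if value <= br1:
        return 1
    if value < br2:
        return 2
    return 3
```

With the C1 breakpoints of Table 2 (Br1 = 48.56, Br2 = 65.25), sample 1's cutting motor current of 45.12 A is labeled 1, matching row 1 of Table 3.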
5.2. Relative Reduct and Rules Extraction. Through the attribute reduction algorithm presented in Section 4.3, the function "CORE(C, D)" was invoked to obtain the relative core of the decision table, CORE_D(C) = {C1, C8}, as shown in Figure 5. The minimum relative reduct of the decision table, RED_D(C) = {C1, C3, C5, C6, C8}, was then determined by invoking the function "Reduct".
In the module "Rough Set for Attribution Reduction", a new decision table of 200 × 6 was established from the attributes {C1, C3, C5, C6, C8} and the decision attribute d. The function "Refresh(DT)" was called to reject the repeated objects in this new decision table, and a reduced decision table containing 79 objects was finally obtained, as shown in Figure 6. Subsequently, the decision rules were extracted
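The object rejection performed by "Refresh(DT)" is essentially order-preserving deduplication of identical rows; a hypothetical sketch of that step:

```python
def refresh(decision_table):
    """Remove repeated objects, keeping the first occurrence of each row."""
    seen, reduced = set(), []
    for row in decision_table:
        key = tuple(row)       # rows are lists of discrete attribute labels
        if key not in seen:
            seen.add(key)
            reduced.append(row)
    return reduced
```

Applied to the 200-object discretized table over the five reduct attributes plus d, this kind of step would shrink it to the 79 distinct objects reported above.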
Figure 6: Rules extraction.
Figure 7: The structure of RS-BPNN classifier (5 input nodes, two hidden layers of 5 neurons each, and 1 output node).
through the function "Extract rules", which was based on the value reduction algorithm, and 25 rules were obtained, as shown in Figure 6. The extracted rules could reveal the potential rules (knowledge) in the dataset. For example, the first rule "123∗11" in Figure 6 illustrated that if the cutting motor current C1 ∈ (0, 48.56], the traction motor current C3 ∈ (82.37, 95.42), the traction speed C5 ∈ [3.24, +∞), the tilt angle of the shearer body C6 was an arbitrary value, and the vibration frequency of the rocker transmission gearbox C8 ∈ (0, 651.28], then the running status class was 1, which was in accordance with engineering practice. Moreover, the RS-BPNN classifier in fact diagnosed the status of objects according to these potential rules (knowledge), and the structure of the network could be constructed reasonably based on the number of rules so as to enhance the classification accuracy.
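A rule string such as "123∗11" pairs a condition pattern over the five reduct attributes with a decision class; matching a discretized sample against such patterns can be sketched as follows (the function names and the fallback behaviour are assumptions for illustration):

```python
def matches(pattern, sample):
    """True if every attribute label agrees with the pattern; '*' is a wildcard."""
    return all(p == '*' or p == str(v) for p, v in zip(pattern, sample))

def classify_by_rules(rules, sample):
    """rules: mapping from condition pattern (e.g. '123*1') to decision class."""
    for pattern, decision in rules.items():
        if matches(pattern, sample):
            return decision
    return None  # no rule fires; a trained network would handle such samples
```

Here the first extracted rule would be stored as the entry '123*1' → 1, the decision digit being split off the tail of the rule string.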
5.3. The Structure and Testing of RS-BPNN Classifier. Because 25 rules (knowledge) were obtained from the dataset, the RS-BPNN was constructed with double hidden layers whose numbers of neurons were 5 × 5, which could completely contain these rules (knowledge). The number of input layer nodes was s = 5 and the number of output layer nodes was m = 1. The other parameters of the network were determined as follows: ζmax = 0.9, ζmin = 0.4, tmax = 500; the initial connection weights were assigned small random values. The structure of the RS-BPNN classifier was shown in Figure 7.
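The 5-5-5-1 topology of Figure 7 can be sketched as a plain feed-forward pass; the sigmoid activation and the small random initialization are assumptions, since the paper fixes only the layer sizes:

```python
import math
import random

random.seed(0)
sizes = [5, 5, 5, 1]   # 5 inputs, two hidden layers of 5 neurons, 1 output

# Small random initial weights; the exact initialization scheme is an assumption.
weights = [[[random.uniform(-0.1, 0.1) for _ in range(n_out)]
            for _ in range(n_in)]
           for n_in, n_out in zip(sizes[:-1], sizes[1:])]
biases = [[0.0] * n for n in sizes[1:]]

def forward(x):
    """Forward pass through the 5-5-5-1 network with sigmoid activations."""
    a = list(x)
    for w, b in zip(weights, biases):
        a = [1.0 / (1.0 + math.exp(-(sum(ai * w[i][j] for i, ai in enumerate(a)) + b[j])))
             for j in range(len(b))]
    return a[0] if len(a) == 1 else a

y = forward([1, 2, 3, 2, 1])   # one discretized sample -> scalar output in (0, 1)
```

Training by back-propagation would then adjust the weights toward the desired decision values.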
The reduced decision table, as the training set, was presented to the network several times. The testing set, composed of 40 samples extracted randomly from the information database, was not seen by the network during the training phase; it was only used for testing the generalization of the neural network after training, as shown in Table 4.
Table 1: The running status of shearer and its corresponding parameters.

Samples  C1 (A)  C2 (°C)  C3 (A)   C4 (°C)  C5 (m/min)  C6 (°)  C7 (°)  C8 (Hz)   Running status
1        45.12   78.74    65.46    88.16    3.51        2.52    45.12   628.85    0.78
2        48.19   75.22    70.78    84.28    3.42        3.62    53.22   683.46    0.79
3        50.25   80.19    85.14    91.25    2.88        5.36    40.22   705.76    0.82
4        45.26   82.43    62.45    84.15    3.12        2.85    49.36   642.14    0.76
5        64.28   77.16    90.45    91.41    2.56        2.26    58.55   823.29    1.22
6        46.22   85.24    68.55    80.46    3.26        2.86    58.46   711.24    0.83
7        72.56   85.12    102.42   102.36   2.76        10.26   38.41   881.25    1.17
8        62.33   74.24    92.46    84.46    3.56        8.26    60.27   821.46    1.12
9        44.23   89.19    63.15    85.78    3.82        3.22    46.25   578.26    0.75
10       89.23   81.24    120.36   86.14    2.01        1.05    50.36   1035.48   1.63
...
198      49.22   75.24    71.22    80.25    3.46        4.26    45.26   542.16    0.72
199      68.65   87.16    118.64   90.46    4.66        4.36    49.35   845.29    1.36
200      50.22   78.15    90.46    95.44    3.22        7.22    44.21   682.64    1.11
Table 2: The corresponding breakpoints of each attribute value at h = 2.

Breakpoints  C1 (A)  C2 (°C)  C3 (A)  C4 (°C)  C5 (m/min)  C6 (°)  C7 (°)  C8 (Hz)
Br1          48.56   81.49    82.37   84.79    2.41        4.16    47.35   651.28
Br2          65.25   85.78    95.42   90.25    3.24        7.38    54.25   847.46
Table 3: The discretized decision table.

U     C1  C2  C3  C4  C5  C6  C7  C8    d
1     1   1   1   2   3   1   1   1     1
2     1   1   1   1   3   1   2   2     1
3     2   1   2   3   2   2   1   2     2
4     1   2   1   1   2   1   2   1     1
5     3   1   2   3   2   1   3   2     3
6     1   2   1   1   3   1   3   2     2
7     3   2   3   3   2   3   1   3     2
8     2   1   2   1   3   3   3   2     2
9     1   3   1   2   3   1   1   1     2
10    3   1   3   2   1   1   2   3     4
...
198   2   1   1   1   3   2   1   1     1
199   3   3   3   3   3   2   2   2     2
200   2   1   2   3   2   2   1   2     2
After the training phase, the RS-BPNN classifier was obtained. In order to test the performance of this classifier conveniently, a testing interface was designed, as shown in Figure 8. The testing set was imported with the "Import samples" button and was processed with the "Discretize" and "Reduce attribution" functions. The prediction results of the RS-BPNN classifier were output through the "Classify" function in Figure 8. The contrast of prediction class and actual class was clearly shown in Figure 9.
Figure 8: The testing interface of RS-BPNN classifier.

When the classification accuracy of RS-BPNN was computed in Figure 8, the output of the classifier could not be exactly equal to the desired output. Therefore, the output was allowed to differ from the desired value: if the difference between the classifier output and the required decision value was less than some preset value, the result was regarded as correct. This tolerance margin was decided during simulation evaluation and was finally set to 5% of the desired value. As seen from Figures 8 and 9, the classification accuracy was 90.00% and the average classification error was 2.02%. The present model showed higher accuracy and lower error for the classification of shearer running status. The testing results showed that the proposed classifier was satisfactory and could be used in engineering applications in the future.
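The 5% tolerance criterion and the average classification error can be sketched as follows (the function names are illustrative):

```python
def classification_accuracy(outputs, targets, tolerance=0.05):
    """Percentage of outputs within +/- tolerance * |target| of the desired value."""
    correct = sum(1 for o, t in zip(outputs, targets)
                  if abs(o - t) <= tolerance * abs(t))
    return 100.0 * correct / len(targets)

def mean_relative_error(outputs, targets):
    """Average relative deviation from the desired values, in percent."""
    return 100.0 * sum(abs(o - t) / abs(t)
                       for o, t in zip(outputs, targets)) / len(targets)
```

Under this criterion, 36 of the 40 testing samples falling within the margin gives the reported 90.00% accuracy.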
5.4. Discussion. To evaluate the classification capabilities of different types of neural network models (WNN, short for wavelet neural network; RS-WNN, short for rough sets coupled with wavelet neural network; BP-NN, short for back-propagation neural network; and RS-BPNN, short for rough
Table 4: The testing samples for the classifier.

Samples  C1 (A)  C2 (°C)  C3 (A)   C4 (°C)  C5 (m/min)  C6 (°)  C7 (°)  C8 (Hz)   d
1        75.24   71.38    128.45   85.46    2.00        1.02    52.45   1038.44   1.65
2        89.46   78.25    110.46   80.57    2.86        2.15    49.21   992.46    1.78
3        55.46   80.19    81.14    90.25    2.89        3.36    56.22   810.76    1.08
4        75.26   82.43    62.45    84.15    3.12        3.55    52.46   900.47    1.39
5        56.28   78.16    100.45   90.41    2.56        8.26    58.55   523.29    0.68
6        49.25   86.24    68.55    80.46    3.26        3.06    60.46   711.24    0.72
7        60.56   80.12    112.42   92.36    2.76        7.48    48.41   871.25    0.92
8        70.33   70.24    96.46    80.46    3.76        3.26    60.27   921.46    1.22
9        74.23   85.19    113.15   79.78    2.32        2.25    56.25   1000.46   1.66
10       89.23   76.24    120.36   85.14    3.31        4.25    55.46   1035.48   1.63
...
38       58.46   74.24    90.22    80.25    2.46        3.26    55.26   849.16    1.35
39       65.02   87.16    118.64   90.46    5.06        9.24    48.26   1025.29   1.75
40       50.22   85.15    81.25    89.44    3.34        3.46    44.25   672.64    0.72
Table 5: Classification performance of the four models of neural networks.

The models  Training accuracy (%)  Testing accuracy (%)  Classification error (%)  Classification time (s)
WNN         90.00                  82.50                 3.56                      23.22
RS-WNN      92.50                  87.50                 2.66                      18.48
BP-NN       90.00                  85.00                 2.86                      22.26
RS-BPNN     95.00                  90.00                 2.02                      18.33
Table 6: Classification results of the four methods.

The methods  Classification accuracy (%)
             Class I   Class II   Class III
WNN          93.33     86.67      80.00
RS-WNN       100       93.33      86.67
BP-NN        93.33     80.00      86.67
RS-BPNN      100       93.33      100
sets coupled with back-propagation neural network), their best architectures and effective training parameters had to be found. In order to obtain the best model, both inputs and outputs should be normalized. The number of nodes in the hidden layer of WNN and RS-WNN was equal to the number of wavelet bases. If this number was too small, WNN/RS-WNN might not reflect the complex functional relationship between the input data and the output value. On the contrary, a large number might create such a complex network that a very large output error might be caused by overfitting of the training sample set.
Therefore, a search mechanism was needed to find the optimal number of hidden-layer nodes for the WNN and RS-WNN models. In this study, various numbers of nodes in the hidden layer had been checked to find the best one; WNN and RS-WNN yielded the best results when the number of hidden nodes was 8.
In this subsection, WNN, RS-WNN, BP-NN, and RS-BPNN were applied to solve the problem of the above simulation example. The configurations of the simulation environment for the four algorithms were uniform and in common with the above simulation example. Initially, for each network, the connection weights were assigned small random values. In order to avoid random error, the training input set was presented to the networks 100 times and the average values were calculated. The training accuracy, testing accuracy, classification error, and classification time of the four algorithms were shown in Table 5.
From the table, it was observed that the proposed model had better classification capability and better performance than the other neural network models in predicting the nonlinear, dynamic, and chaotic behaviors in the dataset, and the proposed model was proved to outperform the others. Furthermore, the comparison results suggested that the proposed model represents a good new method for classification and decision making, and the new method can be treated as a promising tool for extracting rules from datasets in industrial fields.
In order to illustrate the superiority of the proposed method, the Iris data set [23], a benchmark data set from the UCI database, was used to verify the classifiers based on the above four types of neural networks, and the compared results were shown in Table 6. The sample data had been reduced firstly based on the attribute reduction algorithm and then used to train the neural networks, so the networks coupled with rough sets (RS-WNN and RS-BPNN) could acquire better classification accuracy than the single networks (WNN and BP-NN). The rules were extracted through the value reduction algorithm to construct the RS-BPNN structure, so the classification accuracy of the classifier based on RS-BPNN was higher than that of the RS-WNN classifier.
Figure 9: The contrast of prediction class and actual class: (a) actual and predicted running status classes for the 40 testing samples; (b) classification error (%).
6 Conclusions
This paper proposed a novel classifier model based on BP neural network and rough sets theory to be applied in nonlinear systems. The decision table was constructed and discretized reasonably through the PSO algorithm. The decision rules were extracted by the use of the value reduction algorithm, which provided the basis for the network structure. In order to verify its feasibility and superiority, the proposed approach was applied to a classification problem of an industrial example. The results of the comparison simulations showed that the proposed approach could generate more accurate classifications at less classification time than other neural network approaches and the rough sets based approach. The classification performance of the proposed model demonstrated that this method could be extended to other types of classification problems, such as gear fault identification, lithology recognition, and coal-rock interface recognition.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgments
The supports of the National High Technology Research and Development Program of China (no. 2013AA06A411), the National Natural Science Foundation of China (no. 51005231), the Xuyi Mine Equipment and Materials R&D Center Innovation Fund Project (no. CXJJ201306), and the Jiangsu Province National Engineering Research Center Cultivation Base for Coal Intelligent Mining Equipment in carrying out this research are gratefully acknowledged.
References
[1] W. Zhu and F. Wang, "On three types of covering-based rough sets," IEEE Transactions on Knowledge and Data Engineering, vol. 19, no. 8, pp. 1131–1143, 2007.
[2] Z. Huang, Z. Yu, and Y. Zheng, "Simple method for predicting drag characteristics of the wells turbine," China Ocean Engineering, vol. 20, no. 3, pp. 473–482, 2006.
[3] S. G. Sklavos and A. K. Moschovakis, "Neural network simulations of the primate oculomotor system IV: a distributed bilateral stochastic model of the neural integrator of the vertical saccadic system," Biological Cybernetics, vol. 86, no. 2, pp. 97–109, 2002.
[4] I. G. Belkov, I. N. Malyshev, and S. M. Nikulin, "Creating a model of passive electronic components using a neural network approach," Automation and Remote Control, vol. 74, no. 2, pp. 275–281, 2013.
[5] J. S. Chen, "Neural network-based modelling and thermally-induced spindle errors," International Journal of Advanced Manufacturing Technology, vol. 12, no. 4, pp. 303–308, 1996.
[6] A. D. Cartwright, I. S. Curthoys, and D. P. D. Gilchrist, "Testable predictions from realistic neural network simulations of vestibular compensation: integrating the behavioural and physiological data," Biological Cybernetics, vol. 81, no. 1, pp. 73–87, 1999.
[7] B. Zou, X. Y. Liao, Y. N. Zeng, and L. X. Huang, "An improved BP neural network based on evaluating and forecasting model of water quality in Second Songhua River of China," Chinese Journal of Geochemistry, vol. 25, no. 1, pp. 167–170, 2006.
[8] Y. X. Shao, Q. Chen, and D. M. Zhang, "The application of improved BP neural network algorithm in lithology recognition," in Advances in Computation and Intelligence, vol. 5370 of Lecture Notes in Computer Science, pp. 342–349, 2008.
[9] W. H. Zhou and S. Q. Xiong, "Optimization of BP neural network classifier using genetic algorithm," in Intelligence Computation and Evolutionary Computation, vol. 180 of Advances in Intelligent Systems and Computing, pp. 599–605, 2013.
[10] W. Duch, R. Setiono, and J. M. Zurada, "Computational intelligence methods for rule-based data understanding," Proceedings of the IEEE, vol. 92, no. 5, pp. 771–805, 2004.
[11] Z. Pawlak, "Rough sets," International Journal of Computer and Information Sciences, vol. 11, no. 5, pp. 341–356, 1982.
[12] Y. Wu and S. J. Wang, "Method for improving classification performance of neural network based on fuzzy input and network inversion," Journal of Infrared and Millimeter Waves, vol. 24, no. 1, pp. 15–18, 2005.
[13] Z. Pawlak, "Some issues on rough sets," in Transactions on Rough Sets I, vol. 3100 of Lecture Notes in Computer Science, pp. 1–58, 2004.
[14] Z. Pawlak and A. Skowron, "Rough sets: some extensions," Information Sciences, vol. 177, no. 1, pp. 28–40, 2007.
[15] Y. Hassan, E. Tazaki, S. Egawa, and K. Suyama, "Decision making using hybrid rough sets and neural networks," International Journal of Neural Systems, vol. 12, no. 6, pp. 435–446, 2002.
[16] V. L. Berardi, B. E. Patuwo, and M. Y. Hu, "A principled approach for building and evaluating neural network classification models," Decision Support Systems, vol. 38, no. 2, pp. 233–246, 2004.
[17] H. Ishibuchi, K. Kwon, and H. Tanaka, "A learning algorithm of fuzzy neural networks with triangular fuzzy weights," Fuzzy Sets and Systems, vol. 71, no. 3, pp. 277–293, 1995.
[18] Y. Hayashi, J. J. Buckley, and E. Czogala, "Fuzzy neural network with fuzzy signals and weights," International Journal of Intelligent Systems, vol. 8, no. 4, pp. 527–537, 1993.
[19] C. C. Hu, D. Y. Wang, K. J. Wang, and J. Cable, "Seasonal roadway classification based on neural network," Journal of South China University of Technology (Natural Science), vol. 37, no. 11, pp. 22–26, 2009.
[20] A. Subasi, M. Yilmaz, and H. R. Ozcalik, "Classification of EMG signals using wavelet neural network," Journal of Neuroscience Methods, vol. 156, no. 1-2, pp. 360–367, 2006.
[21] D. Sahoo and G. S. Dulikravich, "Evolutionary wavelet neural network for large scale function estimation in optimization," in Proceedings of the 11th AIAA/ISSMO Multidisciplinary Analysis and Optimization Conference, pp. 658–668, Portsmouth, Va, USA, September 2006.
[22] J. R. Zhang, J. Zhang, T. M. Lok, and M. R. Lyu, "A hybrid particle swarm optimization-back-propagation algorithm for feedforward neural network training," Applied Mathematics and Computation, vol. 185, no. 2, pp. 1026–1037, 2007.
[23] W. Zhang, Y. B. Shi, L. F. Zhou, and T. Lu, "Wavelet neural network classifier based on improved PSO algorithm," Chinese Journal of Scientific Instrument, vol. 31, no. 10, pp. 2203–2209, 2010.
[24] A. Sengur, I. Turkoglu, and M. C. Ince, "Wavelet packet neural networks for texture classification," Expert Systems with Applications, vol. 32, no. 2, pp. 527–533, 2007.
[25] Y. F. Hassan, "Rough sets for adapting wavelet neural networks as a new classifier system," Applied Intelligence, vol. 35, no. 2, pp. 260–268, 2011.
[26] W. Ziarko, "The discovery, analysis, and representation of data dependencies in databases," in Knowledge Discovery in Databases, pp. 213–228, 1993.
[27] Y. Hassan and E. Tazaki, "Emergent rough set data analysis," in Transactions on Rough Sets II: Rough Sets and Fuzzy Sets, J. F. Peters, Ed., pp. 343–361, Springer, Berlin, Germany, 2005.
[28] D. B. Chen and C. X. Zhao, "Particle swarm optimization with adaptive population size and its application," Applied Soft Computing Journal, vol. 9, no. 1, pp. 39–48, 2009.
[29] K. Thangavel and A. Pethalakshmi, "Dimensionality reduction based on rough set theory: a review," Applied Soft Computing Journal, vol. 9, no. 1, pp. 1–12, 2009.
[30] J. M. Wei, "Rough set based approach to selection of node," International Journal of Computational Cognition, vol. 1, no. 2, pp. 25–40, 2003.
[31] E. Xu, L. S. Shao, Y. Z. Zhang, F. Yang, and H. Li, "Method of rule extraction based on rough set theory," Computer Science, vol. 38, pp. 232–235, 2011.
[32] L. Y. Chang, G. Y. Wang, and Y. Wu, "An approach for attribute reduction and rule generation based on rough set theory," Journal of Software, vol. 10, pp. 1206–1211, 1999.
Journal of Applied Mathematics 7
0
20
40
60
80
100
120
140
160
180
200
082
084
086
088
09
092
094
096
098
1
1 2 3 4 5
Dependency degreeSimulation time
Dep
ende
ncy
degr
ee
Sim
ulat
ion
time (
s)
Parameter h
Figure 4 The change curves of dependency degree and simulationtime
Figure 5 Attribution reduction
degree was not significantly increased and the simulationtime was obviously increased when ℎ ge 2 So ℎ was consid-ered being equal to 2 and 120574119862(119863) = 098 The breakpoints ofeach attribute value were shown in Table 2 The breakpointswere applied to discretizing the parameter values in Table 1 Ifthe valuewas in interval of (0Br1] the discretized attributionvalue was labeled as 1 if the value was in interval of (Br1Br2)the discretized attribution value was labeled as 2 otherwiseit was labeled as 3 The discretized decision table was shownin Table 3
52 Relative Reduct and Rules Extraction Through the attri-bution reduction algorithm presented in Section 43 thefunction ldquoCORE(119862119863)rdquo was invoked to obtain the relativecore of decision table and CORE119863(119862) = 1198621 1198628 shown inFigure 5 As the minimum relative reduct of decision tableRED119863(119862) = 1198621 1198623 1198625 1198626 1198628 was determined by invokingfunction ldquoReductrdquo
In the module of ldquoRough Set for Attribution Reductionrdquoa new decision table of 200 times 6 was established by theattributions 1198621 1198623 1198625 1198626 1198628 and decision attribution 119889The function ldquoRefresh (DT)rdquo was called to reject the repeatedobjects in this new decision table and a reduced decisiontable was provided finally containing 79 objects as shownin Figure 6 Subsequently the decision rules were extracted
Figure 6 Rules extraction
InputHidden layer 1 Hidden layer 2 Output layer
Output
5
5 5 1
1
120596
b
120596
b
120596
b
+++
Figure 7 The structure of RS-BPNN classifier
through function ldquoExtract rulesrdquo which was based on thevalue reduction algorithm and 25 rules were obtained asshown in Figure 6The extracted rules could reveal the poten-tial rules (knowledge) in dataset for example the first ruleldquo123lowast11rdquo in Figure 6 illustrated that if cutting motor current1198621 isin (0 4856] traction motor current 1198623 isin (8237 9542)traction speed 1198625 isin [324 +infin) tilt angle of shearer body1198626 was arbitrary value and vibration frequency of rockertransmission gearbox 1198628 isin (0 65128] then the runningstatus class was 1 which was in accordance with engineeringpractice situation Moreover RS-BPNN classifier in fact wasto diagnose the status of objects according to these potentialrules (knowledge) and the structure of network could beconstructed reasonably based on the number of rules so asto enhance the classification accuracy
53The Structure and Testing of RS-BPNNClassifier Becausethe dataset obtained 25 rules (knowledge) the RS-BPNNshould be constructed with double hidden layers and thenumber of neurons was 5times5 which must completely containthese rules (knowledge) The number of input layer notes119904 = 5 the number of output layer notes 119898 = 1 Otherparameters of network were determined as follows 120577max =
09 120577min = 04 119905max = 500 the initial connection weightswere assigned somewhat random valuesThe structure of RS-BPNN classifier was shown in Figure 7
The reduced decision table as the training set waspresented to the network several times The testing setcomposed of 40 samples extracted from the informationdatabase randomly was not seen by the network duringthe training phase and it was only used for testing thegeneralization of neural network after it was trained as shownin Table 4
8 Journal of Applied Mathematics
Table 1 The running status of shearer and its corresponding parameters
Samples 1198621 (A) 1198622 (∘C) 1198623 (A) 1198624 (
∘C) 1198625 (mmin) 1198626 (∘) 1198627 (
∘) 1198628 (Hz) Running status1 4512 7874 6546 8816 351 252 4512 62885 0782 4819 7522 7078 8428 342 362 5322 68346 0793 5025 8019 8514 9125 288 536 4022 70576 0824 4526 8243 6245 8415 312 285 4936 64214 0765 6428 7716 9045 9141 256 226 5855 82329 1226 4622 8524 6855 8046 326 286 5846 71124 0837 7256 8512 10242 10236 276 1026 3841 88125 1178 6233 7424 9246 8446 356 826 6027 82146 1129 4423 8919 6315 8578 382 322 4625 57826 07510 8923 8124 12036 8614 201 105 5036 103548 163
198 4922 7524 7122 8025 346 426 4526 54216 072199 6865 8716 11864 9046 466 436 4935 84529 136200 5022 7815 9046 9544 322 722 4421 68264 111
Table 2 The corresponding breakpoints of each attribute value at ℎ = 2
Breakpoints 1198621 (A) 1198622 (∘C) 1198623 (A) 1198624 (
∘C) 1198625 (mmin) 1198626 (∘) 1198627 (
∘) 1198628 (Hz)Br1 4856 8149 8237 8479 241 416 4735 65128Br2 6525 8578 9542 9025 324 738 5425 84746
Table 3 The discretized decision table
119880
119862 119863
1198621 1198622 1198623 1198624 1198625 1198626 1198627 1198628 119889
1 1 1 1 2 3 1 1 1 12 1 1 1 1 3 1 2 2 13 2 1 2 3 2 2 1 2 24 1 2 1 1 2 1 2 1 15 3 1 2 3 2 1 3 2 36 1 2 1 1 3 1 3 2 27 3 2 3 3 2 3 1 3 28 2 1 2 1 3 3 3 2 29 1 3 1 2 3 1 1 1 210 3 1 3 2 1 1 2 3 4
198 2 1 1 1 3 2 1 1 1199 3 3 3 3 3 2 2 2 2200 2 1 2 3 2 2 1 2 2
After the training phase the RS-BPNN classifier wasobtained In order to test the performance of this classifierexpediently a testing interface was designed as shown inFigure 8 The testing set was imported with the ldquoImportsamplesrdquo button and was processed with the functions ofldquoDiscretizerdquo and ldquoReduce attributionrdquo The prediction resultsof RS-BPNN classifier were output through the function ofldquoClassifyrdquo in Figure 8 The contrast of prediction class andactual class was shown brightly in Figure 9
When the classification accuracy of RS-BPNN was com-puted in Figure 8 the output of classifierwas impossibly equal
Figure 8 The testing interface of RS-BPNN classifier
to desired output Therefore it allowed the output to differfrom the desired value If the difference between classifieroutput and required value of decision was less than somepresent value it was regarded as a correct one This tolerancemargin was decided during simulation evaluation and finallyit was 5 of the desired value Seen from Figures 8 and9 the classification accuracy was 9000 and the averageclassification error was 202 The present model showedhigher accuracy and lower error for the classification ofshearer running status The testing results showed that theproposed classifier was proved to be satisfactory and couldbe used in engineering application in the future
54 Discussion To evaluate the classification capabilitiesof different types of neural network models (WNN shortfor wavelet neural network RS-WNN short for rough setscouplingwithwavelet neural network BP-NN short for back-propagation neural network and RS-BPNN short for rough
Journal of Applied Mathematics 9
Table 4 The testing samples for the classifier
Samples 1198621 (A) 1198622 (∘C) 1198623 (A) 1198624 (
∘C) 1198625 (mmin) 1198626 (∘) 1198627 (
∘) 1198628 (Hz) 119889
1 7524 7138 12845 8546 200 102 5245 103844 1652 8946 7825 11046 8057 286 215 4921 99246 1783 5546 8019 8114 9025 289 336 5622 81076 1084 7526 8243 6245 8415 312 355 5246 90047 1395 5628 7816 10045 9041 256 826 5855 52329 0686 4925 8624 6855 8046 326 306 6046 71124 0727 6056 8012 11242 9236 276 748 4841 87125 0928 7033 7024 9646 8046 376 326 6027 92146 1229 7423 8519 11315 7978 232 225 5625 100046 16610 8923 7624 12036 8514 331 425 5546 103548 163
38 5846 7424 9022 8025 246 326 5526 84916 13539 6502 8716 11864 9046 506 924 4826 102529 17540 5022 8515 8125 8944 334 346 4425 67264 072
Table 5 Classification performance of the four models of neural networks
The models Training accuracy () Testing accuracy () Classification error () Classification time (s)WNN 9000 8250 356 2322RS-WNN 9250 8750 266 1848BP-NN 9000 8500 286 2226RS-BPNN 9500 9000 202 1833
Table 6 Classification results of the four methods
The methods Classification accuracy ()Class I Class II Class III
WNN 9333 8667 8000RS-WNN 100 9333 8667BP-NN 9333 8000 8667RS-BPNN 100 9333 100
sets coupling with back-propagation neural network) theirbest architecture and effective training parameters should befound In order to obtain the bestmodel both inputs and out-puts should be normalized The number of nodes in the hid-den layer of WNN and RS-WNNwas equal to that of waveletbase If the number was too small WNNRS-WNN may notreflect the complex function relationship between input dataand output value On the contrary a large numbermay createsuch a complex network thatmight lead to a very large outputerror caused by overfitting of the training sample set
Therefore a search mechanism was needed to find theoptimal number of nodes in the hidden layer for WNN andRS-WNNmodels In this study various numbers of nodes inthe hidden layer had been checked to find the best oneWNNand RS-WNN yielded the best results when the number ofhidden nodes was 8
In this subsection WNN RS-WNN BP-NN and RS-BPNN were provided to solve the problem of above simula-tion example The configurations of simulation environmentfor four algorithms were uniform and in common with
above simulation example Initially for each network theconnection weights were assigned somewhat random valuesIn order to avoid the random error the training set ofinput was presented to the networks 100 times and theaverage values were calculatedThe training accuracy testingaccuracy classification error and classification time of fouralgorithms were shown in Table 5
From the table, it can be observed that the proposed model had better classification capability and better performance than the other neural network models in predicting the nonlinear, dynamic, and chaotic behaviors in the dataset, confirming that it outperformed the others. Furthermore, the comparison results suggest that the proposed model represents a good new method for classification and decision making and can be treated as a promising tool for extracting rules from datasets in industrial fields.
In order to illustrate the superiority of the proposed method, the Iris data set [23], a benchmark data set from the UCI database, was used to verify the classifiers based on the above four types of neural networks; the compared results are shown in Table 6. The sample data were first reduced with the attribute reduction algorithm and then used to train the neural networks, so the networks coupled with rough sets (RS-WNN and RS-BPNN) achieved better classification accuracy than the single networks (WNN and BP-NN). The rules were extracted through the value reduction algorithm to construct the RS-BPNN structure, so the classification accuracy of the RS-BPNN classifier was higher than that of the RS-WNN classifier.
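Table 6 reports accuracy separately for each class. A minimal sketch of that per-class computation (with toy labels, not the actual Iris predictions):

```python
from collections import defaultdict

def per_class_accuracy(y_true, y_pred):
    """Fraction of samples in each true class that were classified correctly."""
    correct, total = defaultdict(int), defaultdict(int)
    for truth, pred in zip(y_true, y_pred):
        total[truth] += 1
        correct[truth] += (truth == pred)
    return {cls: correct[cls] / total[cls] for cls in total}

# Toy three-class example mirroring Table 6's layout
acc = per_class_accuracy(["I", "I", "II", "II", "III", "III"],
                         ["I", "II", "II", "II", "III", "III"])
# acc == {"I": 0.5, "II": 1.0, "III": 1.0}
```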
10 Journal of Applied Mathematics
[Figure 9: The contrast of prediction class and actual class. Panel (a) plots the actual class against the prediction class of the running status over the 40 testing samples; panel (b) plots the classification error (%) over the same samples.]
6 Conclusions
This paper proposed a novel classifier model based on BP neural network and rough set theory for application in nonlinear systems. The decision table was constructed and discretized reasonably through the PSO algorithm. The decision rules were extracted by the use of the value reduction algorithm, which provided the basis for the network structure. In order to verify its feasibility and superiority, the proposed approach was applied to a classification problem from an industrial example. The results of the comparison simulations showed that the proposed approach could generate more accurate classifications and cost less classification time than other neural network approaches and the rough-sets-based approach. The classification performance of the proposed model demonstrated that this method could be extended to other types of classification problems, such as gear fault identification, lithology recognition, and coal-rock interface recognition.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgments
The supports of the National High Technology Research and Development Program of China (no. 2013AA06A411), the National Natural Science Foundation of China (no. 51005231), the Xuyi Mine Equipment and Materials R&D Center Innovation Fund Project (no. CXJJ201306), and the Jiangsu Province National Engineering Research Center Cultivation Base for Coal Intelligent Mining Equipment in carrying out this research are gratefully acknowledged.
References
[1] W. Zhu and F. Wang, "On three types of covering-based rough sets," IEEE Transactions on Knowledge and Data Engineering, vol. 19, no. 8, pp. 1131–1143, 2007.
[2] Z. Huang, Z. Yu, and Y. Zheng, "Simple method for predicting drag characteristics of the wells turbine," China Ocean Engineering, vol. 20, no. 3, pp. 473–482, 2006.
[3] S. G. Sklavos and A. K. Moschovakis, "Neural network simulations of the primate oculomotor system IV. A distributed bilateral stochastic model of the neural integrator of the vertical saccadic system," Biological Cybernetics, vol. 86, no. 2, pp. 97–109, 2002.
[4] I. G. Belkov, I. N. Malyshev, and S. M. Nikulin, "Creating a model of passive electronic components using a neural network approach," Automation and Remote Control, vol. 74, no. 2, pp. 275–281, 2013.
[5] J. S. Chen, "Neural network-based modelling and thermally-induced spindle errors," International Journal of Advanced Manufacturing Technology, vol. 12, no. 4, pp. 303–308, 1996.
[6] A. D. Cartwright, I. S. Curthoys, and D. P. D. Gilchrist, "Testable predictions from realistic neural network simulations of vestibular compensation: integrating the behavioural and physiological data," Biological Cybernetics, vol. 81, no. 1, pp. 73–87, 1999.
[7] B. Zou, X. Y. Liao, Y. N. Zeng, and L. X. Huang, "An improved BP neural network based on evaluating and forecasting model of water quality in Second Songhua River of China," Chinese Journal of Geochemistry, vol. 25, no. 1, pp. 167–170, 2006.
[8] Y. X. Shao, Q. Chen, and D. M. Zhang, "The application of improved BP neural network algorithm in lithology recognition," in Advances in Computation and Intelligence, vol. 5370 of Lecture Notes in Computer Science, pp. 342–349, 2008.
[9] W. H. Zhou and S. Q. Xiong, "Optimization of BP neural network classifier using genetic algorithm," in Intelligence Computation and Evolutionary Computation, vol. 180 of Advances in Intelligent Systems and Computing, pp. 599–605, 2013.
[10] W. Duch, R. Setiono, and J. M. Zurada, "Computational intelligence methods for rule-based data understanding," Proceedings of the IEEE, vol. 92, no. 5, pp. 771–805, 2004.
[11] Z. Pawlak, "Rough sets," International Journal of Computer and Information Sciences, vol. 11, no. 5, pp. 341–356, 1982.
[12] Y. Wu and S. J. Wang, "Method for improving classification performance of neural network based on fuzzy input and network inversion," Journal of Infrared and Millimeter Waves, vol. 24, no. 1, pp. 15–18, 2005.
[13] Z. Pawlak, "Some issues on rough sets," in Transactions on Rough Sets I, vol. 3100 of Lecture Notes in Computer Science, pp. 1–58, 2004.
[14] Z. Pawlak and A. Skowron, "Rough sets: some extensions," Information Sciences, vol. 177, no. 1, pp. 28–40, 2007.
[15] Y. Hassan, E. Tazaki, S. Egawa, and K. Suyama, "Decision making using hybrid rough sets and neural networks," International Journal of Neural Systems, vol. 12, no. 6, pp. 435–446, 2002.
[16] V. L. Berardi, B. E. Patuwo, and M. Y. Hu, "A principled approach for building and evaluating neural network classification models," Decision Support Systems, vol. 38, no. 2, pp. 233–246, 2004.
[17] H. Ishibuchi, K. Kwon, and H. Tanaka, "A learning algorithm of fuzzy neural networks with triangular fuzzy weights," Fuzzy Sets and Systems, vol. 71, no. 3, pp. 277–293, 1995.
[18] Y. Hayashi, J. J. Buckley, and E. Czogala, "Fuzzy neural network with fuzzy signals and weights," International Journal of Intelligent Systems, vol. 8, no. 4, pp. 527–537, 1993.
[19] C. C. Hu, D. Y. Wang, K. J. Wang, and J. Cable, "Seasonal roadway classification based on neural network," Journal of South China University of Technology (Natural Science), vol. 37, no. 11, pp. 22–26, 2009.
[20] A. Subasi, M. Yilmaz, and H. R. Ozcalik, "Classification of EMG signals using wavelet neural network," Journal of Neuroscience Methods, vol. 156, no. 1-2, pp. 360–367, 2006.
[21] D. Sahoo and G. S. Dulikravich, "Evolutionary wavelet neural network for large scale function estimation in optimization," in Proceedings of the 11th AIAA/ISSMO Multidisciplinary Analysis and Optimization Conference, pp. 658–668, Portsmouth, Va, USA, September 2006.
[22] J. R. Zhang, J. Zhang, T. M. Lok, and M. R. Lyu, "A hybrid particle swarm optimization-back-propagation algorithm for feedforward neural network training," Applied Mathematics and Computation, vol. 185, no. 2, pp. 1026–1037, 2007.
[23] W. Zhang, Y. B. Shi, L. F. Zhou, and T. Lu, "Wavelet neural network classifier based on improved PSO algorithm," Chinese Journal of Scientific Instrument, vol. 31, no. 10, pp. 2203–2209, 2010.
[24] A. Sengur, I. Turkoglu, and M. C. Ince, "Wavelet packet neural networks for texture classification," Expert Systems with Applications, vol. 32, no. 2, pp. 527–533, 2007.
[25] Y. F. Hassan, "Rough sets for adapting wavelet neural networks as a new classifier system," Applied Intelligence, vol. 35, no. 2, pp. 260–268, 2011.
[26] W. Ziarko, "The discovery, analysis, and representation of data dependencies in databases," in Knowledge Discovery in Databases, pp. 213–228, 1993.
[27] Y. Hassan and E. Tazaki, "Emergent rough set data analysis," in Transactions on Rough Sets II: Rough Sets and Fuzzy Sets, J. F. Peters, Ed., pp. 343–361, Springer, Berlin, Germany, 2005.
[28] D. B. Chen and C. X. Zhao, "Particle swarm optimization with adaptive population size and its application," Applied Soft Computing Journal, vol. 9, no. 1, pp. 39–48, 2009.
[29] K. Thangavel and A. Pethalakshmi, "Dimensionality reduction based on rough set theory: a review," Applied Soft Computing Journal, vol. 9, no. 1, pp. 1–12, 2009.
[30] J. M. Wei, "Rough set based approach to selection of node," International Journal of Computational Cognition, vol. 1, no. 2, pp. 25–40, 2003.
[31] E. Xu, L. S. Shao, Y. Z. Zhang, F. Yang, and H. Li, "Method of rule extraction based on rough set theory," Chongqing Computer Science, vol. 38, pp. 232–235, 2011.
[32] L. Y. Chang, G. Y. Wang, and Y. Wu, "An approach for attribute reduction and rule generation based on rough set theory," Beijing Journal of Software, vol. 10, pp. 1206–1211, 1999.
Journal of Applied Mathematics 9
Table 4 The testing samples for the classifier
Samples 1198621 (A) 1198622 (∘C) 1198623 (A) 1198624 (
∘C) 1198625 (mmin) 1198626 (∘) 1198627 (
∘) 1198628 (Hz) 119889
1 7524 7138 12845 8546 200 102 5245 103844 1652 8946 7825 11046 8057 286 215 4921 99246 1783 5546 8019 8114 9025 289 336 5622 81076 1084 7526 8243 6245 8415 312 355 5246 90047 1395 5628 7816 10045 9041 256 826 5855 52329 0686 4925 8624 6855 8046 326 306 6046 71124 0727 6056 8012 11242 9236 276 748 4841 87125 0928 7033 7024 9646 8046 376 326 6027 92146 1229 7423 8519 11315 7978 232 225 5625 100046 16610 8923 7624 12036 8514 331 425 5546 103548 163
38 5846 7424 9022 8025 246 326 5526 84916 13539 6502 8716 11864 9046 506 924 4826 102529 17540 5022 8515 8125 8944 334 346 4425 67264 072
Table 5 Classification performance of the four models of neural networks
The models Training accuracy () Testing accuracy () Classification error () Classification time (s)WNN 9000 8250 356 2322RS-WNN 9250 8750 266 1848BP-NN 9000 8500 286 2226RS-BPNN 9500 9000 202 1833
Table 6 Classification results of the four methods
The methods Classification accuracy ()Class I Class II Class III
WNN 9333 8667 8000RS-WNN 100 9333 8667BP-NN 9333 8000 8667RS-BPNN 100 9333 100
sets coupling with back-propagation neural network) theirbest architecture and effective training parameters should befound In order to obtain the bestmodel both inputs and out-puts should be normalized The number of nodes in the hid-den layer of WNN and RS-WNNwas equal to that of waveletbase If the number was too small WNNRS-WNN may notreflect the complex function relationship between input dataand output value On the contrary a large numbermay createsuch a complex network thatmight lead to a very large outputerror caused by overfitting of the training sample set
Therefore a search mechanism was needed to find theoptimal number of nodes in the hidden layer for WNN andRS-WNNmodels In this study various numbers of nodes inthe hidden layer had been checked to find the best oneWNNand RS-WNN yielded the best results when the number ofhidden nodes was 8
In this subsection WNN RS-WNN BP-NN and RS-BPNN were provided to solve the problem of above simula-tion example The configurations of simulation environmentfor four algorithms were uniform and in common with
above simulation example Initially for each network theconnection weights were assigned somewhat random valuesIn order to avoid the random error the training set ofinput was presented to the networks 100 times and theaverage values were calculatedThe training accuracy testingaccuracy classification error and classification time of fouralgorithms were shown in Table 5
From the table it was observed that the proposed modelhad a better classification capability and better performancethan other models of neural networks in predicting thenonlinear dynamic and chaotic behaviors in the datasetand the proposed model was proved outperforming othersFurthermore the comparison results were suggesting thatthe proposed model represents a new good method forclassification and decision making and the new method canbe treated as a promising tool for extracting rules from thedataset in industrial fields
In order to illustrate the superiority of proposed methodIris data set [23] as a benchmark data set fromUCI databasewas used to verify the classifiers based on above four typesof neural networks and the compared results were shown inTable 6 The sample data had been reduced firstly based onattribution reduction algorithm and then they were used totrain neural networks so the networks coupling with roughsets (RS-WNN and RS-RBNN) could acquire better classifi-cation accuracy than signal networks (WNNandBPNN)Therules were extracted through value reduction algorithm toconstruct RS-BPNN structure so the classification accuracyof classifier based on RS-BPNN was higher than that of RS-WNN classifier
10 Journal of Applied Mathematics
0 5 10 15 20 25 30 35 40
1
2
3
4
45
35
25
15
05
5
Testing samples
Runn
ing
stat
us cl
ass
Actual classPrediction class
(a)
0 5 10 15 20 25 30 35 400
1
2
3
4
5
6
Testing samples
Clas
sifica
tion
erro
r (
)
(b)
Figure 9 The contrast of prediction class and actual class
6 Conclusions
This paper proposed a novel classifier model based onBP neural network and rough sets theory to be appliedin nonlinear systems The decision table was constructedand discretized reasonably through the PSO algorithm Thedecision rules were extracted by the use of value reductionalgorithmwhich provided the basis for the network structureIn order to verify the feasibility and superiority the proposedapproach was applied to a classification problem of anindustrial example The results of comparison simulationsshowed that the proposed approach could generate moreaccurate classification and cost less classification time thanother neural network approaches and the rough sets basedapproach The classification performance of proposed modeldemonstrated that this method could be extended to processother types of classification problems such as gear faultidentification lithology recognition and coal-rock interfacerecognition
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgments
The support of the National High Technology Research and Development Program of China (no. 2013AA06A411), the National Natural Science Foundation of China (no. 51005231), the Xuyi Mine Equipment and Materials R&D Center Innovation Fund Project (no. CXJJ201306), and the Jiangsu Province National Engineering Research Center Cultivation Base for Coal Intelligent Mining Equipment in carrying out this research is gratefully acknowledged.
References
[1] W. Zhu and F. Wang, "On three types of covering-based rough sets," IEEE Transactions on Knowledge and Data Engineering, vol. 19, no. 8, pp. 1131–1143, 2007.
[2] Z. Huang, Z. Yu, and Y. Zheng, "Simple method for predicting drag characteristics of the wells turbine," China Ocean Engineering, vol. 20, no. 3, pp. 473–482, 2006.
[3] S. G. Sklavos and A. K. Moschovakis, "Neural network simulations of the primate oculomotor system IV. A distributed bilateral stochastic model of the neural integrator of the vertical saccadic system," Biological Cybernetics, vol. 86, no. 2, pp. 97–109, 2002.
[4] I. G. Belkov, I. N. Malyshev, and S. M. Nikulin, "Creating a model of passive electronic components using a neural network approach," Automation and Remote Control, vol. 74, no. 2, pp. 275–281, 2013.
[5] J. S. Chen, "Neural network-based modelling and thermally-induced spindle errors," International Journal of Advanced Manufacturing Technology, vol. 12, no. 4, pp. 303–308, 1996.
[6] A. D. Cartwright, I. S. Curthoys, and D. P. D. Gilchrist, "Testable predictions from realistic neural network simulations of vestibular compensation: integrating the behavioural and physiological data," Biological Cybernetics, vol. 81, no. 1, pp. 73–87, 1999.
[7] B. Zou, X. Y. Liao, Y. N. Zeng, and L. X. Huang, "An improved BP neural network based on evaluating and forecasting model of water quality in Second Songhua River of China," Chinese Journal of Geochemistry, vol. 25, no. 1, pp. 167–170, 2006.
[8] Y. X. Shao, Q. Chen, and D. M. Zhang, "The application of improved BP neural network algorithm in lithology recognition," in Advances in Computation and Intelligence, vol. 5370 of Lecture Notes in Computer Science, pp. 342–349, 2008.
[9] W. H. Zhou and S. Q. Xiong, "Optimization of BP neural network classifier using genetic algorithm," in Intelligence Computation and Evolutionary Computation, vol. 180 of Advances in Intelligent Systems and Computing, pp. 599–605, 2013.
[10] W. Duch, R. Setiono, and J. M. Zurada, "Computational intelligence methods for rule-based data understanding," Proceedings of the IEEE, vol. 92, no. 5, pp. 771–805, 2004.
[11] Z. Pawlak, "Rough sets," International Journal of Computer and Information Sciences, vol. 11, no. 5, pp. 341–356, 1982.
[12] Y. Wu and S. J. Wang, "Method for improving classification performance of neural network based on fuzzy input and network inversion," Journal of Infrared and Millimeter Waves, vol. 24, no. 1, pp. 15–18, 2005.
[13] Z. Pawlak, "Some issues on rough sets," in Transactions on Rough Sets I, vol. 3100 of Lecture Notes in Computer Science, pp. 1–58, 2004.
[14] Z. Pawlak and A. Skowron, "Rough sets: some extensions," Information Sciences, vol. 177, no. 1, pp. 28–40, 2007.
[15] Y. Hassan, E. Tazaki, S. Egawa, and K. Suyama, "Decision making using hybrid rough sets and neural networks," International Journal of Neural Systems, vol. 12, no. 6, pp. 435–446, 2002.
[16] V. L. Berardi, B. E. Patuwo, and M. Y. Hu, "A principled approach for building and evaluating neural network classification models," Decision Support Systems, vol. 38, no. 2, pp. 233–246, 2004.
[17] H. Ishibuchi, K. Kwon, and H. Tanaka, "A learning algorithm of fuzzy neural networks with triangular fuzzy weights," Fuzzy Sets and Systems, vol. 71, no. 3, pp. 277–293, 1995.
[18] Y. Hayashi, J. J. Buckley, and E. Czogala, "Fuzzy neural network with fuzzy signals and weights," International Journal of Intelligent Systems, vol. 8, no. 4, pp. 527–537, 1993.
[19] C. C. Hu, D. Y. Wang, K. J. Wang, and J. Cable, "Seasonal roadway classification based on neural network," Journal of South China University of Technology (Natural Science), vol. 37, no. 11, pp. 22–26, 2009.
[20] A. Subasi, M. Yilmaz, and H. R. Ozcalik, "Classification of EMG signals using wavelet neural network," Journal of Neuroscience Methods, vol. 156, no. 1-2, pp. 360–367, 2006.
[21] D. Sahoo and G. S. Dulikravich, "Evolutionary wavelet neural network for large scale function estimation in optimization," in Proceedings of the 11th AIAA/ISSMO Multidisciplinary Analysis and Optimization Conference, pp. 658–668, Portsmouth, Va, USA, September 2006.
[22] J. R. Zhang, J. Zhang, T. M. Lok, and M. R. Lyu, "A hybrid particle swarm optimization-back-propagation algorithm for feedforward neural network training," Applied Mathematics and Computation, vol. 185, no. 2, pp. 1026–1037, 2007.
[23] W. Zhang, Y. B. Shi, L. F. Zhou, and T. Lu, "Wavelet neural network classifier based on improved PSO algorithm," Chinese Journal of Scientific Instrument, vol. 31, no. 10, pp. 2203–2209, 2010.
[24] A. Sengur, I. Turkoglu, and M. C. Ince, "Wavelet packet neural networks for texture classification," Expert Systems with Applications, vol. 32, no. 2, pp. 527–533, 2007.
[25] Y. F. Hassan, "Rough sets for adapting wavelet neural networks as a new classifier system," Applied Intelligence, vol. 35, no. 2, pp. 260–268, 2011.
[26] W. Ziarko, "The discovery, analysis, and representation of data dependencies in databases," in Knowledge Discovery in Databases, pp. 213–228, 1993.
[27] Y. Hassan and E. Tazaki, "Emergent rough set data analysis," in Transactions on Rough Sets II: Rough Sets and Fuzzy Sets, J. F. Peters, Ed., pp. 343–361, Springer, Berlin, Germany, 2005.
[28] D. B. Chen and C. X. Zhao, "Particle swarm optimization with adaptive population size and its application," Applied Soft Computing Journal, vol. 9, no. 1, pp. 39–48, 2009.
[29] K. Thangavel and A. Pethalakshmi, "Dimensionality reduction based on rough set theory: a review," Applied Soft Computing Journal, vol. 9, no. 1, pp. 1–12, 2009.
[30] J. M. Wei, "Rough set based approach to selection of node," International Journal of Computational Cognition, vol. 1, no. 2, pp. 25–40, 2003.
[31] E. Xu, L. S. Shao, Y. Z. Zhang, F. Yang, and H. Li, "Method of rule extraction based on rough set theory," Computer Science (Chongqing), vol. 38, pp. 232–235, 2011.
[32] L. Y. Chang, G. Y. Wang, and Y. Wu, "An approach for attribute reduction and rule generation based on rough set theory," Journal of Software (Beijing), vol. 10, pp. 1206–1211, 1999.