
Smart sensor/actuator node reprogramming in changing environments using a neural network model

Francisco Ortega-Zamorano a, José M. Jerez a, José L. Subirats a, Ignacio Molina b, Leonardo Franco a

a Department of Computer Science, E.T.S.I. Informatica, University of Malaga, Bulevar Louis Pasteur, 35, 29071 Malaga, Spain
b Max Planck Institute, Munich, Germany

Article info

Article history: Received 17 September 2013; Received in revised form 23 December 2013; Accepted 9 January 2014; Available online 1 February 2014

Keywords: Constructive Neural Networks, Microcontroller, Arduino

Abstract

The techniques currently developed for updating software in sensor nodes located in changing environments usually require the use of reprogramming procedures, which clearly increases the costs in terms of time and energy consumption. This work presents an alternative to the traditional reprogramming approach, based on an on-chip learning scheme, in order to adapt the node behaviour to the environmental conditions. The proposed learning scheme is based on C-Mantec, a novel constructive neural network algorithm especially suitable for microcontroller implementations, as it generates very compact architectures. The Arduino UNO board was selected to implement this learning algorithm, as it is a popular, economic and efficient open source single-board microcontroller. C-Mantec has been successfully implemented in a microcontroller board by adapting it in order to overcome the limitations imposed by the limited memory and computing speed of the hardware device. This work also brings an in-depth analysis of the solutions adopted to overcome hardware resource limitations in the learning algorithm implementation (e.g., data type), together with an efficiency assessment of this approach when the algorithm is tested on a set of circuit design benchmark functions. Finally, the utility, efficiency and versatility of the system are tested in three case studies of a different nature, in which the environmental conditions change over time.

© 2014 Elsevier Ltd. All rights reserved.

1. Introduction

Sensors (or detectors) are devices that permit the measurement of chemical or physical variables, transforming them into electrical signals; a controller then interprets the signals from the various sensors and either transmits the sensed data or makes a decision according to them. Microcontroller boards are an economic, small and flexible solution, and thus are the most common controller used in a sensor node, also known as a mote, commonly used in wireless sensor networks (Yick et al., 2008; Sengupta et al., 2013), but also employed in other important technologies such as embedded systems (Marwedel, 2006; Mamdoohi et al., 2012) and real-time systems (Kopetz, 1997; Wang et al., 2010). Motes are nowadays widely employed in all kinds of industrial applications; in several of them, the problem requires acting upon the environmental conditions, in which case a sensor/actuator node is required.

In cases where the environmental conditions evolve over time, the original sensor/actuator programming can lead to incorrect decisions, and thus it is necessary to change or adapt the decision-making process to the new conditions (Sayed-Mouchaweh and Lughofer, 2012). The traditional option to solve this problem has been to send the sensed data to a central unit, where a person interprets the data and reprograms the microcontroller with the new set of rules (Han et al., 2005; Wang et al., 2006; Shaikh et al., 2010). Different reprogramming techniques have been proposed as a way of dynamically changing the behaviour of the sensors without having to manually reprogram them, because traditional reprogramming requires in most cases the interruption of the process for loading the new binary code, with the consequent loss of time and energy involved in the communication with the central unit (Rassam et al., 2013; Aiello et al., 2011). A first step towards reducing these effects has been to incorporate machine learning systems in the decision-making process, automating the response of the microcontroller without interrupting its execution and sending just a small fraction of code to the microcontroller (Urda et al., 2012; Canete et al., 2012; Farooq et al., 2010). However, recent advances in the computing power of sensors permit the inclusion of learning systems in the microcontroller itself ("on-chip" learning), adapting the sensor/actuator behaviour dynamically according to the sensed data (Aleksendrić et al., 2012; Mahmoud et al., 2013).

Artificial Neural Networks (ANNs) (Haykin, 1994) are a kind of machine learning model, inspired by the functioning of the brain, that can be utilised in clustering and classification problems, having been applied successfully in several fields, including pattern recognition (Dhanalakshmi et al., 2011), stock market prediction (Park and Shin, 2013), control tasks (Zhai and Yu, 2009), medical diagnosis and prognosis (Kodogiannis et al., 2007), and so on.


Despite years of research in the field of ANNs, selecting a proper architecture for a given problem remains a difficult task (Gómez et al., 2009; Hunter et al., 2012; Lakshmi and Subadra, 2013), and several strategies have been proposed for solving or alleviating this issue. In particular, Constructive Neural Networks (CoNNs) offer the possibility of generating networks that grow as input information is received, matching the complexity of the data (Franco et al., 2009). Moreover, the training procedure in CoNNs, considered a computationally expensive problem in standard feedforward neural networks, can be done on-line and relatively fast. C-Mantec is a recently introduced CoNN algorithm that implements competition between neurons, also incorporating a built-in filtering scheme to avoid overfitting problems. These two characteristics permit the algorithm to generate compact neural architectures with very good generalisation capabilities, making it suitable for application to devices with limited resources such as microcontrollers. The main limitations of these devices are memory size and computing speed, and thus an efficient implementation of the algorithm is needed. Despite the existence of alternative evolving models (Lughofer, 2011; Angelov, 2010; Huang et al., 2005), C-Mantec has been selected based mainly on the following three features: dynamic generation of compact architectures, good prediction ability and robustness to parameter settings.

In the present work, we have fully implemented the C-Mantec (Subirats et al., 2012) constructive neural network model in an Arduino UNO board, including the whole learning process, in order to implement an automatic reprogramming process for the decision-making of the sensor/actuator in changing environments, avoiding communication with other devices.

The Arduino UNO board was used (Oxer and Blemings, 2009) as it is a popular, economic and efficient open source single-board microcontroller that allows easy project development (Lian et al., 2013; Cela et al., 2013; Ortega-Zamorano et al., 2013; Kornuta et al., 2013). We have also proposed three case studies of a different nature that require reprogramming of the decision-making process, demonstrating that the time involved in the sensor reprogramming is significantly lower than in the traditional case, without the need to send any information to another device (such as a control unit), thus saving a large fraction of the required energy resources.

The paper is structured as follows: first, we briefly describe in Section 2 the C-Mantec constructive neural network algorithm, followed by a description of the Arduino UNO microcontroller board in Section 3. Section 4 includes the details of the implementation, with the results obtained shown in Section 5. Thereafter, three case studies are evaluated in Section 6, checking the efficiency of the system in them, and conclusions are finally drawn in Section 7.

2. C-Mantec, constructive neural network algorithm

C-Mantec (Subirats et al., 2012) (Competitive Majority Network Trained by Error Correction) is a novel constructive neural network algorithm that utilises competition between neurons and a modified perceptron learning rule (the thermal perceptron; Frean, 1990) to build single-hidden-layer compact architectures with good prediction capabilities for supervised classification problems. As a CoNN algorithm, C-Mantec generates the network topology on-line during the learning phase, avoiding the complex problem of selecting an adequate neural architecture. The novelty of C-Mantec in comparison to previously proposed constructive algorithms is that the neurons in the single hidden layer compete for learning the incoming data, and this process permits the creation of very compact neural architectures. The binary activation state (S) of the neurons in the hidden layer depends on N input signals, ψi, and on the actual value of the N synaptic weights (ωi) and bias (b) as follows:

S = \begin{cases} 1\ (\mathrm{ON}) & \text{if } h \ge 0 \\ 0\ (\mathrm{OFF}) & \text{otherwise} \end{cases} \qquad (1)

where h is the synaptic potential of the neuron, defined as

h = \sum_{i=0}^{N} \omega_i \psi_i \qquad (2)

In the thermal perceptron rule, the modification of the synaptic weights, Δωi, is done on-line (after the presentation of a single input pattern) according to the following equation:

\Delta\omega_i = (t - S)\,\psi_i\,T_{fac} \qquad (3)

where t is the target value of the presented input, and ψi represents the value of input unit i, connected to the output by weight ωi. The difference with respect to the standard perceptron learning rule is that the thermal perceptron incorporates the Tfac factor. This factor, whose value is computed as shown in Eq. (4), depends on the value of the synaptic potential and on an artificially introduced temperature (T):

T_{fac} = \frac{T}{T_0}\, e^{-|h|/T} \qquad (4)

The value of T decreases as the learning process advances according to Eq. (5), similar to a simulated annealing process:

T = T_0 \left(1 - \frac{I}{I_{max}}\right) \qquad (5)

where I is a cycle counter that defines an iteration of the algorithm within one learning cycle, and Imax is the maximum number of iterations allowed. One learning cycle of the algorithm is the process that starts when a randomly chosen pattern is presented to the network and finishes after checking that the output of the network is equal to the target for this pattern, or when a chosen neuron (the neuron with the largest Tfac value or a newly added neuron) modifies its synaptic weights to learn the currently presented pattern.
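To make Eqs. (1)-(5) concrete, the following is a minimal sketch of the thermal perceptron update in Arduino-style C++, using floating point for clarity (the fixed-point version actually used on the board is discussed in Section 4.2); the structure and names are illustrative, not the authors' code.

```cpp
// Minimal sketch of the thermal perceptron update (Eqs. (1)-(5)).
#include <math.h>

const int N_INPUTS = 13;          // example input dimension

struct Neuron {
  float w[N_INPUTS + 1];          // synaptic weights, w[0] acts as the bias
  float T;                        // current temperature
  float T0;                       // initial temperature
};

// Synaptic potential h (Eq. (2)); psi[0] is fixed to 1 for the bias term.
float potential(const Neuron &n, const float psi[]) {
  float h = 0.0f;
  for (int i = 0; i <= N_INPUTS; i++) h += n.w[i] * psi[i];
  return h;
}

// Thermal factor (Eq. (4)).
float tfac(const Neuron &n, float h) {
  return (n.T / n.T0) * expf(-fabsf(h) / n.T);
}

// On-line weight update (Eq. (3)) for one presented pattern with target t.
void learnPattern(Neuron &n, const float psi[], int t) {
  float h  = potential(n, psi);
  int   S  = (h >= 0.0f) ? 1 : 0;          // activation state (Eq. (1))
  float Tf = tfac(n, h);
  for (int i = 0; i <= N_INPUTS; i++)
    n.w[i] += (t - S) * psi[i] * Tf;
}

// Annealing schedule (Eq. (5)): I is the iteration counter, Imax its limit.
void anneal(Neuron &n, unsigned int I, unsigned int Imax) {
  n.T = n.T0 * (1.0f - (float)I / (float)Imax);
}
```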

The C-Mantec algorithm has three parameters to be set before starting the learning procedure, and several experiments have shown the robustness of the algorithm, which operates fairly well in a wide range of parameter values. The three parameters are the following:

- Imax: maximum number of learning iterations allowed for each neuron in one learning cycle.
- gfac: growing factor that determines when to stop a learning cycle and include a new neuron in the hidden layer.
- ϕ: determines in which case an input example is considered as noise and removed from the training dataset, according to the following condition:

\mathrm{delete}(x_i) \mid N_{LT} \ge (\mu + \phi\,\sigma) \qquad (6)

where xi represents an input pattern, N is the total number of patterns in the dataset, NLT is the number of times that pattern xi has been presented to the network in the current learning cycle, and μ and σ correspond to the mean and standard deviation of the distribution, over all patterns, of the number of times that the algorithm has tried to learn each pattern in a learning cycle.


The learning procedure starts with one neuron present in the single hidden layer of the architecture and an output neuron that computes the majority function of the responses of the hidden neurons (a voting scheme). The process continues by presenting an input pattern to the network; if it is misclassified, it will be learned by one of the existing neurons whose output did not match the target value, provided certain conditions are met; otherwise, a new neuron is included in the architecture to learn it. Among all neurons that misclassified the input pattern, the one with the largest Tfac will learn it, but only if this Tfac value is larger than the gfac parameter of the algorithm, a condition included to prevent the unlearning of previously stored information. If no thermal perceptron meeting these criteria is found, a new neuron is added to the network, starting a new learning cycle that includes the resetting of all neuron temperatures to T0. Also, at the end of a cycle the noisy-pattern filtering procedure (Eq. (6)) is applied. The algorithm continues its operation, iteratively repeating the previous stages until all patterns in the training set are correctly classified by the network. During the learning process, catastrophic forgetting is prevented because synaptic weights are only modified if the change involved is small (controlled by the value of gfac and by an annealing process that reduces the temperature as learning proceeds); if this is not the case, the algorithm introduces a new neuron in the architecture (Subirats et al., 2012).
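The following is a compact, self-contained sketch of this outer loop (competition among the neurons that misclassify the pattern, neuron addition, temperature reset). The sizes, the example data set and the simplified annealing are illustrative, the noise filter of Eq. (6) is omitted for brevity, and this is not the authors' implementation.

```cpp
// Compact sketch of the C-Mantec outer loop described above.
#include <math.h>
#include <stdlib.h>

const int N_IN = 3, MAX_NEURONS = 8, N_PAT = 8;
const float GFAC = 0.05f, T0 = 1.0f;
const unsigned int IMAX = 1000;

float w[MAX_NEURONS][N_IN + 1];      // synaptic weights (index 0 = bias)
float T[MAX_NEURONS];                // per-neuron temperature
unsigned int iter[MAX_NEURONS];      // per-neuron iteration counter
int nNeurons = 1;

float x[N_PAT][N_IN + 1];            // training inputs (x[p][0] = 1, bias)
int   target[N_PAT];                 // training targets

void initParityPatterns() {          // example data: 3-input odd parity (XOR3)
  for (int p = 0; p < N_PAT; p++) {
    x[p][0] = 1.0f;
    int ones = 0;
    for (int i = 0; i < N_IN; i++) { int b = (p >> i) & 1; x[p][i + 1] = b; ones += b; }
    target[p] = ones % 2;
  }
  for (int n = 0; n < MAX_NEURONS; n++) { T[n] = T0; iter[n] = 0; }
}

float potential(int n, const float xi[]) {             // Eq. (2)
  float h = 0;
  for (int i = 0; i <= N_IN; i++) h += w[n][i] * xi[i];
  return h;
}
int out(int n, const float xi[]) { return potential(n, xi) >= 0 ? 1 : 0; }   // Eq. (1)
float tfac(int n, const float xi[]) {                   // Eq. (4)
  return (T[n] / T0) * expf(-fabsf(potential(n, xi)) / T[n]);
}
int majority(const float xi[]) {                        // output neuron (voting scheme)
  int votes = 0;
  for (int n = 0; n < nNeurons; n++) votes += out(n, xi);
  return (2 * votes > nNeurons) ? 1 : 0;
}
void learn(int n, const float xi[], int t) {            // Eq. (3) plus annealing (Eq. (5))
  float tf = tfac(n, xi);
  int S = out(n, xi);
  for (int i = 0; i <= N_IN; i++) w[n][i] += (t - S) * xi[i] * tf;
  if (iter[n] < IMAX - 1) iter[n]++;                    // keep T strictly positive
  T[n] = T0 * (1.0f - (float)iter[n] / IMAX);
}
bool allLearned() {
  for (int p = 0; p < N_PAT; p++) if (majority(x[p]) != target[p]) return false;
  return true;
}

void cmantecTraining() {
  while (!allLearned()) {
    int p = rand() % N_PAT;                             // randomly chosen pattern
    if (majority(x[p]) == target[p]) continue;
    // Competition: among the neurons whose output differs from the target, the
    // one with the largest Tfac learns, but only if its Tfac exceeds gfac.
    int best = -1; float bestTf = -1.0f;
    for (int n = 0; n < nNeurons; n++)
      if (out(n, x[p]) != target[p]) {
        float tf = tfac(n, x[p]);
        if (tf > bestTf) { bestTf = tf; best = n; }
      }
    if (best >= 0 && bestTf > GFAC) {
      learn(best, x[p], target[p]);
    } else if (nNeurons < MAX_NEURONS) {
      // No eligible neuron: add a new one and start a new learning cycle,
      // resetting all temperatures to T0 (noise filtering omitted here).
      int n = nNeurons++;
      learn(n, x[p], target[p]);
      for (int m = 0; m < nNeurons; m++) { T[m] = T0; iter[m] = 0; }
    } else {
      break;                                            // capacity reached: report an error
    }
  }
}
```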

The flowchart of the C-Mantec algorithm is shown in Fig. 1; the most relevant functions are represented in boxes, the decision-making steps in diamonds, and the most important states (start and finish) in ovals.

3. The Arduino UNO board

Arduino is a single-board microcontroller designed to make the process of using electronics in multidisciplinary projects more accessible. The hardware consists of a simple open source hardware board designed around an 8-bit Atmel AVR microcontroller, though a newer model has been designed around a 32-bit Atmel ARM. The software consists of a standard programming language compiler and a boot loader that executes on the microcontroller.

Arduino is a descendant of the open-source Wiring platform and is programmed using a Wiring-based language (syntax and libraries), similar to C++ with some slight simplifications and modifications, and a Processing-based integrated development environment. Arduino boards can be purchased pre-assembled or as do-it-yourself kits, and hardware design information is available. The maximum length and width of the Arduino UNO board are 6.8 and 5.3 cm respectively, with the USB connector and power jack extending beyond the former dimension.

The Arduino UNO is based on the ATmega328 chip (Atmel, Datasheet 328). It has 14 digital input/output pins, which can be used as inputs or outputs; in addition, some pins have specialised functions, for example 6 digital pins can be used as PWM outputs. It also has 6 analog inputs, each of which provides 10 bits of resolution, together with a 16 MHz ceramic resonator, a USB connection with serial communication, a power jack, an ICSP header, and a reset button. The ATmega328 chip has 32 KB of flash memory for program storage (0.5 KB is used by the bootloader). It also has 2 KB of SRAM and 1 KB of EEPROM. A picture of the Arduino UNO board is shown in Fig. 2.

The Arduino UNO has a communication protocol for its interaction with a computer, with another Arduino board or with other microcontrollers. The ATmega328 provides serial communication (UART), which is available on digital pins 0 (RX) and 1 (TX), and it also supports I2C and SPI communication.

4. Implementation of the C-Mantec algorithm

The neural network model comprises two phases (learning and execution). In the learning phase, the synaptic weights of the neural network are computed from a set of patterns stored in the memory of the microcontroller, while in the execution phase, the microcontroller obtains the response to sensed input data according to the previously learned model. The learning phase comprises two different states (loading of input patterns and neural network learning). Data can be loaded into the microcontroller EEPROM memory on-line through the I/O pins or through the serial (USB) port, but in both cases the patterns have to be stored in the EEPROM memory; afterwards, the neural network learning state starts.
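As an illustration of these two phases, the following Arduino-style sketch loads the pattern bytes sent over the serial port into EEPROM, trains once, and then switches to the execution state. The simple length-prefixed protocol, pin choices and stub routines are assumptions, not the authors' implementation.

```cpp
// Minimal sketch of the loading -> learning -> execution sequence.
#include <EEPROM.h>

void trainCMantec() { /* C-Mantec learning over the patterns stored in EEPROM */ }
bool networkOutput(int pattern) { return false; /* evaluate the learned network */ }

bool learned = false;

void setup() {
  Serial.begin(9600);
  pinMode(13, OUTPUT);                         // example actuator output
}

void loop() {
  if (!learned) {
    // Loading state: the host first sends the number of pattern bytes
    // (low byte, then high byte), followed by the bytes themselves, which
    // are stored sequentially in EEPROM starting at address 0.
    if (Serial.available() >= 2) {
      unsigned int nBytes = Serial.read();
      nBytes |= ((unsigned int)Serial.read()) << 8;
      for (unsigned int i = 0; i < nBytes && i < 1024; i++) {
        while (Serial.available() == 0) {}     // wait for the next byte
        EEPROM.write(i, Serial.read());
      }
      trainCMantec();                          // neural network learning state
      learned = true;
    }
  } else {
    // Execution state: the sensed input is fed to the learned model and the
    // actuator is driven according to its response.
    int inputPattern = analogRead(A0) >> 2;    // example sensed input (8 bits)
    digitalWrite(13, networkOutput(inputPattern) ? HIGH : LOW);
  }
}
```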

Fig. 1. Flow diagram of the C-Mantec constructive learning algorithm.
Fig. 2. Picture of an Arduino UNO board used for the implementation of the C-Mantec algorithm.


We explain next the main technical issues considered for the implementation of the algorithm, according to the two learning states mentioned before:

4.1. Loading of patterns

The patterns have to be stored in the microcontroller memory because the learning process works in cycles and uses the pattern set repeatedly. For Boolean functions, it is only necessary to store the truth-table outputs, because the inputs are represented by the memory position. For example, for the pattern "01101001" → '0', the input "01101001" corresponds to the decimal number 105, and thus the value '0' is stored in memory position 105. The microcontroller has 1 KB of EEPROM memory, i.e., 8192 bits (2^13), limiting the number of Boolean inputs to 13. For the case of using an incomplete truth table, either because the noisy-pattern elimination stage is used or because of the nature of the function, the memory is divided into two parts: a first one that corresponds to the function outputs, and a second one that indicates whether or not a given pattern is included in the learning set. When an incomplete truth table is used, the maximum size of the input dimension is therefore reduced to 12.
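A minimal sketch of this storage scheme is given next, assuming the Arduino EEPROM library; only the output bit of each Boolean pattern is written, at the bit position given by the input pattern itself. The function names are illustrative, not the authors' code.

```cpp
// Boolean pattern storage: the input pattern is the (bit) address, and only
// the output bit is stored in EEPROM (8192 bit positions = 1024 bytes).
#include <EEPROM.h>

// Store the output bit of a Boolean pattern; 'input' is the pattern
// interpreted as an unsigned integer (e.g., "01101001" -> 105).
void storePattern(unsigned int input, byte output) {
  unsigned int addr = input / 8;          // EEPROM byte that holds this bit
  byte bit = input % 8;                   // bit position inside that byte
  byte b = EEPROM.read(addr);
  if (output) b |= (1 << bit); else b &= ~(1 << bit);
  EEPROM.write(addr, b);
}

// Read back the stored output for a given input pattern.
byte readPattern(unsigned int input) {
  return (EEPROM.read(input / 8) >> (input % 8)) & 0x01;
}
```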

For the case of using real-valued patterns, it is necessary to know in advance the number of bits used to represent each input variable. Eight bits have been used to represent each variable, taking into account that these values have to be normalised between 0 and 255. The following equation permits the computation of the maximum number of input patterns:

N_P \cdot N_I + N_P / 8 \le 1024 \qquad (7)

where N_I is the number of inputs and N_P is the number of patterns; N_P thus depends on the number of inputs and on the number of bits used for each one.
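As a small illustration of Eq. (7) (a sketch; the helper name is ours, not the authors'), the maximum number of storable patterns for N_I 8-bit inputs and 1024 bytes of EEPROM can be computed as follows; for N_I = 6, as in the fall detection case of Section 6.3, this yields 167 patterns.

```cpp
// Helper illustrating Eq. (7): maximum number of storable patterns for a
// given number of 8-bit inputs, with 1024 bytes of EEPROM available.
unsigned int maxPatterns(unsigned int NI) {
  // NP*NI bytes for the inputs plus NP/8 bytes for the 1-bit outputs must
  // not exceed 1024 bytes: NP <= 1024 / (NI + 1/8) = 8192 / (8*NI + 1).
  return 8192u / (8u * NI + 1u);          // e.g., maxPatterns(6) == 167
}
```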

4.2. Neural network learning

C-Mantec is an algorithm that adds neurons as they become necessary, an action that is not easily implemented in a microcontroller because memory allocation is static, so the maximum number of neurons, which are stored in SRAM, must be defined in advance. From this memory, with a capacity of 2 KB, less than 1 KB is employed for storing the variables of the program, thus keeping at least 1 KB of free memory for the variables related to the neurons.

Microcontrollers are devices with limited computing speed, so in order to speed up the learning process we have changed the data type of the variables associated with the neurons. Floating point is the usual representation used in this kind of algorithm for all variables, but it is not the most efficient one on this hardware. We have therefore selected a fixed-point representation for the variables associated with the neurons, changing their data type to integer. This shift required essential changes in the way the algorithm is programmed but, in return, a higher learning speed and a smaller size for each variable have been obtained.

Regarding the variables associated with the neurons, some details about the representation used are given next:

- Tfac: represented as a float type, occupying 4 bytes.
- Number of iterations: an integer value with a range between 1000 and 65,535 iterations, so a 2-byte integer variable was used.
- Synaptic weights: almost all calculations are based on these variables, so to speed up the computations a 2-byte integer variable was used.
- Synaptic potential (h): it is calculated as a summation over the synaptic weights, so in order not to saturate this value a long type of 4 bytes is used.

According to the previous definitions, the maximum number of neurons (N_N) that can be implemented should verify the following constraint:

4 N_N + 2 N_N + 2 N_N (N_I + 1) + 4 N_N \le 1024 \qquad (8)

where N_I is the number of inputs. For the worst case (the maximum number of inputs, 13), the maximum number of neurons is 28. If, during the learning process, the architecture reaches this maximum number of neurons, the algorithm finishes and the sensor outputs an error message.

The synaptic weights and the synaptic potential have been implemented with 10 bits of precision for the fractional part, so that the value of the weights lies between −32 and 32; integer data types with values between −32,768 and 32,767 were used. The representation of the synaptic potential is done in a similar way, except that for this value 4 bytes were used, allowing values between −2,097,152 and 2,097,152. The computation of Tfac is done using a float data type, because it involves an exponential operation that can only be performed with this type of data; but as its computation involves integer values, two different conversions must be done. The first conversion is done in Eq. (3), where the Tfac value must be converted to the fixed-point representation, an operation done by multiplying the Tfac value by 1024. The second conversion is performed in Eq. (4), where the synaptic potential (h) has to be converted to floating-point representation for the calculation of the exponential function; in this case, ten logical right shifts are applied to the synaptic potential, which is then converted to floating point for the final calculation of Tfac. In order to avoid saturation of the synaptic weight values (which would occur beyond ±32), all weights are divided by two (one logical right shift) whenever any of them reaches an absolute value of 30. This change does not affect the algorithm execution, because neural networks are invariant to this kind of scaling.
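The fixed-point handling described above can be sketched as follows (Q10 format, i.e., 10 fractional bits); the function and variable names are illustrative, not the authors' code.

```cpp
// Sketch of the Q10 fixed-point scheme, with the two float/integer
// conversions around the Tfac computation and the weight rescaling.
#include <math.h>
#include <stdlib.h>

const int  Q_BITS = 10;                  // fractional bits
const long Q_ONE  = 1L << Q_BITS;        // 1.0 in fixed point (1024)
const int  N_IN   = 13;

int w[N_IN + 1];                         // synaptic weights: 2-byte integers (Q10)

// Synaptic potential in Q10, accumulated in a 4-byte long to avoid overflow.
long potentialQ10(const int psi[]) {     // psi[i] are plain integers (psi[0] = 1)
  long h = 0;
  for (int i = 0; i <= N_IN; i++) h += (long)w[i] * psi[i];
  return h;
}

// Eq. (4): the Q10 potential is converted to floating point (ten right shifts
// undo the scaling) because the exponential is only available for floats.
float computeTfac(long hQ10, float T, float T0) {
  long habs = (hQ10 < 0) ? -hQ10 : hQ10;
  float h = (float)(habs >> Q_BITS);
  return (T / T0) * expf(-h / T);
}

// Eq. (3): Tfac is converted to Q10 by multiplying by 1024, so the weight
// update is carried out with integer arithmetic only. With binary inputs the
// product (t - S) * psi_i * tfacQ10 is already in the Q10 scale of the weights.
void updateWeights(const int psi[], int t, int S, float Tfac) {
  long tfacQ10 = (long)(Tfac * Q_ONE);
  for (int i = 0; i <= N_IN; i++)
    w[i] += (int)((long)(t - S) * psi[i] * tfacQ10);
}

// Saturation control: halve all weights (one right shift) when any of them
// reaches an absolute value of 30 (30 * 1024 in Q10); the network response
// is invariant to this common rescaling.
void rescaleIfNeeded() {
  for (int i = 0; i <= N_IN; i++) {
    if ((long)abs(w[i]) >= 30L * Q_ONE) {
      for (int j = 0; j <= N_IN; j++) w[j] >>= 1;
      return;
    }
  }
}
```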

5. Results

We first tested the correct functioning of the microcontroller implementation of the C-Mantec algorithm, comparing it to the originally published results (Subirats et al., 2012) in terms of the number of neurons generated in the neural network architectures, and analysing as well the execution time on the microcontroller using the floating point and integer representations. A set of 14 Boolean functions has been used for the comparison, including 12 single-output functions from the MCNC circuit testing benchmark plus two XOR functions with two and three inputs. The C-Mantec algorithm was run with the following parameter configuration: gfac = 0.05 and Imax = 1000. The results are shown in Table 1, where the first two columns indicate the function reference name and its number of inputs, and the third, fourth and fifth columns show the number of neurons obtained in the original C-Mantec publication (Subirats et al., 2012) and in the integer and floating point microcontroller implementations, respectively. The last two columns in Table 1 show the learning times in seconds (s) for the integer and floating point implementations on the microcontroller. The displayed values (mean ± standard deviation) are computed from 50 random samples.

Fig. 3 shows the learning time for all 14 benchmark functions included in Table 1, both for the integer and the floating point implementation. The top graph in the figure shows the learning time on a logarithmic scale for the y-axis, while the bottom graph shows the relative speed between the two representations used (integer and floating point).

Further, Fig. 4 shows the temporal evolution for the most complex analysed function (Alu2k), where the top graph shows the execution times related to the addition of a new neuron to the constructive network architecture, while the bottom graph shows the relative speed between the two implementations (integer and floating point).


The time shown in the top graph of Fig. 4 includes several learning cycles, until an input pattern wrongly classified by the network is presented and there is no neuron in the architecture that can be selected to learn it (according to the values of Tfac and gfac).

The C-Mantec algorithm includes three procedures that are modified depending on the data type (floating point or integer) representation used. These three functions, which can be observed in Fig. 1, are the following: Input a random pattern, Select neuron with largest Tfac value, and Modify selected neuron weights. The algorithm includes two other procedures (Add a new neuron and Eliminate noisy examples) that are not altered if the data representation is changed.

For each of the procedures that change according to the representation used, we computed the mean execution time averaged across 50 samples for both cases. Fig. 5 shows the results for each procedure for the integer and floating point representations. Fig. 5a shows the whole network execution time as the number of neurons increases from 1 to 20, for different input pattern dimensions (2, 5, 10, 15), while Fig. 5b shows a comparison between both implementations, where it can be appreciated how many times faster the integer representation is than the floating point one. Fig. 5c and d shows the execution time and the relative comparison (respectively) for the computation of the maximum Tfac value for both representations, as the number of neurons present in the architecture increases from 1 to 20. Fig. 5e and f shows the same as the previously described case, but for the procedure Modify selected neuron weights.

6. Case studies

In this section, in order to test the efficiency of using neural networks in applications where microcontrollers that sense and/or act on the environment might be required, three case studies of a different nature have been considered. The first case deals with detecting fires with an alarm system that can be reprogrammed according to the security level required. The second analysed case is a control system for the activation of a solenoid valve that depends on environmental variables, while the third case corresponds to a person fall detection problem, with the constraint of using only a limited set of data patterns. The parameter settings of the C-Mantec algorithm were the same for all analysed cases: gfac = 0.05, Imax = 1000 and ϕ = 2. We verified that changes in these values did not significantly improve the performance in any of the problems, and therefore decided to use this set of parameters in all three cases.

Table 1
Number of neurons and learning time for the synthesis of a set of functions, for the fixed-point (integer) and floating point implementations, compared with the original paper.

Funct.    # Inp.   # Neurons (Theory)   # Neurons (Integer)   # Neurons (Floating)   Time (s), Integer   Time (s), Floating
XOR2      2        2.0 ± 0.0            2.0 ± 0.0             2.0 ± 0.0              0.36 ± 0.01         0.57 ± 0.03
XOR3      3        3.0 ± 0.0            3.0 ± 0.0             3.0 ± 0.0              1.4 ± 0.11          2.22 ± 0.26
cm82af    5        3.0 ± 0.0            3.0 ± 0.0             3.0 ± 0.0              1.84 ± 0.28         3.15 ± 0.64
cm82ag    5        3.0 ± 0.0            3.6 ± 0.6             3.6 ± 0.7              4.11 ± 1.88         8.10 ± 4.41
cm82ah    5        3.0 ± 0.0            3.0 ± 0.0             3.0 ± 0.0              1.85 ± 0.21         3.35 ± 0.63
z4ml24    7        1.0 ± 0.0            1.0 ± 0.0             1.0 ± 0.0              0.23 ± 0.05         0.47 ± 0.13
z4ml25    7        3.1 ± 0.7            3.1 ± 0.4             3.1 ± 0.4              3.36 ± 1.04         6.43 ± 2.05
z4ml26    7        3.0 ± 0.0            3.0 ± 0.0             3.0 ± 0.0              2.67 ± 0.45         5.39 ± 1.11
9symml    9        3.0 ± 0.0            3.0 ± 0.0             3.0 ± 0.0              4.43 ± 0.58         12.8 ± 2.6
alu2k     10       11.2 ± 0.9           12.7 ± 0.9            12.3 ± 0.7             220 ± 50            969 ± 260
alu2m     10       2.0 ± 0.0            2.0 ± 0.0             2.0 ± 0.0              3.94 ± 0.21         13.1 ± 0.9
alu2n     10       1.0 ± 0.0            1.0 ± 0.0             1.0 ± 0.0              0.41 ± 0.15         0.99 ± 0.41
alu2o     10       11.2 ± 0.9           13.0 ± 0.8            12.4 ± 0.8             312 ± 59            1444 ± 824
alu2p     10       3.0 ± 0.0            3.0 ± 0.0             3.0 ± 0.0              20.8 ± 43.7         84.1 ± 17.2

Fig. 3. Learning time (in seconds, logarithmic scale) for each function of the Boolean function benchmark data set with the two data type representations (top graph), and the relative speed between them (bottom graph).

Fig. 4. Temporal evolution for the function Alu2k related to the addition of a new neuron to the network architecture. Absolute time (top graph) and relative time (bottom graph) for both representations (integer and floating point).


It has been previously verified that C-Mantec is very robust against changes in the parameter values (Subirats et al., 2012, 2013; Urda et al., 2013; Luque-Baena et al., 2013).

6.1. Fire alarm system

Fire alarm systems are widely used in inhabited premises. We analyse the case of a system whose security levels can be reprogrammed according to the user's convenience, by defining activation thresholds as a function of the sensed environmental variables (temperature, smoke and gas). A schematic drawing of a room where a fire alarm is installed is shown in Fig. 6, including the three typical variables that can affect the system.

We have represented the different states of the fire alarm system using a truth table, which is shown in Table 2.

Each sensor (Gas, Smoke and Temperature) can be active ('1') or not ('0'), and these levels determine the Alarm state in the range from 1 to 8. As C-Mantec is a binary classifier, the alarm levels were codified in binary notation, as indicated in the last column of the table. A given functioning state of the alarm system includes setting the alarm levels for each of the detector states, i.e., choosing the alarm level for a given combination of sensor inputs.

Table 2 corresponds to a case in which each alarm level is different for every different input, as this is the worst possible case.
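The binary coding of the alarm levels in Table 2 can be illustrated as follows; how the three output bits are produced by the learning system is not detailed here, and a natural reading, assumed in this sketch, is one binary classifier per output bit.

```cpp
// Illustration of the binary coding of the 8 alarm levels in Table 2
// (names are ours, not the authors' code).
byte alarmLevel(bool gas, bool smoke, bool temperature) {
  // The three sensor bits select one of the 8 alarm levels (1..8, Table 2).
  byte index = (gas << 2) | (smoke << 1) | (byte)temperature;
  return index + 1;                    // alarm level 1..8
}

// 3-bit representation used as classifier targets (last column of Table 2).
void levelToBits(byte level, byte bits[3]) {
  byte code = level - 1;               // 0..7
  bits[0] = (code >> 2) & 1;
  bits[1] = (code >> 1) & 1;
  bits[2] = code & 1;
}
```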

We first checked the time employed by the system to learn the initial state, also measuring the number of neurons included by the C-Mantec constructive algorithm and the energy consumption. Table 3 shows the results averaged across 50 samples.

Fig. 5. Execution times of the three procedures included in the C-Mantec algorithm that change depending on the data type representation used (panels: network output time, max(Tfac) calculation time, neuron weight modification time). Graphs in the first column show absolute times measured in μs, while graphs in the second column show a comparison between both execution times (see the text for more details).

Fig. 6. Schematic representation of a room in which a fire alarm is installed.


In Table 3, execution time is measured in milliseconds (ms) and energy consumption in nanoampere-hours (nAh).

Next, we tested the time needed to relearn a modification of the state of the alarm, which corresponds to a change of values in the fourth column of Table 2. The results are shown in Table 4, where the mean, maximum and minimum reprogramming times are displayed together with the measured consumption.

6.2. Weather prediction

Weather prediction is a relevant issue in human daily life, related for example to making good decisions in agriculture (moment of harvest, time of watering and crop type). Weather prediction is a very complex problem, and neural network based systems have been widely used for this task (Taylor and Buizza, 2002; Chakraborty et al., 2004).

We have considered a system consisting of a sensor/actuator that measures five environmental variables (Temperature (T), Wind speed (Ws), Wind direction (Wd), Humidity (H) and Solar Irradiance (I)) and takes a decision, which can be whether or not to irrigate the surrounding land. Fig. 7 displays a real picture of a constructed sensor/actuator node that may be used for the described problem.

To analyse a possible scenario where this node may be used, we consider the case of determining whether or not to water the surrounding land according to the values of the five environmental variables mentioned before. For the implementation, these variables have been discretised using a 12-bit representation; the number of bits used for each variable can be seen in Table 5, together with the discretisation intervals.

The evaluation of the implementation of the neural network model for controlling the actuator system has been carried out starting from a null initial state. Afterwards, random input conditions were considered. An example of a generic instance can be:

if (T = 17.5 °C) ∧ (Ws = 2 m/s) ∧ (Wd = "N") ∧ (H = 55%) ∧ (I = 900 W/m²) → solenoid valve "open".

In the previous case, the indicated instance corresponds to an input of the truth table of the form "0100 00 00 10 11" with output '1', as it was indicated that the solenoid valve controlling the watering system should open for the indicated condition. The whole truth table for the discretisation used comprises 4096 different instances.
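To illustrate how such an instance can be encoded, the following is a sketch of the 12-bit input construction suggested by Table 5 (4 + 2 + 2 + 2 + 2 bits); the quantisation to the nearest level and the bit ordering are our assumptions, since the exact encoding is not detailed in the text.

```cpp
// Sketch of the 12-bit encoding of the five environmental variables.
#include <math.h>

// Return the index of the discretisation level closest to 'value'.
byte quantise(float value, const float levels[], byte nLevels) {
  byte best = 0;
  float bestDist = fabsf(value - levels[0]);
  for (byte i = 1; i < nLevels; i++) {
    float d = fabsf(value - levels[i]);
    if (d < bestDist) { bestDist = d; best = i; }
  }
  return best;
}

unsigned int encodePattern(float T, float Ws, byte WdIndex, float H, float I) {
  const float tLevels[16] = {-2.5, 0, 2.5, 5, 7.5, 10, 12.5, 15, 17.5,
                             20, 22.5, 25, 27.5, 30, 32.5, 35};     // °C
  const float wsLevels[4] = {0, 2, 5, 9};                           // m/s
  const float hLevels[4]  = {5, 30, 55, 80};                        // %
  const float iLevels[4]  = {100, 300, 600, 900};                   // W/m2
  unsigned int p = quantise(T, tLevels, 16);         // 4 bits
  p = (p << 2) | quantise(Ws, wsLevels, 4);          // 2 bits
  p = (p << 2) | (WdIndex & 0x03);                   // 2 bits (N, E, S, W)
  p = (p << 2) | quantise(H, hLevels, 4);            // 2 bits
  p = (p << 2) | quantise(I, iLevels, 4);            // 2 bits
  return p;                                          // 12-bit input pattern
}
```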

We measured the evolution of the time needed by the system as the number of defined instances increases, together with the level of accuracy. Fig. 8 top shows the time and consumption averaged across 30 random executions (Mean), and also one randomly chosen execution of the system, in order to appreciate the existing variability of the learning process. Accuracy, computed as the fraction of correct responses over the whole truth table, is shown in Fig. 8 bottom. Note that to compute the accuracy we assume that the final whole truth table is known at every given time, even if the actuator only gets this information one pattern at a time, implying that in practice the accuracy (as it is computed) can only be analysed at the end of the process. If we had computed the accuracy over the presented patterns, it would have always been equal to 1.

A singular behaviour can be observed in Fig. 8 for the initial instances: the neural network model has to be modified because almost all instances are misclassified, so at the beginning of the process the learning time is higher, and the accuracy increases as the number of correctly classified instances grows. Once the model has learned almost all instances, the learning time grows linearly with the number of instances, as the results show, and the accuracy is in practice equal to one. At this point, if an instance is misclassified, the neural network model has to learn it, and the learning time is at its largest, as the top graph of Fig. 8 shows for one execution.

Table 2
Truth table of the initial state of the system for fire detection.

Gas   Smoke   Temperature   Alarm   Representation
0     0       0             1       0 0 0
0     0       1             2       0 0 1
0     1       0             3       0 1 0
0     1       1             4       0 1 1
1     0       0             5       1 0 0
1     0       1             6       1 0 1
1     1       0             7       1 1 0
1     1       1             8       1 1 1

Table 3
Time, energy consumption and number of neurons in the constructed neural network architecture for learning the initial state of a reprogrammable alarm system.

# Neurons    Time (ms)    Consumption (nAh)
3.0 ± 0.0    8.2 ± 4.8    73.1 ± 43.4

Table 4
Mean, maximum and minimum time involved in reprogramming the microcontroller and its corresponding energy consumption, for the fire alarm system.

                     Max      Min     Mean
Time (ms)            20.27    3.13    10.97
Consumption (nAh)    180.17   27.82   97.51

Fig. 7. Picture of a sensor/actuator system responsible for sensing different environmental variables and acting accordingly.

Table 5
Environmental variables used in the weather prediction problem, indicating the number of bits used in their representation and the discretisation intervals used.

Var.   # bits   Discretisation
T      4        [−2.5, 0, 2.5, 5, 7.5, 10, 12.5, 15, 17.5, 20, 22.5, 25, 27.5, 30, 32.5, 35] °C
Ws     2        [0, 2, 5, 9] m/s
Wd     2        N: [335°–45°), E: [45°–135°), S: [135°–225°), W: [225°–335°)
H      2        [5, 30, 55, 80] %
I      2        [100, 300, 600, 900] W/m²


6.3. Fall detection system

Monitoring elderly or disabled people is in growing demand in modern societies, mainly because these people are dependent and unable to care for themselves. In particular, one important problem is to detect efficiently when a person falls. Different ways of addressing this problem have been proposed, such as using video surveillance systems, although such systems have the drawback of a strong sensitivity to light changes. A more efficient system to detect a person's fall can be constructed using a sensor attached to the person's body. The sensor, in this case, should include a 3-axis accelerometer together with a microcontroller that analyses the person's relative inclination angle and movement, in order to detect abnormal situations, such as a strong downward movement or any sudden or violent displacement. Fig. 9 shows a schematic drawing of the described system.

Deducing the logic that describes the behaviour of falls is a complex and very time-consuming task. However, using a neural network based system simplifies this task enormously, as a training process based on observed patterns can be implemented.

In such a system, the accelerometer senses the position of the device, acquiring the position with respect to the x, y and z axes. These coordinates are then sent to the microcontroller as values normalised between 0 and 255, so they can be stored in a variable of "byte" type. A movement is considered a fall when, in a short period of time (1 s), the position of the device changes from an initial state (vertical) to a final state (horizontal). Thus, a pattern for the system consists of the initial and final positions of the movement plus the output that indicates whether or not the person has suffered a fall ((x0, y0, z0, x1, y1, z1) → fall?). The number of patterns that can be stored in the system is delimited by Eq. (7), which in the present case leads to a maximum of 167 patterns. As this number is quite small for such a complex problem, only patterns that a "supervisor" considers wrong will be used. For example, a pattern is sensed every second and the system determines whether or not a fall has occurred; if the supervisor (who during the training of the system can be the person carrying the device) decides that the decision of the neural network model is wrong, this pattern is stored in memory and a new neural network model is calculated. The previous description is a modification of the standard C-Mantec algorithm that includes a more efficient storage of patterns, maximising the use of the limited memory resources of a microcontroller. For testing the system, a sensor node was attached to a subject producing common movements such as sitting down, walking and stopping. The microcontroller has periodically gathered accelerometer data every second, and 30,000 patterns have been obtained as the data set.
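A sketch of the mistake-driven storage loop described above is given next; the pin assignments and helper stubs (readAccelByte, supervisorSaysFall, networkOutput, storeTrainingPattern, retrainNetwork) are illustrative placeholders standing in for the implementation of Sections 2 and 4, not the authors' code.

```cpp
// Minimal sketch: only misclassified patterns are stored and trigger retraining.
const int BUTTON_PIN = 2;                // hypothetical "supervisor" button

byte x0, y0, z0;                         // previous position (normalised 0-255)

byte readAccelByte(int analogPin) {      // accelerometer axis -> 0-255
  return (byte)(analogRead(analogPin) >> 2);     // 10-bit ADC to 8 bits
}
bool supervisorSaysFall() {              // the person wearing the device
  return digitalRead(BUTTON_PIN) == HIGH;
}
// Stubs standing in for the C-Mantec model of Sections 2 and 4.
bool networkOutput(const byte pattern[6]) { return false; }
void storeTrainingPattern(const byte pattern[6], bool fall) {}
void retrainNetwork() {}

void setup() { pinMode(BUTTON_PIN, INPUT); }

void loop() {
  byte x1 = readAccelByte(A0), y1 = readAccelByte(A1), z1 = readAccelByte(A2);
  byte pattern[6] = {x0, y0, z0, x1, y1, z1};

  bool predictedFall = networkOutput(pattern);   // current C-Mantec model
  bool actualFall    = supervisorSaysFall();

  // Only misclassified patterns are stored (at most 167, from Eq. (7));
  // each stored pattern triggers a retraining of the network.
  if (predictedFall != actualFall) {
    storeTrainingPattern(pattern, actualFall);
    retrainNetwork();
  }
  x0 = x1; y0 = y1; z0 = z1;
  delay(1000);                           // one sample per second
}
```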

Afterwards, the neural network based system was checked, starting the process with no patterns in memory and storing an instance of the problem only when the classification obtained does not match the supervisor output, which in our case was given by the person wearing the system.

Fig. 10 top shows the time evolution of the system as the number of presented patterns increases.

Fig. 8. Learning time and consumption (top graph) and accuracy (bottom graph) of a sensor/actuator used for an automatic watering system (see text for details).

Fig. 9. A sensor based system attached to a person's body for the implementation of a person fall detection system.

Fig. 10. Training time (top) and accuracy (bottom) of a fall detection system as a function of the number of observed patterns. Each graph includes average results over 30 independent samples, together with results corresponding to a single case.


In Fig. 10, the mean of 30 executions is displayed together with a single case, in order to observe the level of variability existing in the problem. The bottom graph of Fig. 10 shows the level of accuracy obtained as the patterns are presented to the system (the graph also includes the mean and a single observed case).

It is worth mentioning that the way this problem is treated is different from the previous two cases, as patterns are stored in the microcontroller memory only if they are misclassified.

7. Conclusions

The C-Mantec constructive neural network algorithm has been successfully implemented in a microcontroller board, adapting it to overcome the limitations imposed by the limited memory and computing speed of the hardware device. The correct implementation of the algorithm has been verified in comparison to the originally published results, finding that, as the number of inputs increases, the microcontroller implementation needs a small number of extra neurons, and a slight reduction in performance accuracy due to rounding effects can be observed. Furthermore, a thorough comparison of the differences between the floating point and fixed-precision representations has been carried out, concluding that better results can be obtained with an 8-bit fixed-precision representation, leading to computation times approximately five times faster than with the standard floating point representation.

The implemented algorithm has been employed as a sensor/actuator system and applied to three case studies in order to demonstrate the efficiency and versatility of the resulting application. The three case studies chosen are problems defined in changing environments, and thus the decision-making of the sensor/actuator has to be adapted according to the observed changes, requiring a retraining of the neural network model that controls the decision process.

The observed reprogramming times are significantly low in the three case studies, and the energy consumption of the device is also quite small; even though a comparison with the traditional case, in which the new code has to be transmitted from a central control unit, has not been carried out, the results suggest a very important potential reduction.

As an overall conclusion, we have shown the suitability of C-Mantec for its application to a dynamic task using an Arduino UNO microcontroller. Nowadays, given the existence of devices with much more powerful computing resources than the considered board, the present study confirms the potential of the proposed algorithm for its application in real tasks where sensors/actuators are needed.

Acknowledgements

The authors acknowledge support from Junta de Andalucía through Grant Nos. P10-TIC-5770 and P08-TIC-04026, and from CICYT (Spain) through Grant No. TIN2010-16556 (all including FEDER funds).

References

Aiello, F., Bellifemine, F.L., Fortino, G., Galzarano, S., Gravina, R., 2011. An agent-based signal processing in-node environment for real-time human activity monitoring based on wireless body sensor networks. Eng. Appl. Artif. Intell. 24, 1147–1161.

Aleksendrić, D., Jakovljević, I., Irović, V., 2012. Intelligent control of braking process. Expert Syst. Appl. 39.

Angelov, P., 2010. Evolving Takagi–Sugeno fuzzy systems from data streams (eTS+). In: Angelov, P., Filev, D., Kasabov, N. (Eds.), Evolving Intelligent Systems. John Wiley and Sons and IEEE Press, IEEE Press Series in Computational Intelligence, pp. 21–50.

Atmel, Datasheet 328. http://www.atmel.com/Images/doc8161.pdf.

Canete, E., Chen, J., Luque, R., Rubio, B., 2012. Neuralsens: a neural network based framework to allow dynamic adaptation in wireless sensor and actor networks. J. Netw. Comput. Appl. 35, 382–393.

Cela, A., Yebes, J.J., Arroyo, R., Bergasa, L.M., Barea, R., López, E., 2013. Complete low-cost implementation of a teleoperated control system for a humanoid robot. Sensors 13, 1385–1401.

Chakraborty, S., Ghosh, R., Ghosh, M., Fernandes, C.D., Charchar, M.J., Kelemu, S., 2004. Weather-based prediction of anthracnose severity using artificial neural network models. Plant Pathol. 53, 375–386.

Dhanalakshmi, P., Palanivel, S., Ramalingam, V., 2011. Pattern classification models for classifying and indexing audio signals. Eng. Appl. Artif. Intell. 24, 350–357.

Farooq, U., Amar, M., ul Haq, E., Asad, M.U., Atiq, H.M., 2010. Microcontroller based neural network controlled low cost autonomous vehicle. In: Proceedings of the 2010 Second International Conference on Machine Learning and Computing. IEEE Computer Society, Washington, DC, USA, pp. 96–100.

Franco, L., Elizondo, D., Jerez, J., 2009. Constructive Neural Networks. Springer-Verlag, Berlin.

Frean, M., 1990. The upstart algorithm: a method for constructing and training feedforward neural networks. Neural Comput. 2, 198–209.

Gómez, I., Franco, L., Jerez, J., 2009. Neural network architecture selection: can function complexity help? Neural Process. Lett. 30, 71–87.

Han, C.C., Kumar, R., Shea, R., Srivastava, M., 2005. Sensor network software update management: a survey. Int. J. Netw. Manag. 15, 283–294.

Haykin, S., 1994. Neural Networks: A Comprehensive Foundation. Prentice Hall.

Huang, G.B., Saratchandran, P., Sundararajan, N., 2005. A generalized growing and pruning RBF (GGAP-RBF) neural network for function approximation. IEEE Trans. Neural Netw. 16, 57–67.

Hunter, D., Hao, Y., Pukish, M., Kolbusz, J., Wilamowski, B., 2012. Selection of proper neural network sizes and architectures – a comparative study. IEEE Trans. Ind. Appl. 8, 228–240.

Kodogiannis, V.S., Boulougoura, M., Wadge, E., Lygouras, J.N., 2007. The usage of soft-computing methodologies in interpreting capsule endoscopy. Eng. Appl. Artif. Intell. 20, 539–553.

Kopetz, H., 1997. Real-Time Systems: Design Principles for Distributed Embedded Applications, 1st edition. Kluwer Academic Publishers, Norwell, MA, USA.

Kornuta, J.A., Nipper, M.E., Dixon, B., 2013. Low-cost microcontroller platform for studying lymphatic biomechanics in vitro. J. Biomech. 46 (1), 183–186.

Lakshmi, K., Subadra, M., 2013. A survey on FPGA based MLP realization for on-chip learning. Int. J. Sci. Eng. Res., 1–9.

Lian, K.Y., Hsiao, S.J., Sung, W.T., 2013. Intelligent multi-sensor control system based on innovative technology integration via ZigBee and Wi-Fi networks. J. Netw. Comput. Appl. 36, 756–767.

Lughofer, E., 2011. Evolving Fuzzy Systems – Methodologies, Advanced Concepts and Applications. Studies in Fuzziness and Soft Computing, vol. 266. Springer. http://dx.doi.org/10.1007/978-3-642-18087-3.

Luque-Baena, R.M., Urda, D., Subirats, J.L., Franco, L., Jerez, J.M., 2013. Analysis of cancer microarray data using constructive neural networks and genetic algorithms. In: Rojas, I., Guzman, F.M.O. (Eds.), Proceedings of the IWBBIO, International Work-Conference on Bioinformatics and Biomedical Engineering, pp. 55–63.

Mahmoud, S., Lotfi, A., Langensiepen, C., 2013. Behavioural pattern identification and prediction in intelligent environments. Appl. Soft Comput. 13, 1813–1822.

Mamdoohi, G., Fauzi Abas, A., Samsudin, K., Ibrahim, N.H., Hidayat, A., Mahdi, M.A., 2012. Implementation of genetic algorithm in an embedded microcontroller-based polarization control system. Eng. Appl. Artif. Intell. 25, 869–873.

Marwedel, P., 2006. Embedded System Design. Springer-Verlag New York, Inc., Secaucus, NJ, USA.

Ortega-Zamorano, F., Subirats, J.L., Jerez, J.M., Molina, I., Franco, L., 2013. Implementation of the C-Mantec neural network constructive algorithm in an Arduino UNO microcontroller. Lect. Notes Comput. Sci. 7902, 80–87.

Oxer, J., Blemings, H., 2009. Practical Arduino: Cool Projects for Open Source Hardware. Apress, Berkeley, CA, USA.

Park, K., Shin, H., 2013. Stock price prediction based on a complex interrelation network of economic factors. Eng. Appl. Artif. Intell. 26, 1550–1561.

Rassam, M., Zainal, A., Maarof, M., 2013. An adaptive and efficient dimension reduction model for multivariate wireless sensor networks applications. Appl. Soft Comput. 13, 1978–1996.

Sayed-Mouchaweh, M., Lughofer, E., 2012. Learning in Non-Stationary Environments: Methods and Applications. Springer, New York.

Sengupta, S., Das, S., Nasir, M., Panigrahi, B.K., 2013. Multi-objective node deployment in WSNs: in search of an optimal trade-off among coverage, lifetime, energy consumption, and connectivity. Eng. Appl. Artif. Intell. 26, 405–416.

Shaikh, R., Thakare, V., Dharaskar, R., 2010. Efficient code dissemination reprogramming protocol for WSN. Int. J. Comput. Netw. Secur. 2, 116–122.

Subirats, J., Franco, L., Jerez, J., 2012. C-Mantec: a novel constructive neural network algorithm incorporating competition between neurons. Neural Netw. 26, 130–140.

Subirats, J.L., Baena, R.M.L., Urda, D., Ortega-Zamorano, F., Jerez, J.M., Franco, L., 2013. Committee C-Mantec: a probabilistic constructive neural network. Lect. Notes Comput. Sci. 7902, 339–346.

Taylor, J.W., Buizza, R., 2002. Neural network load forecasting with weather ensemble predictions. IEEE Trans. Power Syst. 17, 626–632.

Urda, D., Canete, E., Subirats, J.L., Franco, L., Llopis, L., Jerez, J.M., 2012. Energy-efficient reprogramming in WSN using constructive neural networks. Int. J. Innov. Comput. Inf. Control 8, 7561–7578.


Urda, D., Luque, R.M., Jiménez, M.J., Turias, I., Franco, L., Jerez, J.M., 2013. A constructive neural network to predict pitting corrosion status of stainless steel. Lect. Notes Comput. Sci. 7902, 88–95.

Wang, J., Xu, W., Gong, Y., 2010. Real-time driving danger-level prediction. Eng. Appl. Artif. Intell. 23, 1247–1254.

Wang, Q., Zhu, Y., Cheng, L., 2006. Reprogramming wireless sensor networks: challenges and approaches. IEEE Netw. 20, 48–55.

Yick, J., Mukherjee, B., Ghosal, D., 2008. Wireless sensor network survey. Comput. Netw. 52, 2292–2330.

Zhai, Y.J., Yu, D.L., 2009. Neural network model-based automotive engine air/fuel ratio control and robustness evaluation. Eng. Appl. Artif. Intell. 22, 171–180.
