


Appl. Math. Inf. Sci. 12, No. 2, 345-359 (2018) 345

Applied Mathematics & Information Sciences
An International Journal

http://dx.doi.org/10.18576/amis/120209

Embedded Real-Time Video Surveillance System based on Multi-Sensor and Visual Tracking

Laith M. Fawzi 1,*, Sideeq Y. Ameen 2, Salih M. Alqaraawi 3, and Shefa A. Dawwd 1

1 Department of Computer Engineering, University of Mosul, Mosul, Iraq
2 Deanship of Research and Graduate Studies, Applied Science University, Bahrain
3 Department of Computer Engineering, University of Technology, Baghdad, Iraq

Received: 10 Dec. 2017, Revised: 12 Jan. 2018, Accepted: 17 Jan. 2018
Published online: 1 Mar. 2018

Abstract: This paper describes the design and implementation of an embedded remote video surveillance system for general-purpose security applications. The proposed system is able to detect and report vandalism, tampering, and theft activities before they take place via SMS, Email, or a phone call. The system has been enriched with a vast range of sensors to increase its sensing capability against different types of attacks. Moreover, the proposed system has been enhanced by adding a visual verification technique to overcome false alarms generated by sensors, where a video camera is integrated within the system software to capture video footage and to verify and track abnormal events, taking into consideration bandwidth consumption and real-time processing. Finally, the system was implemented using an SBC (Raspberry Pi) as a working platform, supported by OpenCV and Python as the programming language. The results proved that the proposed system can achieve monitoring and reporting in real time: the average processing time required to complete all tasks for each frame (from the video source to the broadcasting stage) does not exceed 64%. Moreover, the proposed system achieved a reduction in the utilized data size as a result of using image processing algorithms, reaching an average of 91%, which decreased the amount of transferred data to an average of 13.4 Mbit/sec and increased the bandwidth efficiency to an average of 92%. Finally, this system is characterized by being flexible, portable, easy to install, expandable, and cost-effective. Therefore, it can be considered an efficient technology for different monitoring purposes.

Keywords: Embedded system, multi-sensors, visual verification, remote monitoring, keyframe, subimage.

1 Introduction

In recent years, there has been a great deal of interest in video surveillance systems to meet the requirements of different security aspects such as industrial control and monitoring, environmental monitoring, personal care, and so on. They have contributed significantly to the reduction of crime and the preservation of property. From small offices to large factories, these systems have become necessary to satisfy the security requirements in various areas of life [1]. Video surveillance technology enables a single operator to monitor various activities in a complex zone using a distribution of networked video sensors (cameras). The goal of these systems is to collect and disseminate real-time information from the monitored area to the observer in order to improve awareness of negative phenomena [2]. Moreover, the capturing, processing, and transmitting of video footage helps in the documentation process and in decision making. Therefore, these systems can bring peace of mind and improve the management of organizations. Since video cameras became widely available in the market with different specifications and reasonable prices, surveillance systems have become more popular [3]. Early surveillance systems were used to monitor large areas for security purposes. The video cameras constantly monitored the area of interest, and the data streams were sent to a central station for further processing. In these systems, energy, storage capacity, and bandwidth were not a real problem: each camera had its own power supply and was connected to a server site via a wired link. However, these systems were not smart enough to detect any change by themselves [4]. At present, researchers are continuously working towards networked surveillance systems. The reason for this trend is the increasing

* Corresponding author e-mail: Laith [email protected]

© 2018 NSP
Natural Sciences Publishing Cor.


number of accidents and instability in most parts of the world. When several surveillance cameras are used, a huge amount of visual data is captured. These data require continuous monitoring and checking for the existence of intrusions or any abnormal events. Therefore, such a system requires constant human wakefulness. However, humans have limited capacity to perform such tasks; the task may become very boring and prone to human error. In addition, the huge amount of video data obtained from the surveillance cameras consumes a wide range of bandwidth and storage capacity when transmitted [5,6,7]. Today, current studies focus on embedded smart surveillance systems rather than traditional systems. These systems have the ability to process visual data locally and to communicate with each other and with the server via wireless links, in addition to having more flexibility and scalability. However, power consumption and bandwidth are critical constraints in these systems, as the nodes are powered by batteries and the data is sent over a limited wireless bandwidth. Therefore, processing data locally can reduce the amount of transmitted data, which uses the bandwidth of the system effectively. Moreover, this reduction will minimize the energy consumed in communication [4,8,9,10,11,12,13]. In this paper, a smart real-time video surveillance system is presented to enable observers to monitor the safety of an area of interest using a wide range of sensors supported by a visual surveillance camera, increasing system accuracy, reliability, and performance with minimal usage of bandwidth, power, and storage capacity. In addition, the proposed system was equipped with a stepper motor controller to increase the system's Field of View (FoV) and keep tracking objects. This system is able to work under different operation modes (manual or automatic), mastered by a web server hosted on the surveillance node. The proposed system can ensure real event notification via SMS, email, and phone call (ringing) to notify the persons in charge that a new event has occurred. Finally, the system was implemented using an SBC (Raspberry Pi) and an Arduino Mega as a working platform, supported by OpenCV and Python as the programming language.

2 The Proposed Surveillance System Frameworks

The proposed surveillance system is composed of two parts: hardware and software. The hardware part contains a group of modules used to implement the smart monitoring system, while the software part is responsible for initialization, video capturing, image processing, object tracking, notification, and video broadcasting. The observer can use a dedicated control room or any client terminal (computer or smartphone) to control the entire system.

2.1 The Proposed System Hardware

The proposed system hardware is composed of five categories: a processing and controlling unit, multi-sensor boxes, a surveillance camera, system communication modules, and a power supply unit, as shown in Figure 1.

Fig. 1: The Entire Proposed System Hardware.

2.1.1 Processing and Controlling Unit

The processing and controlling unit runs a real-time video processing system which can be used for smart monitoring and surveillance applications. The unit was implemented using an ARM processor running at a 1 GHz clock together with an Arduino Mega 2560 microcontroller. The ARM processor, with its communication facilities, power supply, and memory, is represented by the Raspberry Pi B+ (RPI). More details about the specifications of the Raspberry Pi Model B+ are given in [14]. The Arduino Mega 2560 is a microcontroller board based on the ATmega2560. It supports the main processing and controlling unit through a USB cable via a dedicated message format prepared for this purpose. The main task of the Arduino microcontroller is to control the camera movement in Pan, Tilt, Zoom, and Focus via a stepper motor shield stacked on the GPIO bus of the Arduino. More details about the Arduino Mega 2560 board specification are given elsewhere [15].
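The paper does not publish the dedicated RPI-to-Arduino message format, so the helper below is only a hypothetical illustration of what such an ASCII command might look like; the framing, field order, and fixed-width encoding are assumptions, with only the pan/tilt limits taken from the camera description:

```python
def camera_command(pan_deg: int, tilt_deg: int, zoom: int, focus: int) -> str:
    """Build a hypothetical pan/tilt/zoom/focus command for the Arduino link.

    The <...> framing and field widths are invented for illustration;
    only the angle limits come from the paper.
    """
    if not (-90 <= pan_deg <= 90):
        raise ValueError("pan out of range")
    if not (-15 <= tilt_deg <= 55):
        raise ValueError("tilt out of range")
    return f"<P{pan_deg:+04d}T{tilt_deg:+04d}Z{zoom:03d}F{focus:03d}>\n"
```

On the RPI side such a string could be written to the Arduino with pyserial's `Serial.write()`, with the Arduino sketch parsing the fields and stepping the motor shield accordingly.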

2.1.2 Multi-Sensor Box

In this work, a group of sensors is collected together via an Arduino Mega 2560 board to form what are known as multi-sensor boxes. These boxes are distributed for
sensing any event that falls within the surveillance zone. The number of multi-sensor boxes and the method of distribution depend on the zone to be monitored, in addition to the required accuracy and the specifications of the sensors used. The boxes communicate with the processing and controlling unit (Raspberry) via an Ethernet or Wi-Fi adapter. Therefore, two types of multi-sensor boxes were designed (PoE and near-field Wi-Fi). Each box contains multiple types of sensors to sense temperature, flame, motion, shock, and sound. Such diversity in sensors and connection methods gives the proposed surveillance system the ability and flexibility to monitor the area of interest against different intrusions. Figure 2 shows the proposed multi-sensor boxes.

Fig. 2: The Proposed Multi-Sensor Box, (a) PoE Based, (b) Wi-Fi Based.

2.1.3 Surveillance Camera

The surveillance camera is used to capture video footage from the area of interest and to verify events detected via the multi-sensor boxes. The camera is driven by a stepper motor driver that makes it capable of moving (-90 to +90) in Pan and (-15 to +55) in Tilt to keep tracking objects via the Arduino SPI interface. Furthermore, a pair of DC motor controllers has been used to control the zoom and the focus. The video camera used in this project is a CamScan DSP 36x Color Video Camera with digital zoom, autofocus, and wide angle. For more details and specifications of this camera, see [16].

2.1.4 System Communication Modules

System communication modules deal with the sensors, video camera, GPS system, Arduino microcontroller, headquarters, and observers via different types of connection methods: Wi-Fi, Ethernet, USB, and GSM networks. The Raspberry Pi B+ unit has several connection ports (USB, Ethernet, and GPIO), and the newer Raspberry Pi 3 also has a built-in Wi-Fi port. These ports have been exploited to achieve the research objectives and meet the main requirements for controlling and managing the system. The functions of these ports are to communicate with:

1. Video input capture card via a USB2 port. In this case, a USB video capture card was used to connect the analog video camera to the RPI via a USB2 port, since the RPI does not have an AV input. Choosing a capture device depends on the video format to be captured, the frame rate, video editing, and OS support; for example, MPEG4 (a sequence of JPEGs) at 25 fps with Linux OS support. Moreover, the proposed system can support any digital camera (webcam) directly via an RPI USB2 port.

2. Arduino Mega controller board via another USB2 port. Through this port, and via the shield stepper motor driver on the Arduino Mega 2560, the system can control the Azimuth and Elevation of the camera in order to keep tracking the object and keep it in the middle of the scene. The Arduino motor shield R3 can be used to control two 4-wire stepper motors (for Azimuth and Elevation control) or four 2-wire DC motors. Therefore, another DC motor driver (L298N) was used to adjust the camera zoom and focus. The L298N DC motor driver is connected to the Arduino Mega across pins 44, 45, 46, 47, 48, and 49.

3. GSM modem via GPIO pins 1, 6, 8, and 10 (3.3V, GND, Tx, and Rx, respectively). A GSM modem is a specialized type of modem that accepts a SIM card and works over a subscription to a mobile operator, just like a mobile phone. The GSM modem allows the processing and controlling unit (Raspberry PI) to communicate over the mobile network via the RPI serial port. In this work, the GSM modem is used to send the alert message (SMS), make a call, and transfer data through GPRS to the group of persons in charge, in addition to making phone calls to verify that the warning is received. The GSM modem used is the SIM800L; for more details see [17].
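The SMS path can be sketched with the standard GSM text-mode AT command sequence (AT+CMGF=1 then AT+CMGS, with Ctrl-Z terminating the message body). The helper below only builds the command strings; actually driving the SIM800L would mean writing them to the RPI serial port, e.g. with pyserial, which is not shown here:

```python
CTRL_Z = "\x1a"  # terminates the SMS body in GSM text mode

def sms_at_sequence(phone_number: str, text: str) -> list:
    """Return the ordered AT commands for sending one SMS in text mode."""
    return [
        "AT",                          # check that the modem responds
        "AT+CMGF=1",                   # switch the modem to SMS text mode
        f'AT+CMGS="{phone_number}"',   # start a message to this number
        text + CTRL_Z,                 # message body; Ctrl-Z sends it
    ]
```

In practice each command is written followed by CR, waiting for the modem's "OK" or ">" prompt before sending the next.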

4. Wi-Fi router via an Ethernet port, to achieve connection with the multi-sensor boxes and the headquarters (control room) through wired and wireless connection methods. Wi-Fi is based on the 802.11 standards, a set of standards for wireless communication technology created and developed by the Institute of Electrical and Electronics Engineers (IEEE). These standards are mainly used for implementing Wireless Local Area
Networks (WLAN) in the 2.4 GHz and 5 GHz frequency bands. The most common protocols of 802.11 are those defined by 802.11a, 802.11b, 802.11g, and 802.11n; for more details see references [18,19]. The latest release of the IEEE 802.11 series is 802.11n (2009), which is characterized by increased throughput and range through the use of Multiple-Input-Multiple-Output (MIMO) antenna technology. In addition, it increases the channel bandwidth from 20 MHz to 40 MHz and therefore supports a maximum bandwidth of up to 300 Mbps [19]. Today, most high-rate wireless systems use MIMO technologies, including 802.11n, LTE, and WiMAX [18]. In order to transfer the data gathered by the sensors and surveillance camera, several technologies can be used, such as dedicated channels, ZigBee, LTE, Wi-Fi, and WiMAX. These technologies, however, must be weighed against the main requirements: adequate bandwidth and low cost. ZigBee was excluded due to its low bandwidth and coverage range, LTE due to its high cost and low coverage area, and dedicated channels due to their high cost, despite being the best in terms of bandwidth. Therefore, the technology that meets the project requirements in terms of cost, coverage, and bandwidth is the common Wi-Fi.

5. GPS module via the Arduino Mega 2560 board GPIO, using pins 18 (Tx), 19 (Rx), 5V, and GND. The Global Positioning System (GPS) is a navigation system based on GPS satellites that provides Positioning, Navigation, and Timing (PNT) information anywhere on Earth via four or more satellites in line of sight. The development of GPS has advanced many applications that affect many sides of modern life; most electronic devices now use GPS technology, such as cars, watches, cell phones, ATM machines, and so on [18]. In this work, the GPS modem is used to receive the satellite signals that represent the coordinates (latitude, longitude, and altitude) and pass them to the Arduino microcontroller. This information is very useful for determining the time and site of the threat. In addition, the GPS clock signal is used to generate synchronization among surveillance nodes, especially if more than one node has data to send simultaneously. The GPS module used in this work is the GTPA013; it is installed inside the SWVSB and connected to the single board computer (Raspberry PI) through the microcontroller (Arduino Mega). For more information about the GPS module see [20].
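GPS receivers of this class typically stream NMEA 0183 sentences over the serial link, so extracting the coordinates amounts to parsing a sentence such as GGA. A minimal sketch (assuming well-formed input and omitting checksum validation) might look like:

```python
def parse_gga(sentence: str):
    """Parse an NMEA GGA sentence into (latitude, longitude) in decimal degrees.

    Returns None when there is no fix. Checksum validation is omitted.
    """
    fields = sentence.split(",")
    if not fields[0].endswith("GGA") or fields[2] == "":
        return None

    def dm_to_deg(dm: str, hemisphere: str) -> float:
        # NMEA packs coordinates as ddmm.mmmm (degrees and decimal minutes)
        value = float(dm)
        degrees = int(value // 100)
        minutes = value - degrees * 100
        decimal = degrees + minutes / 60.0
        return -decimal if hemisphere in ("S", "W") else decimal

    return dm_to_deg(fields[2], fields[3]), dm_to_deg(fields[4], fields[5])
```

The resulting decimal-degree pair is what the node would attach to an alert message to report the site of the threat.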

Figure 3 illustrates the entire system communication modules.

Fig. 3: The Entire System Communication Modules.

2.1.5 Power Supply Unit

The system has been equipped with an efficient power supply unit and battery to provide the required DC power to all system parts and peripherals. When the main power is on, the battery can be charged in no more than 2 hours in the worst case, and it can run the system for more than 8 hours at full load in the absence of main power. The output DC power is 12 W with different output voltage levels (5/7.5/9/12 V) to support different peripherals. Figure 4 shows the power supply for the proposed system.

Fig. 4: System Power Supply.

2.2 The Main Proposed System Software

This section covers the image processing stage and the surveillance algorithms stage. The image processing stage
explains the image processing algorithms and techniques, which represent the core of the proposed system, since the results of this stage affect the decision-making accuracy of the other stages. At this stage, a set of specialized algorithms and tools has been adopted to perform the image processing tasks. The OpenCV library was selected as the toolkit for developing this stage, coupled with powerful programming languages (C++, Python, and Java) that are suitable for the Linux environment. The surveillance algorithms stage describes the techniques used by the system to do its job. The required actions of this stage depend on the outputs of the image processing stage. This stage also represents the verification mechanism for alerts when the system is activated as a result of a specific event detected by one of the sensors. Therefore, without image processing, the decisions may be inappropriate or wrong as a result of sensor false alarms. Figure 5 shows the sequence of the image processing and surveillance algorithm stages for the proposed system.

Fig. 5: System Main Algorithm.

First the image processing algorithm stage is explained, then the surveillance algorithms stage is illustrated, as follows:

2.2.1 Image Processing Algorithm Stage

As illustrated in Figure 5, the main goal of image processing is to visually detect an intrusion, extract the object, and specify the object boundaries. To achieve this goal, the following procedures must be performed:

1. Frame Buffer Initialization: Before starting the image processing, a frame buffer must be initialized to receive the source video stream via a USB video capture card.

2. Frame Enhancement: A set of operations that deal with image processing based on the changes in the scene of interest. These operations attempt to improve the properties of the images stored in the frame buffer by removing noise, isolating individual elements, and joining disparate elements in an image [21]. The most basic morphological operations are erosion, dilation, opening, and closing. Some of these operations tend to remove foreground pixels from the edges, while others tend to enlarge the boundaries of foreground regions in an image. The result of image enhancement provides information that can be used to identify the frame color ranges, such as the background color range, the foreground color range, and day or night conditions.
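As a rough sketch of this morphological step, erosion and dilation of a binary mask can be written directly in NumPy (the paper itself uses OpenCV's `erode`/`dilate`; this pure-NumPy version with a 3x3 square structuring element is only illustrative):

```python
import numpy as np

def dilate(mask: np.ndarray, k: int = 3) -> np.ndarray:
    """Binary dilation: a pixel turns on if any pixel in its kxk neighbourhood is on."""
    pad = k // 2
    padded = np.pad(mask, pad)
    out = np.zeros_like(mask)
    for dy in range(k):
        for dx in range(k):
            out |= padded[dy:dy + mask.shape[0], dx:dx + mask.shape[1]]
    return out

def erode(mask: np.ndarray, k: int = 3) -> np.ndarray:
    """Binary erosion: a pixel stays on only if its whole kxk neighbourhood is on."""
    pad = k // 2
    padded = np.pad(mask, pad)  # zero padding, so objects shrink at the border
    out = np.ones_like(mask)
    for dy in range(k):
        for dx in range(k):
            out &= padded[dy:dy + mask.shape[0], dx:dx + mask.shape[1]]
    return out
```

Opening (erode then dilate) removes isolated noise pixels, while closing (dilate then erode) fills small holes in the foreground.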

3. Histogram Calculation: A histogram is a graphical representation of the frequency density of image pixel values; it is useful for identifying the distribution of pixels in an image. For an 8-bit grayscale image there are 256 possible intensities, so the histogram graphically displays 256 numbers showing the distribution of pixels among those grayscale values. To enhance the contrast of an image, two methods are applied to the histogram at this stage. The first is histogram stretching and the second is histogram equalization. Histogram stretching (often called normalization) attempts to increase the contrast in an image by stretching its range of intensity values, while histogram equalization adjusts image intensities to enhance contrast [22]. These two methods are useful for emphasizing the boundaries and areas of the target and background.
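Both contrast methods reduce to lookup operations over the 8-bit range; a compact NumPy sketch (the paper would use OpenCV's `equalizeHist`, so this is only illustrative and assumes a non-constant image for equalization):

```python
import numpy as np

def stretch(img: np.ndarray) -> np.ndarray:
    """Histogram stretching: linearly map [min, max] onto [0, 255]."""
    lo, hi = int(img.min()), int(img.max())
    if hi == lo:
        return np.zeros_like(img)  # flat image: nothing to stretch
    scaled = (img.astype(np.float64) - lo) * 255.0 / (hi - lo)
    return scaled.astype(np.uint8)

def equalize(img: np.ndarray) -> np.ndarray:
    """Histogram equalization via the cumulative distribution function."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]  # first non-zero bin of the CDF
    lut = np.rint((cdf - cdf_min) * 255.0 / (cdf[-1] - cdf_min)).astype(np.uint8)
    return lut[img]
```

Stretching preserves the shape of the histogram; equalization flattens it, which is why it tends to bring out low-contrast targets.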

4. Background Cancellation: Also known as background subtraction, this is the main processing step in many vision-based applications. It aims to extract the foreground (the intruding object) from the background [23]. This is done by calculating the difference between the current frame and a reference frame (keyframe), after an image preprocessing stage such as de-noising and morphological processing. Through this process, the target is designated. Background subtraction is therefore a very important part of many computer vision applications like surveillance, object detection, and tracking [24].
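The frame-differencing idea reduces to an absolute difference against the keyframe followed by a threshold; a minimal sketch (the threshold value is an assumption, the paper does not state one):

```python
import numpy as np

def subtract_background(frame: np.ndarray, keyframe: np.ndarray,
                        thresh: int = 25) -> np.ndarray:
    """Return a binary foreground mask: 255 where the current frame differs
    from the reference keyframe by more than `thresh`, else 0."""
    diff = np.abs(frame.astype(np.int16) - keyframe.astype(np.int16))
    return np.where(diff > thresh, 255, 0).astype(np.uint8)
```

The widening to int16 avoids uint8 wrap-around when the frame is darker than the keyframe; the resulting mask is what the morphological enhancement step then cleans up.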


5. Target Extraction: Based on the results of the previous process, where the background of the scene has been eliminated, the target extraction stage determines the boundaries of the intruding object (target) in terms of contour(s) that can be used to figure out the target parameters: centre, width, and height.
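For a single foreground blob, the centre/width/height parameters follow directly from the bounding box of the non-zero mask pixels (OpenCV's `findContours`/`boundingRect` would be the real tool; the NumPy sketch below assumes one connected target):

```python
import numpy as np

def target_params(mask: np.ndarray):
    """Return ((cx, cy), width, height) of the foreground pixels, or None."""
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None  # nothing detected in this frame
    x0, x1 = xs.min(), xs.max()
    y0, y1 = ys.min(), ys.max()
    width, height = int(x1 - x0 + 1), int(y1 - y0 + 1)
    centre = (int((x0 + x1) // 2), int((y0 + y1) // 2))
    return centre, width, height
```

These three parameters are exactly what the later tracking-window and servo stages consume.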

2.2.2 Surveillance Algorithms Stage

This stage contains a set of algorithms working together to achieve the objectives of the proposed system. These algorithms can be summarized as follows:

1. Alert Generation and Notification Algorithm
After the target information (centre, width, and height) has been specified by the image processing algorithms, the process of alert generation and notification starts. In other words, the procedures of this stage begin when a target is discovered in the previous stage (Image Processing Algorithm); meanwhile, the video broadcasting stage is in progress. These procedures require initializing a centralized computer placed at the Headquarters (HQ) with a specific IP address for security and optimization purposes (where two or more users simultaneously access the system configuration). Through this computer, the system is controlled remotely, including system setup and surveillance mode selection (Manual, Patrol, and Sensor Event). In addition, the system is capable of reporting the existence of an event (hazard) by sending SMS, Email, and even a phone call to the HQ and the group of persons in charge. As illustrated in Figure 6, the alert generation and notification stage includes the following:

i. Getting the HQ Address: Obtain the information the system (SWVSB) needs in order to communicate with the HQ, such as the website (URL), Email address, and phone number. Such procedures are done in the System Setup stage.

ii. Getting the Address of Persons In-Charge: This step obtains the information that the system needs in order to warn the persons in charge, such as Email addresses and phone numbers. The persons in charge are the people who have the ability to make decisions or to direct technical or security teams remotely.

iii. Stream Broadcasting: This is done after the detection of a new event and the verification of the presence of an intrusion by the multi-sensor boxes, supported by the image processing techniques carried out in the previous stages. The video broadcasting stage is started via web hosting.

iv. Alert Message Construction: At this stage, the alarm messages are built and formatted for sending as SMS and Email.

Fig. 6: Alert Generation and Notification Algorithm.

v. Alert Notification: This stage is synchronized with the video broadcasting stage. Through this stage, the notifications are sent via SMS and Email, in addition to making a phone call. The phone call is used for alert verification purposes only: once a connection is established, the call is terminated. This process can be repeated more than once. After that, the system waits for an ACK in order to disable the alert.

2. Detection and Tracking Algorithm
A set of actions must be taken for the purpose of target detection and tracking. These actions are performed after the frame-gathering process. Some of these actions relate to image processing, like image enhancement, histogram calculation, frame subtraction, and finding contours; other actions include video capturing, the tracking window, the servo system, stream construction, broadcasting, and sending alert notifications. The
tracking process is divided into two stages (tracking window and servo system control). The first stage is concerned with the window that identifies the moving object, while the second is concerned with the servo system, which is responsible for controlling the camera movement (Azimuth/Elevation and Zoom/Focus) to ensure good tracking. Figure 7 shows the actions of the target detection and tracking process for the proposed system, which include:

Fig. 7: Target Detection and Tracking Algorithm.

i. Color Conversion: The captured frames are in RGB format; these take a long time to process and a lot of memory to buffer in the various stages of image processing, target detection, and tracking. Therefore, the frames are converted to grayscale format. This facilitates and accelerates the arithmetic operations of image processing, since they deal with one color component (intensity) instead of three (RGB), and thus take less computational time.
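The conversion is a weighted sum of the three channels; a sketch using the standard ITU-R BT.601 luma weights (the same weights OpenCV's RGB-to-gray conversion uses):

```python
import numpy as np

def rgb_to_gray(rgb: np.ndarray) -> np.ndarray:
    """Convert an (H, W, 3) RGB image to an (H, W) grayscale image.

    Uses the BT.601 weights: 0.299 R + 0.587 G + 0.114 B.
    """
    weights = np.array([0.299, 0.587, 0.114])
    return np.rint(rgb.astype(np.float64) @ weights).astype(np.uint8)
```

The memory saving is the 3:1 ratio mentioned above: one byte per pixel instead of three.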

ii. The Image Processing Part of the Tracking Process:

a. If the received frame is a keyframe:
• Enhance the frame by applying some image processing tools (mask filters) such as erode and dilate.
• Subtract the current keyframe from the previous keyframe to produce the keymask. The keymask is a bi-color template (mask) that marks the target region in white and the background in black, to facilitate the in-scene target search algorithm. The frame is scanned from four directions to determine the margins of the target and to calculate the centroid. This process is done once every second.

b. Otherwise (not a keyframe):
• Calculate the histogram (stretching, equalization).
• Do thresholding and segmentation: thresholding is the simplest method of image segmentation. From a grayscale image, thresholding can be used to create binary images. Image segmentation is the process of partitioning a digital image into multiple segments; it is used to locate objects and boundaries in an image.
• Generate a new mask (binary image): a binary image is a digital image that consists of two colors, typically black and white. The color used for the object is the foreground color, while the rest of the image is the background color. The new mask is more recently updated than the keymask, because the new mask is calculated for each frame (every 40 msec) while the keymask is calculated once every second.
• New mask enhancement: the purpose of this enhancement is to clarify the resultant image (new mask) and remove appendages by applying erode and dilate mask filters.

iii. Tracking Process: Comparing the new mask with the keymask leads to the extraction of the target contour. The contour is used to calculate the target coordinates (centre, width, and height). After that, the target tracking stage is initiated by drawing a quadrilateral shape surrounding the target (the tracking window). This window follows the target's movements and shares the target's centre. Simultaneously, the servo system controller is continuously updated with the new target coordinates in order to steer the camera toward the target and keep the tracking window in the middle of the monitoring screen. The servo system is responsible for the overall camera movement, ensuring good tracking and keeping the tracked object within the camera's Field of View (FoV). The stepper motor driver is responsible for the Azimuth and Elevation control directions, while the DC motor driver is responsible for adjusting the zoom level and focus control. Continuous correction of the coordinates and periodic updating of the servo

c© 2018 NSPNatural Sciences Publishing Cor.


direction will keep the target (intruder) within the tracking zone. Therefore, a series of processes must be applied at each frame time. Figure 8 illustrates the procedures of the servo system control stage.

Fig. 8: Servo System Control Algorithm.

These processes can be summarized as:

a. Calculating the new location of the tracking window (centre, width, and height), which is computed for each frame during the image processing stage (finding the target contours).

b. Updating the servo system with the new information (Azimuth/Elevation and Zoom/Focus), taking into consideration the previous values stored in the accumulated buffer. The accumulated buffer always retains the last ten readings and calculates their average (a moving-window averaging filter). This procedure smooths the camera's movement (no hops).

c. Calculating the error rate by finding the difference between the desired azimuth/elevation angles and the actual azimuth/elevation angles.

d. Correcting the coordinates according to the resulting error rate and the previous value in the accumulated buffer. This procedure is necessary to avoid error accumulation. The correction values are then returned to the accumulator as feedback before the window position is checked again.
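The moving-window averaging filter of step (b) over the last ten readings can be sketched as below. The error helper of step (c) is simplified; the full feedback path through the servo drivers is not detailed in the paper:

```python
from collections import deque

class ServoSmoother:
    """Moving-window averaging filter over the last ten coordinate
    readings, used to smooth camera movement (no hops).

    A simplified sketch: the real controller also feeds corrections
    back through the stepper/DC motor drivers.
    """
    def __init__(self, window=10):
        self.buffer = deque(maxlen=window)  # the accumulated buffer

    def update(self, azimuth, elevation):
        """Add a new reading and return the smoothed command angles."""
        self.buffer.append((azimuth, elevation))
        n = len(self.buffer)
        avg_az = sum(a for a, _ in self.buffer) / n
        avg_el = sum(e for _, e in self.buffer) / n
        return avg_az, avg_el

    @staticmethod
    def error(desired, actual):
        """Error rate: difference between desired and actual angles."""
        return desired - actual
```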

3. Stream Construction and Video Broadcasting Algorithms
In this stage, the video stream construction process starts. This process builds a new video stream composed of a sequence of frames: one keyframe with a fixed size of 640 x 480 pixels, followed by 24 sub-frames (sub-images) that contain only the target (intruder), with an average size of no more than 20% of the full frame size. The 1 keyframe and 24 sub-frames are constructed using the JPEG encoding technique in order to build the video stream packet and prepare it for the broadcasting stage. Note that each header of the constructed sub-frames contains the target image specifications, including the target image's coordinates, so that the sub-images can be overlaid on the keyframe in their specific locations. The video stream is then broadcast on the website via the web host (Apache web server). Based on the results from the previous stages (motion detection and tracking), or upon the system setup and/or an operator request, the video broadcasting process starts. In other words, the broadcasting stage begins when the system detects an intrusion event or an operator initiates a new monitoring task (scheduled patrol or online monitoring). This process requires the preparation of a video stream buffer that collects frames (1 keyframe + 24 sub-images for each second). On each keyframe, the key buffer is erased of the previously overlaid frames and prepared for the MPEG4 encoding technique, while each sub-frame (sub-image) contains only the target information; the sub-image frame is encoded and the frame header is created with the MPEG4 encoder. Moving Picture Experts Group (MPEG4) is a standard for encoding moving pictures. It describes a set of video/audio lossy compression methods which allow the storage and transfer of movies using the available media storage and transmission bandwidth [25]. After that, the generated packets from the encoding process are collected and compressed to be ready for sending via Ethernet. When the packets arrive at the destination, the client's browser extracts the received packets (1 keyframe + 24 sub-images) and the system readout (sensors, system status), and updates the HTML response at its request time. The display process takes place by caching the keyframe and overlaying each sub-image in its location on the keyframe. Figure 9 shows the process of video broadcasting for the proposed system, and Figure 10 illustrates the broadcasting mechanism used in the proposed system (1 keyframe and 24 sub-images). The web server deploys a video broadcasting that supports the


MPEG4/JPEG streaming with some pivotal characteristics that enrich the display by offering stream cache and frame overlay features.

Fig. 9: Video Broadcasting Algorithm.

4. Web Hosting
The availability of web browsers on various types of platforms (computers, mobile phones, and tablets), armed with several types of operating systems and processing power, encourages adopting the web solution as a GUI for mastering the SWVSB. The Raspberry Pi B+ with its ARM processor is capable of hosting a dedicated web server for such a task and can serve as a monitoring and video streaming server in real time. Therefore, a lightweight version of the Apache web server (lighttpd) is used to exploit its capabilities. This version provides a way of setting up a web server without putting too much load on the limited processing capability. It is ideal for providing web access to the Raspberry Pi as a monitoring tool,

Fig. 10: The Broadcasting Mechanism.

or as a lightweight web server for a distinct website. Figure 11 shows the web hosting and management stage.

Fig. 11: Web Hosting and Management Stage.

As illustrated in Figure 11, the Apache web server is automated software that serves content in response to requests. The proposed system is managed through a web application written in the Python language, a powerful scripting language suitable for the smallest prototypes and the largest projects. The idea behind establishing a Python web application is that the Python code can be used to determine the content that is shown to the user and what action to take. In fact, this code (Python) runs via a web server that hosts the website, so the user can use the application without installing anything except the browser [26]. The Python web application deals with the system interface (multi-sensor box, surveillance camera, and servo


system) and other procedures that include image processing, the detection and tracking process, stream construction, alert generation, and video broadcasting on one side, and HTML rendering on the other. HTML rendering is the process of displaying the requested content on the browser screen. The host then forwards the request to the server software (Apache) via a specified port and waits for the client's ACK (confirmation) in order to end the processes. Finally, the proposed system runs across the system interface (web page). The observer can use any web browser to display the system web page (http://SWVSB.box) on any terminal or smartphone. The system web page has a friendly Graphical User Interface (GUI) hosted on the Raspberry Pi as a web server. Through this interface, the system is managed and its operating statuses are watched, such as system parameters info (power supply status, LAN status, Wi-Fi status, and GPS status), video streaming display and control info (display of the broadcast video, alert status, date, time, control of the camera movement in manual/patrol mode, and resetting the alert), remote multi-sensor box info (sensor statuses of the POE and Wi-Fi multi-sensor boxes), GSM network info (GSM status, phone numbers, and message status), and system setup. The system setup web page is used to determine sensor settings (on/off), set the numbers of the persons in charge, set the appropriate warning messages, set network settings, and display the sensor log information. Figure 12 shows the proposed SWVSB web page system interface, while Figure 13 shows the setup web page of the proposed system.
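As an illustration only, a minimal status endpoint of the kind such a Python web application would expose can be sketched with Python's built-in http.server. The route name, status fields, and port are assumptions, not the paper's actual implementation:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical in-memory system readout; the real system reads this
# from the sensors, GSM modem, and servo controller.
SYSTEM_STATUS = {
    "power": "ok", "lan": "up", "wifi": "up", "gps": "fix",
    "alert": False, "mode": "patrol",
}

def render_status(status):
    """Serialize the system readout sent alongside each video packet."""
    return json.dumps(status).encode("utf-8")

class StatusHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/status":
            body = render_status(SYSTEM_STATUS)
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

# To serve (blocking call):
# HTTPServer(("0.0.0.0", 8080), StatusHandler).serve_forever()
```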

Fig. 12: The Proposed System Web Page Interface.

5. System Status Update Algorithm
The proposed system deals with two types of users, HQ and persons in charge, to administer the system and take the necessary actions. HQ is an authorized user who is able to access the system setup through the system homepage via a specific IP address and make

Fig. 13: The Proposed Setup Web Page.

the required modifications (select monitoring modes, turn sensors on/off, and ACK). The person in charge is the user who receives the alert messages and reports. Note that authorization to access the system setup can also be given to this user (person in charge). Therefore, the system must respond to these orders and update the system status. The system status can be grouped into five categories:

a. System Parameters: updating the system status parameters (power supply, security, and network status) keeps the system free of faults. Therefore, updating the system settings helps keep the system in an up-to-date state.

b. Video Streaming and Control: in any mode of operation (manual, patrol, and sensor event), the video stream is initiated to alert the observer about the new situation in the area of interest.

c. Remote Multi-Sensor Box: the information from the multi-sensor boxes is exploited to estimate the in-field environmental conditions and update the sensor readouts on the website.

d. GSM Network: the GSM network status is updated to monitor the network connection with the GSM service provider.

e. System Setup: updating the system setup web page enables the observer to review the sensor readouts and change the system settings.

These five categories are updated in the form of request/response protocols and system event/acknowledgment across the web server and web applications to update the HTML page. Figure 14 illustrates the mechanism of system status updating for the proposed system. Finally, Figure 15 (a and b) shows the overall system design. The proposed system is protected by an IP65 shield box with dimensions of 300 x 400 x 165 mm, so all the electronic components are protected against various environmental conditions such as water, dust, and humidity, as well as tampering and sudden shocks.


Fig. 14: System Status Update Algorithm.

3 Proposed System Hardware and Software Achieved Features

The design and implementation of the proposed system hardware and software achieved the following features:

1. It is an embedded real-time video surveillance system.
2. The SWVSB, POE sensor box, and Wi-Fi sensor box are controlled remotely through a wireless network and a web interface (simple GUI) that works via any web browser, without extra software or a dedicated control room. This makes the system easy (friendly) to use and easy to configure (setup and management).

3. A set of sensors integrated with the video camera provides high accuracy in detecting intrusions, which decreases false alarms and gives true alerts. Therefore, the proposed system can help minimize human errors and assist in avoiding serious accidents.

4. The proposed surveillance system can be designed and implemented using multiple types of video cameras, such as CCTV, IP, and webcam.

5. The camera Pan in Azimuth is about -90° to +90° and the camera Tilt in Elevation is about -15° to +55°. This design enables the system to track events over a wide angle.

Fig. 15: The Proposed Overall System Design; (a) The Proposed SWVSB Main Components, (b) The Proposed System Design After Packing.

6. The smartness and flexibility of the system allow it to detect and track unwanted events within the area of interest using three surveillance modes (Manual, Patrol, and Sensor Events).

7. The system is easy to install (not complicated), cost-effective (low cost), and resistant to environmental hazards. The RPI can work in a harsh environment with a fan-less CPU. These features allow the system to be deployed widely in order to monitor large areas.

8. Efficient video transmission with frame characteristic reduction and SWVSB synchronization helps the system maintain the available bandwidth, since the proposed system takes the bandwidth consumption of each surveillance node into account. Therefore, replicating nodes to cover a large monitoring zone is one of the priorities of the system.

9. Standard video stream broadcasting for more viewer compatibility; hence, MPEG4 encoding is used as the video format.

10. Reliable notifications (SMS, ringing, email, and captured video stream).

11. Smart power consumption, because the system wakes on alert.


12. The proposed system has reasonable dimensions (300 x 400 x 165 mm) and is lightweight (1.5 kg). Therefore, it can simply be mounted on any column.

4 Results and Discussion

This section presents the performance evaluation of the proposed system algorithms in order to show the advantages of the proposed system and the efficiency of the algorithms used. The results are divided into four categories: frame size comparison (received and sent), bandwidth utilization, bandwidth efficiency, and system processing time.

4.1 Frame size comparison

The source data size (captured) is compared to the output data size (sent), as shown in Figure 16. As illustrated in

Fig. 16: Frame Size Comparison.

Figure 16, the average JPEG frame size is about 95 KB for the source frames, while the average JPEG output frame size is about 8 KB. The frame size reduction in the output frames of the proposed system is due to the fact that they contain sub-images holding only the target (intruder), truncated from the full frame (source) through the use of image processing and object detection techniques. As a result, the output JPEG frames achieve an average reduction of about 91% in data size compared to the source frames. Moreover, the source JPEG frames consumed about 122 MB of storage capacity on disk without image processing, while the proposed system consumes about 25.9 MB of storage capacity.
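The reported percentages follow directly from the measured averages quoted above; a quick arithmetic check:

```python
# Quick check of the reported reduction figures from the averages above.
src_frame_kb, out_frame_kb = 95, 8    # average JPEG frame sizes
src_disk_mb, out_disk_mb = 122, 25.9  # storage consumed on disk

frame_reduction = (src_frame_kb - out_frame_kb) / src_frame_kb * 100
disk_reduction = (src_disk_mb - out_disk_mb) / src_disk_mb * 100

print(f"frame size reduction: {frame_reduction:.1f}%")  # ~91.6%
print(f"disk usage reduction: {disk_reduction:.1f}%")   # ~78.8%
```

These values are consistent with the ~91% frame-size reduction quoted here and the ~78% storage reduction reported in the conclusion.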

4.2 Bandwidth utilization

The bandwidth allocation comparison between broadcasting full frames and the output frames (sub-images)

Fig. 17: Bandwidth Allocation Comparison.

for the proposed system is shown in Figure 17. Figure 17 illustrates the amount of bandwidth consumed when transmitting full color frames without the processing techniques versus the output frames of the proposed system. The full frames consumed about 175 Mbit/sec, while the output frames consumed an average of about 13 Mbit/sec, since the proposed system sends the first frame (keyframe) as a full frame of 640 x 480 pixels every second, while the remaining 24 frames are sent as sub-images whose sizes vary with the intruder size.

4.3 Bandwidth efficiency

The bandwidth efficiency indicates how efficiently the available bandwidth is utilized. Figure 18 illustrates the bandwidth efficiency of the proposed system with respect to conventional systems (without processing). It is clear from Figure 18 that the stream

Fig. 18: Bandwidth Efficiency Comparison.

bandwidth efficiency of the proposed system reaches on average 92% when sending 24 variably sized frames (sub-images) + 1 keyframe every second. The zigzag represents the efficiency of the sub-image


transmission, while the decline pulses represent the transmission efficiency of the keyframes. This decline occurs once every second as a result of sending keyframes without processing.

4.4 System Processing Time

This paragraph illustrates the processing time for each task and the responsiveness of the proposed system in real-time operation, as shown in Figure 19. The figure

Fig. 19: System Processing Time.

shows the percentage of image processing time per task per second for the proposed system. It is noted that the time taken by the proposed system to process the received data (according to the algorithms used) does not exceed 68% of the time specified to complete all the tasks assigned to the system in the worst case, as shown in the top curve (broadcasting). This means that the system can operate with real-time capabilities. In the same manner, Table 1 shows the minimum and maximum time spent by each task individually.

Table 1: Min and Max Task Processing Time.

Task Name         Min-Max Task Time (ms)
Source (input)    1-4
Grayscale         2-3
Histogram         4-5
Mask              2
Target            6-9
Frame             1-3
Stream            1-3
Broadcasting      2-3

This table contains the minimum and maximum time required by each task. If the maximum times are considered, the system needs 32 msec in the worst case to finish all the image processing and broadcasting tasks. This also confirms that the proposed system works in real-time.
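Summing the maximum times in Table 1 confirms the 32 msec worst case and shows that it fits within the 40 msec per-frame budget (25 fps):

```python
# Worst-case per-frame time budget check from Table 1 (max values).
task_max_ms = {
    "Source (input)": 4, "Grayscale": 3, "Histogram": 5, "Mask": 2,
    "Target": 9, "Frame": 3, "Stream": 3, "Broadcasting": 3,
}
FRAME_BUDGET_MS = 40  # 25 fps => 40 msec per frame

worst_case = sum(task_max_ms.values())
print(worst_case)                    # 32 msec
print(worst_case / FRAME_BUDGET_MS)  # 0.8 => within the real-time budget
```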

5 Conclusion

This paper presented the design and implementation of a smart embedded surveillance system capable of detecting, identifying, tracking, and recording unwanted events within the area of interest in real-time, under low computing resource constraints. The system is able to visually verify the alerts coming from the multi-sensor box before sending alert messages, and to broadcast video with less bandwidth consumption. Consequently, this increases the system's efficiency, reliability, and accuracy, and eliminates false alarms. It was also noted that the designed system achieves a reduction in storage capacity (on disk) averaging 78% compared to the source data size without processing. In addition, the system processing time did not exceed the time specified for each frame (40 msec) from the video source to the broadcasting process; therefore, the system maintained real-time operation at all stages. Finally, this system is characterized by being flexible, portable, easy to install, expandable, and cost-effective. Therefore, it can be considered an efficient technology for different monitoring purposes such as border areas, ports, government buildings, and important facilities. For future work, the system can be extended by adding a central server with a database management system for data logging, updating, and analysis, where the database management system enables the data coming from the multi-sensor boxes to be stored and queried by the client to improve the alertness of the decision-making criteria. Moreover, improving the overall system performance through a mechanism that uses cloud management and the Internet of Things (IoT) to simplify remote monitoring can also be carried out.

References

[1] Solanki, K., Chaudhary, B. & Durvesh, A. Wireless Real-Time Video Surveillance System Based on Embedded Web Server and ARM9. Int. J. Adv. Res. Eng. Technol. 2, 19-23 (2014).

[2] Robert, C. et al. A System for Video Surveillance and Monitoring. (2000).

[3] Mandrupkar, T., Kumari, M. & Mane, R. Smart Video Security Surveillance with Mobile Remote Control. Int. J. Adv. Res. Comput. Sci. Softw. Eng. 3, 352-356 (2013).

[4] Imran, M. Investigation of Architectures for Wireless Visual Sensor Nodes. (Licentiate Thesis, Mid Sweden University, 2011).

[5] Chen, W., Shih, C.-C. & Hwang, L.-J. The Development and Applications of the Remote Real-Time Video Surveillance System. Tamkang J. Sci. Eng. 13, 215-225 (2010).

[6] Martinel, N. On a Distributed Video Surveillance System to Track Persons in Camera Networks. (PhD Thesis, University of Udine; published by Computer Vision Center / Universitat Autònoma de Barcelona, Spain, 2015).

[7] Sutor, S. Large-scale high-performance video surveillance.(PhD. Thesis, University of Oulu, Faculty of InformationTechnology and Electrical Engineering, 2014).


[8] Patel, P. B., Choksi, V. M., Jadhav, S. & Potdar, M. B. Smart Motion Detection System using Raspberry Pi. Int. J. Appl. Inf. Syst. 10, 37-40 (2016).

[9] Sumalatha, G. & Bharathiraja, S. Implementation of Real-Time Video Streamer System in Cloud. Int. J. Eng. Appl. Sci. 3, 67-70 (2016).

[10] Vaidya, R. R. Efficient Embedded Surveillance System with Auto Image Capturing and Email Sending Facility. Int. J. Tech. Res. Appl. 3, 109-112 (2015).

[11] Arathi, K. & Pillai, A. Low-Power Home Embedded Surveillance System Using Image Processing Techniques. In ICAIECES, 377-389 (Springer, New Delhi, 2016).

[12] Myrala, N. & Vijaya, K. Automatic Surveillance System Using Raspberry Pi and Arduino. IJESRT 3, 635-640 (2017).

[13] Gervacio, M., Esteves, A., Tamondong, R. & Faustino, X. GSM Based Home Embedded Surveillance System Utilizing Pyroelectric Infrared Sensor. In Cebu International Conference on Computers, Electrical and Electronics Engineering, 144-147 (Technological Institute of the Philippines, Philippines, 2017).

[14] Raspberry Pi. Raspberry Pi Foundation. Available at: https://www.raspberrypi.org/products/.

[15] Arduino. Arduino Mega 2560. (2017). Available at: https://www.arduino.cc/en/Main/arduinoBoardMega.

[16] CamScan. Security Surveillance Solutions. 1-102 (2012). Available at: http://www.camscan.ca/productdownload.php?filename=CAMSCAN Product Catalogue - 2015 27MB.pdf.

[17] SIMCom. SIM800L Hardware Design V1.00. 1-70 (2013).

[18] Lereno, G. Raspberry Drone: Unmanned Aerial Vehicle (UAV ). (MSc. Thesis, Technical University of Lisbon, 2015).

[19] Abdelrahman, R., Mustafa, A. & Osman, A. A Comparison between IEEE 802.11a, b, g, n and ac Standards. IOSR J. Comput. Eng. 17, 26-29 (2015).

[20] Lady, A. Adafruit Ultimate GPS. Adafruit Industries 1-38 (2017). Available at: https://cdn-learn.adafruit.com/downloads/pdf/adafruit-ultimate-gps.pdf.

[21] OpenCV. OpenCV 2.4.13.2 documentation. OpenCV Dev Team (2014). Available at: http://docs.opencv.org/2.4/genindex.html.

[22] Tutorialspoint. Digital Image Processing. (2017). Availableat: http://www.tutorialspoint.com/dip/.

[23] Doxygen. Background Subtraction. OpenCV (2017). Available at: http://docs.opencv.org/trunk/db/d5c/tutorial_py_bg_subtraction.html.

[24] Manikandan, R. & Ramakrishnan, R. Human Object Detection and Tracking using Background Subtraction for Sports Applications. Int. J. Adv. Res. Comput. Commun. Eng. 2, 4077-4080 (2013).

[25] Rump, N. MPEG-2 Video. MPEG-2 Copyright Identifier (2006). Available at: http://mpeg.chiariglione.org/standards/mpeg-2/video.

[26] Kenneth Reitz. Web Applications & Frameworks. (2016). Available at: http://docs.python-guide.org/en/latest/scenarios/web/.

[27] Bianca, C. & Fermo, L. Computers & Mathematics with Applications 61, 277-288 (2011).

Laith M. Fawzi received the B.Sc. degree in Electrical and Computer Engineering from the Military Engineering College, Iraq, in 1999, and the M.Sc. degree in Computer Engineering from the University of Technology, Iraq, in 2005. He worked as a lecturer at Al-Rafidain University College in Iraq until 2011, in addition to his ongoing work at the Ministry of Science and Technology (MOST), Iraq. His research interests are in the area of network design. He has experience in networking, communication, and information technology (IT).

Siddeeq Y. Ameen received the BSc in Electrical and Electronics Engineering in 1983 from the University of Technology, Baghdad. Next, he was awarded the MSc and PhD degrees from Loughborough University, UK, in 1986 and 1990 respectively, in the fields of Digital Communication Systems and Data Communication. From 1990 to 2006, Professor Siddeeq worked with the University of Technology in Baghdad, with participation in most of Baghdad's universities. From Feb. 2006 to July 2011 he was Dean of the Engineering College at the Gulf University in Bahrain. From Oct. 2011 to Sep. 2015 he joined the University of Mosul, College of Electronic Engineering, as a Professor of Data Communication. Through his academic life he has published over 100 papers and a patent in the fields of data communication, computer networking, and information security, and has supervised over 100 PhD and MSc research students. He won the first and second best research awards in Information Security from the Arab Universities Association in 2003. Finally, from Sep. 2015 to Sep. 2017, he served as Dean of Research and Graduate Studies at Applied Science University in Bahrain and was awarded the Higher Education Academy Fellowship.


Salih M. Al-Qaraawi received the B.Sc. degree in Electrical and Electronics Engineering from the University of Technology, Baghdad, in 1977. Next, he was awarded the M.Sc. degree in Computer Engineering in 1980 from Control and Systems Engineering, University of Technology, Baghdad, and then the Ph.D. degree in Computer Engineering in 1994 from the University of Technology, Gdansk, Poland, in the field of fault diagnosis and reliability of computer networks. Professor Salih has worked at the University of Technology, Baghdad, since April 1983. He was Dean Assistant of Control and Systems Engineering from 1996-2003 and 2006-2012, and then Dean of Computer Engineering, University of Technology, from 2013 to the present. He has published about 30 papers in the fields of computer networks, reliability, microcontroller applications, data communication, and network security. He has supervised over 10 Ph.D. and 33 M.Sc. students.

Shefa A. Dawwd received the B.Sc. degree in Electronic and Communication Engineering, and the M.Sc. and Ph.D. degrees in Computer Engineering, in 1991, 2000, and 2006, respectively. He is presently a faculty member (Associate Professor) in the Computer Engineering Department, University of Mosul. His main research interests include image & signal processing and their hardware models, parallel computer architecture, hardware implementation, and GPU-based systems. He has authored more than 29 research papers. He has been an editorial member of several national and international journals.
