

LGURJCSIT ISSN:2519-7991

SCOPE OF THE JOURNAL

The LGURJCSIT is an innovative forum for researchers, scientists and engineers in all domains of computer science and technology to publish high-quality, refereed papers. The journal offers articles, surveys and reviews from experts in the field, enhancing insight into and understanding of current trends and the state of the art in modern technology.

Coverage of the journal includes algorithms and computational complexity, distributed and grid computing, computer architecture and high-performance computing, data communication and networks, pattern recognition and image processing, artificial intelligence, cloud computing and VHDL, along with emerging domains such as quantum computing, IoT, data science, cognitive science and vehicular automation. The journal's scope is not limited to the aforementioned areas; its policy is to welcome emerging research trends in the general domain of computer science and technology.

SUBMISSION OF ARTICLES

We invite articles with high-quality research for publication in all areas of engineering, science and technology. All manuscripts submitted for publication are first peer-reviewed to make sure they are original, relevant and readable. Manuscripts should be submitted via email only.

Submission by email with an attached file is strongly encouraged, provided that the text, tables and figures are included in a single Microsoft Word/PDF file. Submission guidelines, along with the official format, are available at the following link: www.research.lgu.edu.pk

Contact: For all inquiries regarding calls for papers, submission of research articles and correspondence, kindly contact us at this address:

LGURJCSIT, Sector C, DHA Phase-VI Lahore, Pakistan

Phone: +92- 042-37181823

Email: [email protected]

Copyright © 2017, Lahore Garrison University, Lahore, Pakistan. All rights reserved.

Published by: Faculty of Computer Science, Lahore Garrison University


LGURJCSIT

Volume 1 Issue 1 January-March 2017

Papers

Prof Dr. Aftab Ahmad Malik 1

Algorithm for Coding Person’s Names in large Databases / Data

Warehouses to Enhance Processing Speed, Efficiency and Reduce

Storage Requirements.

Taimoor Hassan, Shoaib Hassan 13

Analyzing and Resolving Issues in Software Project Risk Management.

Syeda Binish Zahra 22

Algorithm and Technique for Animation.

Shazia Saqib 37

Effects of Mobile Phone Radiation on Human Health.

Tahir Alyas, Nadia Tabassum, Umer Farooq 44

A Quantum Optimization Model for Dynamic Resource

Allocation in cloud computing.

Sadia Batool, Mohtishim Siddique 54

Energy Efficient Schemes for Wireless Sensor Network (WSN)

Mirza Shahwar Haseeb, Rana Muhammad Bilal Ayub, 62

Muhammad Nadeem Ali and Muhammad Adnan Khan

Face and Face Parts Detection in Image Processing

Laraib Kanwal, Muhammad Usman Shahid 69

Denoising of 3D magnetic resonance images using non-local PCA and

Transform-Domain Filter


LGURJCSIT

Patron in Chief: Major General (R) Obaid Bin Zakria, Lahore Garrison University

ADVISORY BOARD

Major General (R) Obaid Bin Zakria, Lahore Garrison University, Lahore, Pakistan

Dr. Aasia Khanam, Forman Christian College Lahore, Pakistan

Dr. Asad Raza Kazmi, GCU, Lahore, Pakistan

Dr. Wajahat M. Qazi, COMSATS, Lahore, Pakistan

Dr. Rehan Akbar University Tunku Abdul Rahman (UTAR) Malaysia

Dr. Sagheer Abbas, NCBA&E, Lahore, Pakistan

Dr. Haider Abbas NUST, Rawalpindi, Pakistan

Dr. Atifa Athar, COMSATS, Lahore, Pakistan

Dr. Shahzad Asif, UET, Lahore, Pakistan

Col (R). Sohail Ajmal, Director QEC, Lahore Garrison University, Lahore, Pakistan

EDITORIAL BOARD

Prof. Dr. Shahid Raza University of South Asia, Lahore, Pakistan

Mr. Tahir Alyas Lahore Garrison University, Lahore, Pakistan

Mr. Adnan Khan Lahore Garrison University, Lahore, Pakistan

Ms. Sadia Kausar Lahore Garrison University, Lahore, Pakistan

Ms. Binish Zahra Lahore Garrison University, Lahore, Pakistan

Mr. Waqar Azeem Lahore Garrison University, Lahore, Pakistan

Mr. Nadeem Ali Lahore Garrison University, Lahore, Pakistan

Chief Editor

Ms. Shazia Saqib , Lahore Garrison University, Lahore, Pakistan

Assistant Editor

Mr. Umer Farooq Lahore Garrison University, Lahore, Pakistan

Correspondence All manuscripts should be sent to editor on email ID:

[email protected]


LGURJCSIT

REVIEWERS COMMITTEE

Dr. Yousaf Saeed University of Haripur, Haripur, Pakistan.

Dr. Sultan Ullah University of Haripur, Haripur, Pakistan.

Dr. Shahzad Asif University of Engineering and Technology, Lahore,

Pakistan.

Dr. M.Abuzar Fahiem Lahore College for Women University, Lahore, Pakistan.

Dr. Atifa Athar COMSATS Institute of Information Technology, Lahore,

Pakistan.

Dr. Asim Qurtuba University of Science and Technology, Peshawar,

Pakistan.

Dr. Sharifullah Khan School of Electrical Engineering and Computer Science.

National University of Sciences and Technology (NUST).

Dr. Kashif Zafar National University of Computer & Emerging Sciences

Lahore, Pakistan.

Dr. M. Aamer Saleem Ch. Hamdard University, Islamabad, Pakistan.

Dr. Tahir Naseem International Islamic University, Islamabad, Pakistan.

Dr. Umer Javeed International Islamic University, Islamabad, Pakistan.

Dr. Sajjad Ahmed Ghori International Islamic University, Islamabad, Pakistan.

Dr. A.N. Malik International Islamic University, Islamabad, Pakistan.

Dr. Haider Abbas Military College of Signals (MCS), NUST, Rawalpindi,

Pakistan.

Dr. I. M. Qureshi Air University, Islamabad, Pakistan.

Dr. T.A. Cheema ISRA University, Islamabad, Pakistan.

Mr. M. Anwaar Saeed Virtual University of Pakistan, Pakistan.

Mr. Adnan Aziz ISRA University, Islamabad, Pakistan.

Mr. Natash Ali Mian Beaconhouse National University, Lahore, Pakistan.

Mr. Amir Haider COMSATS Institute of Information Technology,

Abbottabad, Pakistan.

Mr. M.Umair University of Central Punjab, Lahore, Pakistan.

Mr. Muhammad Ahmad COMSATS Institute of Information Technology, Sahiwal,

Pakistan.

Mr. Shahid Naseem UCEST, Lahore Leeds University, Lahore, Pakistan.

Mr. Shahid Fareed Bahauddin Zakariya University, Multan, Pakistan.


LGURJCSIT

Volume No. 1, Issue No. 1 (Jan-March 2017) pp. 1-12


Algorithm for Coding Person’s Names in large Databases

/ Data Warehouses to Enhance Processing Speed,

Efficiency and Reduce Storage Requirements

Aftab Ahmad Malik

Abstract: A technique to codify the names of people in databases and data warehouses, in order to reduce storage requirements and enhance processing speed, is presented. It is estimated that the storage requirement, which is normally dominated by the entries for first name, middle name and last name, can be reduced considerably; storage requirements for data warehouses keep on increasing. The scheme of the algorithm stores the first name, middle name and last name as numeric characters instead of alpha characters, and decodes them into the actual alpha characters at the time of retrieval. A step-by-step analysis of the working of the algorithm is presented.

Keywords: alpha characters, coding, decoding, names, numeric characters

—————————— ——————————

1. INTRODUCTION

It is normal practice to assign codes to countries, cities, airports, flights, trains, organizations, departments, roads, houses, streets, students' roll numbers, books (ISBN), research journals, publishers and many other objects in daily life. For example, all states of the US and all cities of the UK have unique codes. Moreover, income tax numbers, passports, customer identity numbers and international telephone dialling codes are extensively used in database management applications.

Prof. Dr. Aftab Ahmad Malik, Department of Computer Science, Lahore Garrison University, Lahore, Pakistan. [email protected]

[1] discusses the named-entity problem and its evaluation, and [3] presents layered space-time codes designed for use with a decoding algorithm based on QR decomposition. [4] and [6] describe the Oracle DECODE keyword and function, which encode and decode the relevant field entries of a database effectively. The DECODE keyword works on "if-then" logic and returns a result based on the value of a column or function. According to [4], [6] and the Oracle Complete Reference series, the syntax is as follows:

SELECT ... DECODE(value, if1, then1 [, if2, then2, ...], else) ... FROM ...;

Example:

SELECT DISTINCT City,
       DECODE(City, 'London', '111', 'New York', '367', 'Chicago', '982', City) AS code
FROM Cities;

The 'City' in the first argument represents the column name.

Output:

City        code
----------- -----
London      111
New York    367
Chicago     982

Example:

SELECT name_id,
       DECODE(idcard_id, 111, 'fname', 347, 'mname', 789, 'lname')
FROM name_code;

Further, [2] exhibits the working of the DECODE function as follows:

DECODE(expression, search, result [, search, result] ... [, default])

The syntax of the Oracle/PLSQL DECODE function is simple; it performs its work like an IF-THEN-ELSE statement and works with Oracle versions 9i, 10g, 11g and 12c.

[5] proposes a decoder for nonbinary LDPC codes, called "extended min-sum (EMS)", which is based on the min-sum decoder for binary LDPC codes.

[8] and [9] describe Pakistan's National Database & Registration Authority (NADRA), which operates one of the largest fully integrated data warehouses in the world. It facilitates the issuance of national identity cards and passports for citizens, apart from the authentication and verification of information regarding facial images and fingerprints. According to [8], it holds the world's largest facial library, with 47 million images. This data warehouse [8] is very useful, having 90 million citizens registered, including 30 million children, and more than 56 million ID cards issued to citizens. According to [9], its processing speed is 18 trillion instructions per second and its storage capacity is 60 terabytes. [10] suggests the formation of code-decode tables, but these have a few disadvantages, such as the inability to enforce referential integrity (foreign keys).

Definition: The number of (machine) instructions which a program executes during its run is called its time complexity, while the number of memory cells required by an algorithm is its space complexity. Better time complexity ensures the faster running of an algorithm; on the other hand, a good algorithm must also have small space complexity. It is a well-established fact that the time complexity and space complexity of an algorithm involve a trade-off, though they are different aspects of determining the efficiency of an algorithm. Certainly, time complexity changes with a change in the size of the input.

2. PROCEDURE

Most databases and data warehouses are dominated by information about first name, middle name, last name, father's name (first name, middle name, last name), address, email, sex, state, city, country, phone number, etc.

A person's name normally requires at least 40 characters; when the name is combined with the father's name, the character length required for both names becomes 80 characters. Of course, some names are longer than 40 characters.

Name: 40 alpha characters
FIRST NAME | MIDDLE NAME | LAST NAME

Father's Name: 40 alpha characters
FIRST NAME | MIDDLE NAME | LAST NAME

We propose to code and decode the name and the father's name into numeric characters as shown below:

Name: 3 + 3 + 3 = 9 numeric characters
FIRST NAME | MIDDLE NAME | LAST NAME
    234    |     678     |    231

Father's Name: 3 + 3 + 3 = 9 numeric characters

2.1 Coding of Names

It is usual practice, where human data is stored in computer memory, to reserve forty characters for one name; in this way, for one individual's name and his/her father's name, we need eighty characters. For storing the information of a large number of persons working in a particular organization, a large amount of computer memory is required. Let us now estimate our requirements in both modes of storing names, i.e., directly in alphabetic characters and under the digital coding system.

Storage required for name and father's name in alpha characters: 80
Storage required for name and father's name in numeric characters: 18 digits (09 per name)

Therefore, with the implementation of the algorithm presented in this paper, a remarkable reduction in storage occurs, particularly when the number of records in the database runs into millions. For the purpose of the coding and decoding strategy, we have divided one name into three parts:

i. First Name

ii. Middle Name

iii. Last Name

2.2 How our Algorithm works

Suppose that each part of the name consists of about 12 to 14 characters. We have collected all possible names from the telephone directory and allocated three-digit codes to them with the program NAMETAB. When the program NAMETAB is executed, the computer is ready to ACCEPT a name from the screen while the message "ENTER NAME:" is displayed. As the names are entered, the computer allocates a three-digit code, starting from 001, to each name. All the names collected from the telephone directory were entered and allocated codes initially.

For example, the name “MUHAMMAD AKRAM BUTT” has three

parts:

First: MUHAMMAD

Second: AKRAM

Third: BUTT

The program builds and maintains two files, the Name Code File (NAME-CDFL) and the Name Decode File (NAME-DCODE). In order to keep track of the last code allocated to a name, the last code is saved in a unit record file (LAST-NUMBR). Since we have allocated three-digit codes to each part of the name, nine digits are required for one name. Since two digits are stored in one byte of memory, far fewer bytes are required for storing these digits, resulting in a net saving of 88.18% of computer memory under our coding-decoding system.

Initially we have the Name Code File and the Name Decode File. Our program NAMETAB initializes the Name Code File (NAME-CDFL) and the Name Decode File (NAME-DCODE). We now explain the algorithm used for coding and decoding names and for the inclusion and exclusion of names in these files. First of all, we initialize the Name Code File and the Name Decode File by executing the program NAMETAB. A name is accepted in working storage and checked for being alphabetic. If it is not alphabetic, the message "NAME IS NOT ALPHABETIC" is displayed on the screen and no code is generated for this name; the computer is then ready to ACCEPT another name. If the name is alphabetic, it is placed in the name fields of the Name Code File and the Name Decode File. Initially the code is set equal to zero. We then add 1 to the code, move this code to the name-code fields of both files, and write the records to the files. If the name already exists in the files, the message "DUPLICATE NAME FOUND" is displayed and 1 is subtracted from the code, since 1 had been added before. In this way a sufficiently large number of names is entered into the Name Code File and the Name Decode File. In the end we enter a blank to come out of the loop of accepting names and allocating codes. The last code generated is moved to the LAST-CODE field of the unit record file LAST-NUMBR and the record is written to the file.
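For illustration, a minimal Python sketch of this initialization logic is given below; the dictionaries and variable names are hypothetical stand-ins for the indexed files NAME-CDFL, NAME-DCODE and LAST-NUMBR, and the behaviour follows the description above rather than the author's actual COBOL source:

    name_code = {}    # stands in for the Name Code File (NAME-CDFL)
    name_decode = {}  # stands in for the Name Decode File (NAME-DCODE)
    last_code = 0     # stands in for the unit record file LAST-NUMBR

    while True:
        name = input("ENTER NAME: ").strip().upper()
        if not name:                      # blank entry terminates the loop
            break
        if not name.isalpha():
            print("NAME IS NOT ALPHABETIC")
            continue
        if name in name_code:
            print("DUPLICATE NAME FOUND")
            continue
        last_code += 1                    # allocate the next code, from 001
        code = f"{last_code:03d}"
        name_code[name] = code
        name_decode[code] = name
    # last_code would now be written back to LAST-NUMBR for later runs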

2.2.1 Coding Algorithm:

The following algorithm is used to allot codes to names:

CODIFY-NAMES.
    MOVE FNAME1 TO NAME OF CODE-REC.
    PERFORM CODE-PARA THRU EXIT-POINT.
    MOVE NAME-CODE TO W-FNAME.
    IF MNAME1 = SPACE
        MOVE ZERO TO W-MNAME
    ELSE ___________________
         ___________________
CODE-PARA.
    IF NAME IS NOT ALPHABETIC
        ________________________
        ________________________
GENRT-CODE.
    ADD 1 TO LAST-CODE
    __________________
EXIT-POINT.
    EXIT.

The first name (FNAME1) is moved to the name field of the CODE-REC

(The record name of the Name Code File), since Name is used to access

this file. First of all, the name is checked for being alphabetic, because the Name Code File contains only alphabetic names. If the name

is not alphabetic, the message “NAME IS NOT ALPHABETIC" is

displayed upon the screen and the algorithm is terminated. If the name is


alphabetic, an attempt is made to read the Name Code File. If reading is

successful, it means that this particular name exists in the Name Code

File and the corresponding Name-code is moved to the specified space

reserved for the code of First Name in the Master Record. If reading is

unsuccessful, it means this particular name whose code is required does

not exist in the Name Code File and the control transfers to GENRT-

CODE para of the program to generate a new code for this particular

name. For this purpose, the Unit Record File LAST-NUMBR is read to

get the last code allotted, 1 is added to the last code field LAST-CODE.

The last code and the particular name are moved to Name-Code field and

Name field respectively of both the records CODE-REC and DECODE-

REC of Name Code File and Name Decode File. The records are written

into the files to update them for later use. Since the names of some persons consist of only one part, for example ABDULLAH, the middle name (MNAME1) is first checked for being blank. If it is a

blank, zero is moved to the specified space for code of middle name. If

it is not a blank, the same procedure of checking alphabetic, reading

Name Code File and moving the Name-Code to the specified space is

carried out.

If a name consists of only two parts, for example MUHAMMAD ABDULLAH, the last name (LNAME1) is first checked for being blank. Again, if it is blank, zero is moved to the specified space for

code of last name. If it is not a blank, the same procedure as for middle

name is carried out. In the end the record L-NO-REC of the Unit Record

File is rewritten to update the file. In this way the coding of name is

complete and all the aspects of the coding algorithm have been

discussed.
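The lookup-or-generate logic described above can be sketched in Python as follows; this is a sketch using the dictionary stand-ins from the previous listing, with a helper name (codify_part) that is ours, not the paper's:

    def codify_part(part, name_code, name_decode, last_code):
        """Return (code, last_code) for one part of a name."""
        if not part:                        # blank middle or last name
            return "000", last_code
        if not part.isalpha():
            print("NAME IS NOT ALPHABETIC")
            return None, last_code
        if part in name_code:               # successful read of NAME-CDFL
            return name_code[part], last_code
        last_code += 1                      # GENRT-CODE: allot a new code
        code = f"{last_code:03d}"
        name_code[part] = code              # update both files for later use
        name_decode[code] = part
        return code, last_code

For "MUHAMMAD AKRAM BUTT", codify_part is applied to each of the three parts in turn, yielding three 3-digit codes; for a one-part name such as ABDULLAH, the blank middle and last parts receive the code 000.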

2.2.2 Decoding Algorithm: In this section we describe the algorithm used for decoding names. The following algorithm is used to decode the names:

DECODE-ROUTINE.
    MOVE W-FNAME TO ANAME-CODE OF DECODE-REC
    MOVE MNAME TO FNAME1
    IF W-MNAME = ZERO
        MOVE SPACES TO MNAME1
    ELSE ________________
         ________________
READ-NAME-DECODE.
    READ NAME-DECODE
        INVALID KEY DISPLAY "SORRY UNABLE TO READ".

The First Name Code (W-FNAME) is moved to the name code field

(ANAME-CODE) of the DECODE-REC (the record name of the Name

Decode File), since the ANAME-CODE is used to access this file. An

attempt is made to read the Name Decode File (NAME-DECODE). If

the reading is not successful, the message “SORRY UNABLE TO

READ" is displayed and the algorithm is terminated. If the reading is

successful, the corresponding name is moved to the specified space for

decoded name. Now, the middle name code (W-MNAME) is checked

whether it is zero or not. If it is zero, blanks are moved to the specified

space for decoded name. If it is not zero, the same procedure of reading

the Name Decode File and moving the name to the specified space is

carried out. The last name code (W-LNAME) is also checked whether it

is zero or not. If it is zero, blanks are moved to the specified space for

decoded name. If it is not zero, same procedure as for middle name code

is carried out. Now the complete decoded name is ready for output purposes.
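A corresponding Python sketch of the decoding routine, again using the hypothetical dictionary stand-in for NAME-DCODE:

    def decode_name(fcode, mcode, lcode, name_decode):
        """Rebuild 'FIRST MIDDLE LAST' from three 3-digit codes."""
        parts = []
        for code in (fcode, mcode, lcode):
            if code == "000":               # zero code means a blank part
                continue
            name = name_decode.get(code)
            if name is None:                # INVALID KEY on NAME-DCODE
                print("SORRY UNABLE TO READ")
                return None
            parts.append(name)
        return " ".join(parts)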

Exclusion of Names from Name Code File & Name Decode File:

In this section we describe the method for excluding names from the Name Code File (NAME-CDFL) and the Name Decode File (NAME-DCODE). If a name is not alphabetic, it is detected by the algorithm CODIFY-NAMES discussed in the previous section, no code is allotted to that name, and the algorithm terminates. On the other hand, if wrong spellings of a name are entered unintentionally, the name is still alphabetic, so a code is allotted to it and a useless record is maintained in both files; moreover, an unnecessary name code is generated. To get rid of such records, we have a separate program


DELETE-NAME, which is executed as and when needed. We delete the name and its code from both files in order to save storage and to avoid occupying storage with meaningless names. When the execution of the program DELETE-NAME starts, the screen is erased and the following message is displayed:

SCREEN LAYOUT:

"ENTER NAME ************

WHICH IS TO BE DELETED FROM

NAME CODE FILE . . . . . . . . .... NAME-CDFL"

The computer now is ready to ACCEPT the name and the cursor is on

the first asterisk displayed on the first line. The name is accepted in the

Name field (NAME) of the Name Code File and an attempt is made to

delete the record from the file. If the record with this particular name

exists in the file, it is deleted and the message "RECORD DELETED

PRESS NEW LINE" is displayed upon the screen and on pressing the

key labelled "NEW LINE", the same procedure of accepting and deleting

name is carried out. On the other hand, if the record does not exist for the entered name, the message "RECORD NOT FOUND" is displayed, and after pressing the "NEW LINE" key the control is again at the start of the loop. Each time a name is accepted, it is compared with the letter "A", because we have used this letter for terminating the loop; the moment it matches, the loop is terminated and control enters another loop, which is used to delete a name and its code from the Name Decode File. For this file, as discussed above, the field by which it is accessed is ANAME-CODE. At the start of the loop the following

message is displayed upon the screen after erasing it:

SCREEN LAYOUT

“ENTER ANAME-CODE ***

WHICH IS TO BE DELETED FROM

NAME DECODE FILE . . . . . NAME-DCODE"

The computer now is ready to ACCEPT the code and the cursor is on

the first asterisk displayed on the first line. The code is accepted in the

Name Code field (ANAME-CODE) of the Name Decode File and an


attempt is made to delete the record from the file. If the attempt is successful, the message "RECORD DELETED, PRESS NEW LINE" is displayed, and on pressing the "NEW LINE" key the same procedure of accepting and deleting records is carried out. On the other hand, if the deletion attempt is unsuccessful, the message "RECORD NOT FOUND, PRESS NEW LINE" is displayed, and after pressing the "NEW LINE" key the control goes to the start of the loop. Each time a name code is accepted, it is compared with the value of the figurative constant ZERO, because we have used zero to terminate the loop; the moment it matches, the control comes out of the loop. The unit record file LAST-NUMBR is updated by entering the last code from the terminal for later use.

Now the computer asks whether the complete list of names and their codes is required. If we enter "N", the program terminates without producing the list. On the other hand, if we enter "Y", the Name Code File (NAME-CDFL) is first made ready for sequential reading with the START statement, since it is an indexed file, and a complete list of names and their codes is prepared by repeatedly reading the Name Code File sequentially.
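The deletion step can be sketched the same way; DELETE-NAME removes the record from both of the stand-in tables used in the earlier sketches:

    def delete_name(name, name_code, name_decode):
        """Remove a misspelt name and its code from both tables."""
        code = name_code.pop(name, None)
        if code is None:
            print("RECORD NOT FOUND")
            return
        name_decode.pop(code, None)         # keep both files consistent
        print("RECORD DELETED")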

3. CONCLUSION

The purpose of this research is to reduce the storage requirement of large data warehouses like NADRA's. If the scheme is implemented, the speed of online transactions will be enhanced, with a reduction of roughly 80% in the storage requirement.

4. ACKNOWLEDGMENT

The author wishes to thank the Vice Chancellor, the Dean, and Engineer Mujtaba Asad of the Department of Computer Science, Lahore Garrison University, Lahore.


REFERENCES

[1] Daniel M. Bikel, Richard Schwartz and Ralph M. Weischedel, "An Algorithm that Learns What's in a Name", Machine Learning 34, 211-231 (1999).

[2] Oracle DECODE syntax: https://www.docs.oracle.com/cd/B19306_01/server.102/b14200/functions040.htm

[3] Wübben, D., Böhnke, R., Rinas, J., Kühn, V. and Kammeyer, K. D., "Efficient algorithm for decoding layered space-time codes", Electronics Letters 37(22), 25 Oct 2001, pp. 1-2.

[4] Oracle DECODE function: http://www.oradev.com/decode.html

[5] David Declercq (ETIS ENSEA), "Decoding Algorithms for Nonbinary LDPC Codes Over GF(q)", IEEE Transactions on Communications 55(4). http://www.ieeexplore.ieee.org/document/4155118/

[6] Database SQL Reference, Oracle Database Online Documentation, 10g Release 2 (10.2). https://www.docs.oracle.com/cd/B19306_01/server.102/b14200/functions040.htm

[7] "Mene Mene, Tekel: SQL and its Sequels, Musings on MySQL", http://www.ocelot.ca/blog/blog/2013/09/16/representing-sex-in-databases/

[8] NADRA: National Database and Registration Authority, Pakistan. https://www.nadra.gov.pk/

[9] Asim Sardar, "Case Study: Information System Reforms for Improving Governance", https://www.siteresources.worldbank.org/PSGLP/Resources/InformationSystemsReformsforImprovingGovernanace

[10] Vincent Rainardi, "Code Decode Table", Data Warehouse and Business Intelligence blog, 23 April 2015. https://dwbi1.wordpress.com/2015/04/23/code-decode-table/


LGURJCSIT

Volume No. 1, Issue No. 1 (Jan-March 2017) pp. 13-21


Analyzing and Resolving Issues in Software Project Risk

Management

Taimoor Hassan
Shoaib Hassan

Abstract: In the last decade, the main reason for project failure has been poor software management, but nowadays most organizations focus on software project management to make their projects successful. Software project management provides overall management of software from the project planning phase to project execution. In software project management we also deal with risks that may occur during the development of projects. In this paper we analyze risks arising in the management of software and resolve issues that come up in software project risk management. We introduce approaches by which all the issues regarding software risk management can be resolved. Risk management also tells us how to avoid risks and, if risks occur, how to control them. By analyzing software risk management, we come to know which factors affect risk management and how we can remove them. Software risk management manages all risks efficiently and makes our projects successful.

Keywords: Risk Management, Risk Identification, Risk Analysis, Risk Mitigation

—————————— ——————————

1. INTRODUCTION

Risk management is one of the important knowledge areas of software project management, but most organizations do not give importance to this knowledge area because it has the lowest maturity rating compared with the other knowledge areas, even though it has a high impact on making a project successful [4]. Risk management provides all the strategies for risk handling, including risk avoidance, risk prevention, risk identification, risk analysis and risk mitigation. Most projects carry high risks, and risks may occur at any time in the software development process. If we want to enhance the functionality of our software, then we must deal with risks. There are many factors that make a project successful, and risk management is one of the greatest factors responsible for the success of any project. Many errors that occur in software development stem from risks. Risks occur for many reasons, including incomplete user requirements, human error, natural disaster, poor project objectives and lack of resources. One of the main sources of risk is incomplete user requirements: if requirements are not complete and consistent, the project will fail. Risk management is a complete process for handling risks and making a project successful, from identifying risks to monitoring and controlling them. In risk management we also manage positive and negative risks. Positive risks are those that have a good effect on the project, and negative risks are those that have a negative effect; positive risks help make the project successful and negative risks cause project failure, so our main focus is to increase positive risks and reduce negative risks. Risk management is a process that connects directly with customer satisfaction; this satisfaction is called risk utility. It is high for people who want high satisfaction or who are risk seekers, while in the risk-neutral approach we balance risks against the potential payoff.

The paper is organized as follows. The next section describes the methodology. Section 3 presents the results. Section 4 summarizes our conclusion, Section 5 gives the acknowledgement, and the last section lists the references.

Taimoor Hassan, Department of Computer Science, Lahore Garrison University, Lahore. [email protected]
Shoaib Hassan, COMSATS Institute of Information Technology, Sahiwal, Pakistan. [email protected]

2. METHODOLOGY

First, in this paper we discuss different aspects of software risk management. Risk management is the process of removing the uncertainties that may arise in the development of a project. In risk management we first identify the risks that may occur, then analyze those risks, and after analyzing them we manage them. We analyze the problems that occur in software risk management and also give workable solutions to resolve those problems. Risks are the main factors that arise frequently in software project management. Different kinds of risks affect the project management process, such as organizational risks, external risks and internal risks [7].

External risks are risks that occur due to government policies or due to unwanted conditions in the country; for example, a company may stop a project due to high political pressure. To resolve this kind of risk, the organization should have the project charter endorsed by high government authorities so that it can get the maximum benefit from the project, because once the project is endorsed, the project team can utilize all its resources without any fear of failure. The project should also be morally and socially sound and beneficial for society; this is another factor to keep in mind when dealing with such risks. If we ignore these factors, there is a high chance that the project will fail. Internal risks are risks that arise from human error, lack of resources, natural tragedy, incomplete user requirements or project complexity; such risks cause project delay. Human-error risk arises when we assign tasks to a developer who does not know what should be done in the project; with less expertise in software development, human error will creep in. Such a risk damages the reputation of the organization, which may then lose future projects. To resolve this risk, the organization should hire people who are domain-specific and have good software project management skills. Good developers cost more money, but it is better to invest at the start of a project than to lose the project. Lack of resources is also a big risk in most software projects. Many organizations start their projects without knowing whether they have enough resources to complete them, and this is a main cause of a bad organizational reputation. In some cases an organization has enough resources, but there is a high possibility that the main software developer leaves the organization during project development. Another problem is bad estimation of cost: in software project management we assign a cost to a project, and it is possible that the project overruns that cost, in which case the project will also fail. To resolve the first issue, we should attach a second person to the main software developer so that if the main developer leaves the company the project does not stop. Secondly, we should estimate the cost of the project accurately; cost estimation should be based on requirements [1].

We decompose our project into many milestones and assign a cost to each milestone; this helps us complete the tasks within budget. These techniques resolve the risks related to lack of resources. Another big risk is incomplete user requirements. When a requirements engineer gathers requirements, there is a high possibility that the developer will be confused about them, because requirements written in natural language carry a high chance of ambiguity. Requirements should therefore be clear and complete. When we do requirements engineering for a project, it is necessary to gather requirements that are complete and consistent; consistent means that requirements should not conflict with one another. To resolve this issue, the organization should arrange workshops for collecting requirements from users. Workshops create a friendly atmosphere in which the user discusses all his requirements with the organization, and if there is any ambiguity in a requirement the developer can ask the user about it directly [3]. A second way to resolve this issue is to write the requirements in a proper standard format such as a software requirements specification. Project complexity is a further internal risk, and its result can be the failure of the software project; to resolve it we decompose the project into small units and assign each unit to a specific team, which reduces the complexity of the project. Organizational risks are risks that may occur due to organizational policies and rules, so we should make the project fulfil the organization's standards. As risk management is a complete process of managing risks, to deal with all the risks that may arise in software project management we build a complete model that will handle all the issues regarding software risk management. It will


also manage all the risks efficiently. When we develop software, we try to avoid all the risks that may arise in the software management process, but if risks do occur we use a strategy for dealing with them. Firstly, we identify all the risks that occur during the development of the software and write them in tabular form with their descriptions. After identification we analyze all the risks. In risk analysis we determine which risks may occur most often and which least often: we find the probability of each risk in the project (how likely the risk is to occur) and the impact of each risk on the project. After analyzing the impact, we multiply the impact of each risk by its probability to obtain the risk factor, so this whole process gives us complete information about each risk. Based on the risk analysis we determine which risk has the highest probability and which the lowest, and keeping these probabilities in mind we assign priorities to all risks. Risks with high priority are handled first and risks with low priority are handled last [2]. The risk analysis phase is represented in Table 1 (Risk Description). After analyzing all the risks, we move to the third step of risk management, which is risk mitigation. Risk mitigation is an approach by which we can reduce the scope of a risk: first we try our best to avoid the risk, but if it occurs we perform risk mitigation to handle it. In risk mitigation we take actions to decrease the probability of a risk and its impact on the project. Risk mitigation has three approaches for managing risks:

Avoiding Risks

Monitoring Risks

Contingency Planning/Possible planning

In the risk-avoidance approach, we make a complete project risk plan in which we define the probability of each risk and its impact on the project [5]. The main advantage of the risk plan is that we can avoid risks before they occur. In the risk-monitoring approach, we monitor all the risks mentioned in the risk plan, checking the probability of each risk and its impact on the project. Monitoring helps us manage the risks efficiently; we also check which risks have high probability and which have low probability, so that we can prioritize the risks and deal with them according to their priorities.

In the contingency (possible) planning approach, we also find the probability of each risk and take actions to resolve risk issues as soon as possible. In the possible planning phase we again make a whole plan that includes all the details regarding the risks, but this plan is different from the risk plan made in the risk-avoidance phase: in the avoidance phase we plan for risks that may occur in the project, whereas in possible planning we plan for the risks that actually occur. This whole risk management process thus gives us a complete path for handling risks in a better way; it is represented in Figure 1.

Figure 1: Software Risk Management

The overall risk description, risk probability, risk impact and overall risk factor are given in Table 1 below:

Risk Description                                                   Probability   Impact   Factor
External risks (political pressure, safety precautions)               0.8          10      8
Product risks (incomplete user requirements, lack of
understanding of project objectives)                                   0.4           8      3.2
Process risks (natural disaster, human error)                          0.6          10      6
Organizational risks (organizational standards, rules and domain)     0.2           6      1.2

Table 1. Risk Description
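A small Python sketch of the computation behind Table 1 (risk factor = probability multiplied by impact, with risks then handled in decreasing order of factor; the figures are those of Table 1):

    risks = [
        ("External risks",       0.8, 10),
        ("Product risks",        0.4,  8),
        ("Process risks",        0.6, 10),
        ("Organizational risks", 0.2,  6),
    ]

    # factor = probability x impact; a higher factor means higher priority
    scored = [(name, p, i, p * i) for (name, p, i) in risks]
    for name, p, i, factor in sorted(scored, key=lambda r: -r[3]):
        print(f"{name:22s} P={p:.1f} I={i:2d} factor={factor:.1f}")
    # External risks come first (8.0), organizational risks last (1.2)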


3. RESULT

The risk management process gives us a complete path for managing our risks in an efficient way. In the risk identification phase we obtain a complete description of our risks. Risk analysis gives us the probability of each risk and its influence on the project; with the help of risk probability and risk impact we find the risk factor of each risk, which also tells us which risk has the highest priority and which the lowest. With the risk analysis results we therefore manage our risks on a priority basis. The last approach, risk mitigation, reduces the scope of the risks; we use this technique when risks actually occur. Monitoring gives us an overall description of risk probabilities and different ways to resolve risks. We use contingency planning only when our original plan fails; contingency planning, also called possible planning, gives us an alternative plan for dealing with risks. Possible planning also has a complete plan for managing risks, including risk description, risk probability, risk impact and risk factor. So this whole



software risk management process gives us a complete framework for

managing risks in an efficient way.

4. CONCLUSION

Software risk management is a framework for dealing with the different kinds of risks that occur in any project. It is a complete pathway for resolving all the issues regarding risks that arise in a software project. The success of any project depends mostly on its management: if the software is well managed, then there is a high possibility that the project will succeed [8]. In software project management, the factor with the greatest impact on the management process is risk, so most organizations focus on software risk management.

Software risk management gives us a complete strategy for avoiding risks, preventing their effects and managing them. It also gives us a better understanding of risk occurrences. It makes the project easier and less time-consuming, reduces the complexity of the project, and allows decisions to be taken easily on the basis of risks. It improves the overall software management process, saves the customer a lot of money, makes the environment of an organization risk-friendly, and maintains the good repute of an organization.

5. ACKNOWLEDGEMENT

I would like to thank Allah, Who gave me the strength, knowledge and ability to complete this research paper, and also to thank my younger brother and my wife, who motivated me with their precious help and support.


REFERENCES

[1] M. Abbas, M. Z. Fakhar and W. Madiha, "Risk Management System for ERP Software Project", Science and Information Conference, IEEE, London, UK, 2013.

[2] J. Münch, "Risk Management in Global Software Development Projects: Challenges, Solutions, and Experience", International Conference on Global Software Engineering Workshop, IEEE, 2011.

[3] P. S. Silva, A. Trigo and J. Varajão, "Collaborative Risk Management in Software Projects", Eighth International Conference on the Quality of Information and Communications Technology, IEEE, Lisbon, 2012.

[4] Y. Tao, "A Study of Software Development Project Risk", International Seminar on Future Information Technology and Management Engineering, IEEE, Leicestershire, UK, 2008.

[5] P. Tianyin, "Development of Software Project Risk Management Model Review", 2nd International Conference on Artificial Intelligence, IEEE, Deng Leng, 2011.

[6] L. Tianzong and W. Qiang, "A Model of Software Project Risk Management Based on Bayesian Networks", International Conference on E-Business and E-Government, Beijing, China, 2010.

[7] L. Wallace, M. Keil and A. Rai, "Understanding Software Project Risk: A Cluster Analysis", Information & Management, Elsevier, 2004.

[8] L. Wallace, "How Software Project Risk Affects Project Performance: An Investigation of the Dimensions of Risk and an Exploratory Model", Decision Sciences Institute, IEEE, 2012.


LGURJCSIT

Volume No. 1, Issue No. 1 (Jan-March 2017) pp. 22-36


Algorithm and Technique for Animation

Syeda Binish Zahra

Abstract: Fluid simulation, particularly of watercourses such as rivers, is an important element in achieving realistic simulations in real-time applications like video games. This work presents a new approach, called SiViFlow, that simulates watercourses in real time. The algorithm is flexible enough to be used in any type of environment and allows a river to be generated dynamically given any riverbed. The component that manages the flow is responsible for the water animation and allows the use of various techniques to simulate visual features. As all the information is generated dynamically, SiViFlow also reacts to dynamic objects that come into contact with the river, properly adjusting the course of the flow. This work helps accelerate and improve the methods of creating realistic rivers so that they can be used in video games.

—————————— ——————————

1. INTRODUCTION

SiViFlow is composed of two main elements: the Simulation Engine and the Visualization Engine. The Simulation Engine is where all the calculations related to the physics of the river take place. This engine is divided into three main modules: the River Surface Generator, the River Particle Generator and the Flow Texture Mapper. From the programming point of view, the River Particle Generator and the Flow Texture Mapper make up a larger block called the River Particle Processor, which will be described later in detail. The Visualization Engine is responsible for receiving the simulation data from the Simulation Engine and outputting a graphical representation. This engine is divided into two main modules: the Flow Renderer and the Reflection module. SiViFlow, with all of its elements, is depicted in Figure 1.

Syeda Binish Zahra
Department of Computer Science, Lahore Garrison University, Lahore. [email protected]


In the River Surface Generator, we start by generating the river surface mesh that will be used to apply the material and on which the water animation algorithm will be rendered. At this stage we have to calculate several features that will be needed in later stages, such as the river width, which vertices define the shore and the flow at each vertex, amongst others. The next stages are the River Particle Generator and the filling of the flow and auxiliary textures in the Flow Texture Mapper. These two stages make up the application loop that runs on the CPU. In this loop we generate randomly distributed points that cover as much as possible of our domain in screen space, and from those points we create a concept called river particles. The textures are filled with river particles so we can send their features to the GPU, which in turn allows us to update these river particles every frame. At the end of SiViFlow we have the rendering of the material, which uses the textures that were sent from the CPU. At this stage we use the Visualization Engine to render all visual and physical effects, such as flow and reflections.

Figure 1: CPU versus GPU


2. RIVER SURFACE GENERATOR

The first stage of all is the River Surface Generator. At this stage a river

surface mesh needs to be created, which can either be done using an

external modeling application or generated in real-time. Both options are

viable and don't interfere with the next phase as long as we have access

to the river mesh vertices. In both cases all we require is a mesh which

will describe the river surface. The meshes we used assumed that the first

vertex was one of the corner vertices of the river surface mesh.

At the beginning we don't know how many vertices go from one shore

to the other in one single section of the river, so we start by calculating

the river width and flag which vertices can be considered shore vertices.

A river section is a set of vertices that are placed between two shore

vertices and form a line that is perpendicular with both river shores. In

order to find out which vertices are shore vertices, we start by identifying

the first vertex from the river mesh and calculate differences in distance

between this vertex and all the other vertices that follow. When we reach

the end of the river section we're processing, the difference stops

increasing and it means we've reached the vertex which is on the same

shore as our first vertex (the shore vertex right next to the one we're

processing). This means the last vertex we processed belongs to the

opposite shore. This idea is very similar to the one we used in Algorithm

1 to calculate the river width and works in a similar way.

We didn't consider different widths across the river sections as they don't

affect any of the other modules of the algorithm and it only means that

if one wants to introduce them, all that's required is to create a more

sophisticated way to calculate the river width for each section in order to

find out the amount of vertices of the mesh that go from one shore to the

other. This way, later stages of the algorithm will know the correct width of the river for each section.

Algorithm 1 sums up all the steps taken during this pre-processing phase.

The only input information required is the river mesh vertices. The

algorithm starts looping from the first vertex which we know it's a shore

vertex as it's located in a corner of the river mesh. We compare the width

between this first vertex and the following vertices, making sure to

always store a new width if the value is larger than what was previously


stored. When the section of the river ends and we're processing the shore

vertex which is on the same shore and right next to the first one, the

distance between both vertices will be smaller than the full width of the

river. We store the current width value and the amount of vertices that

go from one shore to the other. As we've mentioned before, if different river widths were a requirement, all that would be needed would be a more sophisticated algorithm able to detect when a certain river section has ended and to use that information to store the width of each river section. At this stage we know the river width at each section, having looped through all the river sections that compose the river surface. We also know the number of vertices that go from one shore to the other, allowing us to flag the vertices that belong to the river shore.

These vertices need to be handled differently because they'll be used for

calculating the flow. Now for each vertex in the river mesh, we store its

distances to each of the river shore vertices at their river section. This

information will later be used to calculate the flow velocity. Lastly we

calculate the river flow at each river section, storing the information in

every vertex. Both the flow velocity and flow generation will be

described in more detail in the following sections.

2.1 Flow Generation

At this stage all shore vertices are identified and we need to generate the

flow vectors that later will be passed to the river material. In order to

calculate the flow, we pick two shore vertices in the same river section,

and then we calculate their midpoint and translate in the positive up axis,

as shown in Figure 3 where the up vector used is aligned with the y axis.

With these three points we can create a vector that is perpendicular with

the river section being processed. As the flow is constant for each river

section and is parallel to the margins, the normal vector of the plane

describes correctly the flow direction of that section as shown in Figure

3. As the plane generated has two possible normal vectors, the normal

generation procedure must take into account this direction and return the

correct normal vector. In the end we have a flow field that is as detailed

as the mesh of the river surface and where each vertex contains its own

flow vector stored, as shown in Figure 4. One advantage of generating the flow this way is its ability to dynamically recalculate


the flow when an object interacts with the river. In case a dynamic object

alters the course of the flow, the boundaries of the object will be used to

recalculate the new flow and will substitute the shore vertices that were

previously used. As the values are tied to the river mesh, as long as we

know the collision vertices, SiViFlow is able to recompute the flow of

the river and immediately reflect the changes.
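As a concrete illustration, the sketch below (in Python with NumPy; the function name, the downstream hint and the example coordinates are ours, not part of SiViFlow) computes one section's flow vector from its two shore vertices exactly as described: midpoint, lift along the up axis, plane normal, orientation check.

import numpy as np

def section_flow(l_shore, r_shore, downstream, up=(0.0, 1.0, 0.0)):
    # Midpoint of the two shore vertices, translated along the up axis.
    l, r = np.asarray(l_shore, float), np.asarray(r_shore, float)
    m = (l + r) / 2.0 + np.asarray(up, float)
    # Normal of the plane through l, r and m: horizontal and
    # perpendicular to the river section, i.e. the flow direction.
    n = np.cross(r - l, m - l)
    n /= np.linalg.norm(n)
    # The plane has two possible normals; keep the one facing downstream.
    return n if np.dot(n, downstream) >= 0 else -n

# Example: a section crossing the x axis, flow expected along +z.
print(section_flow([0, 0, 0], [10, 0, 0], downstream=[0, 0, 1]))  # [0. 0. 1.]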

Algorithm 1: River surface generation algorithm
Input: Set of control vertices that define the river surface mesh

RiverSurfaceGeneration(ControlVerticesSet)
    vertices ← ControlVerticesSet
    forall the vertices do
        if vertex is a shore vertex then
            flag it
    RiverDistance(vertices)       // for each vertex, store the river width
    DistanceToMargins(vertices)   // for each vertex, store the distance to each margin
    CalculateFlow(vertices)       // for each vertex, calculate and store the flow

RiverDistance(Vertices)
    iterator ← 0                          // loop counter
    maxVertices ← Vertices.length         // length of the Vertices vector
    dist ← 0                              // holds the maximum distance obtained
    while iterator < maxVertices do
        distCmp ← distance(Vertices[0], Vertices[iterator])
        if distCmp > dist then            // keep the largest distance found so far
            dist ← distCmp
        iterator ← iterator + 1

DistanceToMargins(Vertices)
    iterator ← 0
    maxVertices ← Vertices.length
    while iterator < maxVertices do
        if Vertices[iterator] is not a shore vertex then
            calculate and store distance to left shore
            calculate and store distance to right shore
        iterator ← iterator + 1

CalculateFlow(ShoreVertices)
    forall the pairs of shore vertices do
        midpoint ← CalculateMidpoint(lShoreVertex, rShoreVertex)
        flowVector ← CalculatePlaneNormal(lShoreVertex, rShoreVertex, midpoint)
        forall the vertices in this river section do
            store flowVector

2.2 Flow Velocity

2.2.1. Stream Function

In order to calculate the interpolated value of the stream function (ψ), we use an interpolation scheme suggested by [2][3]. At this stage we have all the information required to calculate the following equations. For each vertex we evaluate Equations 2.1, 2.2 and 2.3 and store their values.

With P being the position of each river surface vertex, d_i the distance from point P to each of the boundaries, and ω the weighting factor:

Where s is the radius used to search for boundaries, p is a positive real

number and f is defined as:
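A plausible form of Equations 2.1 to 2.3, assuming a standard Shepard-type inverse-distance weighting with compact support (an assumption on our part; the exact expressions are those of [2][3][4]), is:

\psi(P) = \frac{\sum_i \omega(d_i)\,\psi_i}{\sum_i \omega(d_i)}    (2.1)

\omega(d) = \frac{f(d/s)}{(d/s)^{p}}    (2.2)

f(t) = \begin{cases} 1 - 3t^{2} + 2t^{3}, & 0 \le t < 1 \\ 0, & t \ge 1 \end{cases}    (2.3)

where \psi_i is the stream-function value at boundary i.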

As we didn't change the interpolated stream function method created by

[4], we must guarantee that at least two boundaries are inside every



vertex search radius. This guarantee is very important as the initial

premise that the flow rate between any two points in a flow field is equal

to the numerical difference in boundary values of that channel would be false if only one boundary were found, invalidating this scheme and returning undefined values.

2.2.2. River Particle Processor

In this section we'll introduce the concept of river particles. These river

particles are used as a way to sample information from our domain and

retrieve its values. As we want to be able to handle large watercourses,

it's not feasible to rely on loading all the river surface information to

VRAM every frame. In our case we're interested in getting only the

visible river mesh values so we can retrieve and send them to be rendered

on the GPU. One of the main features of the river particles is that they're

created in screen space in order to guarantee a uniform distribution of

the particles over the visible domain at each frame. The reason for

generating these points in screen space is that as each particle contains a

defined radius to make sure no two particles are too close to each other,

analyzing this problem in screen space guarantees that these disks maintain a uniform radius, something that would not happen if they were

projected in world space. Another advantage of this scheme is that we

only process visible information as we eliminate all non-visible particles

which minimizes the waste of resources. There are some similar

approaches to ours such as texture sprites and wave sprites which present

an analogous solution adapted to the context of those works. The

following sections present how we generate river particles, how we store

their information and how we prepare these particles to be efficiently sent to the GPU.

2.2.3. River Particle Generator

We start by generating several randomly distributed points, forming a Poisson-disk pattern using a modified boundary sampling algorithm. After running this algorithm, we end up with a set of points that we'll convert to river particles. In order to generate a 3D world position for each of these points (after generation we only have their 2D coordinates), we proceed as follows: a ray is cast for each particle and we store the

collision point between the ray and the 3D world. Using this method, we


can compute at each frame, for each point, its 3D world position. Besides

calculating the world position, we also calculate other features such as

global identifiers, velocity and flow.
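A minimal sketch of this stage, assuming a simple dart-throwing sampler in place of the modified boundary sampling algorithm and a stubbed engine ray cast (both stand-ins for illustration only):

import random, math

def poisson_points(width, height, radius, tries=3000):
    # Dart throwing: keep screen-space points no closer than `radius`.
    pts = []
    for _ in range(tries):
        p = (random.uniform(0, width), random.uniform(0, height))
        if all(math.dist(p, q) >= radius for q in pts):
            pts.append(p)
    return pts

def raycast_to_world(p):
    # Stub for the engine's ray cast: assumed to return the hit point of
    # the camera ray through screen point p; here, a flat world at y = 0.
    return (p[0], 0.0, p[1])

particles = [{"id": i, "screen": p, "world": raycast_to_world(p)}
             for i, p in enumerate(poisson_points(1280, 720, radius=24))]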

Unlike other algorithms, we don't advect our particles during our CPU update loop. The reason is that our particles aren't concerned with the fluid's motion; they're simply a way to sample the necessary information in screen space and send it from the CPU to the GPU. An inherent advantage of not having to advect particles during the update loop is that it allows us to offload work from the CPU to the GPU.

All of this information will allow us to find out, in the next stage, the nearest flow data to load into the flow texture. We simply search inside a radius r for the closest vertex and assign that flow information to the river particle. This step differs from previous approaches, which first render the river surface to a buffer inside the GPU, find out which particles are inside the river surface and then query each individual pixel to find out which particle sits inside. Our approach, despite being a bit more computationally intensive, doesn't have the inherent problems that might arise from relying on constant transfers between the CPU and GPU.

2.3 Flow Texture Mapper

In order to feed the GPU with the information required to render the flow,

we used a flow texture and an auxiliary texture. Similar ideas have been

explored by other authors to achieve similar objectives. We store all the

information we need inside each color channel and read it back when it

reaches the GPU. This approach of using an auxiliary texture to carry

data into the GPU allows us to update the contents of these two textures every frame, refreshing the particles and their respective values. One of


the disadvantages is that their flow texture size must be as close as possible to the application resolution being used, and with that the radius of the Poisson-disk needs to be larger too. The increment in both these elements prevents their approach from being executed at the high screen resolutions that are so common today. In our case, our flow texture and Poisson-disk may be much smaller than the screen size, as we don't need a description of the domain at full screen resolution. The reason behind this is that both our river flow and speed don't change dramatically from one vertex to the next, meaning that if we want we can keep a much smaller copy of the flow texture when compared with the screen size.

Figure 2: Flow vectors

Figure 3: Texture at screen

These textures will store the river particles previously generated using

each of the color channels of the texture. In the flow texture we'll store

for every entry data such as the global identifier of the river particle and


its respective flow. The identifier in this texture will be used as a way to

look-up the remaining data from the auxiliary texture. For each entry of

the flow texture, we store the flow information that covers that pixel. The

auxiliary texture will have other parameters such as velocity, river bed

slope and river depth. In Figure 3 we can see how each river particle is

stored in a smaller sized version of the flow texture and how the global

identifier for each particle will be used to address the auxiliary texture.
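A sketch of how the two textures might be packed on the CPU side (the channel layout, texture sizes and parameter names are our assumptions, not the paper's exact format):

import numpy as np

TEX = 64                                        # small flow texture
flow_tex = np.zeros((TEX, TEX, 4), np.float32)  # r,g = flow xy; b = particle id
aux_tex = np.zeros((1024, 4), np.float32)       # id-indexed extra parameters

def write_particle(px, py, pid, flow_xy, velocity, slope, depth):
    # Flow texture entry: the flow covering this texel plus the global id.
    flow_tex[py, px] = (flow_xy[0], flow_xy[1], float(pid), 1.0)
    # Auxiliary texture, addressed by the id: velocity, bed slope, depth.
    aux_tex[pid] = (velocity, slope, depth, 0.0)

write_particle(3, 5, pid=42, flow_xy=(0.0, 1.0), velocity=1.5, slope=0.02, depth=2.0)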

In Algorithm 2 we can see that the whole update process is performed at every frame. First we delete the particles that are not visible, as they waste resources and won't affect the final result. Then we delete the particles that are too close to one another, violating the initial Poisson-disk requirement that all particles must be no closer to each other than a specified radius. In order to keep a reasonable number of particles on screen, after deleting all the

unnecessary particles we generate new ones using the previously

mentioned algorithm. After this, for all new particles, we have to convert

them to river particles by calculating all their features. To end the

algorithm, we fill the flow and auxiliary textures with the current data

from that frame and get them ready to be sent to the GPU.
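A runnable sketch of this per-frame update (not the paper's Algorithm 2 verbatim: the screen size is illustrative and a bounded dart-throwing refill stands in for the Poisson-disk generator):

import random, math

def update_particles(particles, radius, target, w=1280, h=720):
    # 1. Cull particles that left the screen; they waste resources.
    on_screen = [p for p in particles if 0 <= p[0] < w and 0 <= p[1] < h]
    # 2. Enforce the Poisson-disk radius between surviving particles.
    kept = []
    for p in on_screen:
        if all(math.dist(p, q) >= radius for q in kept):
            kept.append(p)
    # 3. Refill with new samples until a reasonable count is reached.
    attempts = 0
    while len(kept) < target and attempts < 5000:
        c = (random.uniform(0, w), random.uniform(0, h))
        if all(math.dist(c, q) >= radius for q in kept):
            kept.append(c)
        attempts += 1
    return kept  # 4. these particles are then written to the two textures

print(len(update_particles([(0, 0), (2000, 10)], radius=24, target=50)))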


3. VISUALIZATION ENGINE

The Visualization Engine is the last stage of SiViFlow and consists of

mapping a material to the river surface mesh. This stage is divided into two main elements: the Flow Renderer and the Reflection algorithm. We start by accessing the flow texture and consulting the river particle identifier of the current pixel. In order to optimize the texture look-up, the flow information is also saved during this operation. Now we can use the river

particle identifier to look-up the rest of the parameters contained inside

the auxiliary texture.

We use the flow information to generate a new normal vector using the

Tiled Directional Flow algorithm and use this new normal to compute

the scene's reflection. In the end all the elements are blended together.

All the steps of the algorithm are summed up in Algorithm 3.

4. FLOW RENDERER

The flow algorithm used is based on the approach called "Tiled Directional Flow". Our approach uses a similar concept to render the

flow. One of the main differences is that all the flow information being

fed to the algorithm isn't based on a fixed flow map but comes from our

flow and auxiliary textures. This allows us to work with a much smaller

amount of information at each render cycle because our flow texture only

contains information that's visible during that frame. The fact that our

flow texture is updated every frame means that we can change the flow whenever a dynamic object alters the river flow.


4.1 Tiling of the water

Tiled Directional Flow works by dividing a river channel into tiles, similar to a chess board. We show this division by painting some tiles black, to make it easier to visualize what happens. Each tile is independent from its peers and is composed of several normal maps. This tiling allows the algorithm to have several normal maps combined per region which, when seen as a whole, don't resemble the usual texture scrolling seen in most video game implementations. This visual advantage, combined with an adaptive flow system such as ours, allows the river to behave in a realistic way and react to any interactions.

4.2 Normal Maps Composition

Normal mapping is a technique which modifies the per-pixel shading

routine of a mesh in order to fake the lighting of bumps and dents [6]

[10]. Usually a normal map is created from a highly detailed mesh and

used to fake details in a simplified mesh with much less polygons. In

order to get a more convincing look, we used for each tile four normal

maps that are combined and blended together. First the regular normal

map is loaded for the tile being processed. After that we sample a normal

map with half a tile shift in the x direction and we rotate it in order to

have independent features from the previous normal map. These two tiles

are blended together using a blending factor. The next two normal maps

follow the same idea, the first one is sampled with a shift in the y

direction and the second is shifted in the x and y direction. Both these

normal maps are rotated and combined together using the same blending

factor. To get the final normal value, both normal maps that were

combined using the blending factor are blended once more. To conclude

this final blending step of normal maps a scaling operation has to be

performed. This scaling operation avoids the problem of having a

resulting normal closer to the actual average normal, which is common

when several normal vectors are added together.
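A sketch of the blending just described (the half-tile shifts and rotations happen at sampling time and are omitted here; rescaling x and y before normalization stands in for the scaling operation mentioned above):

import numpy as np

def blend_normals(n1, n2, n3, n4, b):
    # First pair: the base sample and the x-shifted, rotated sample.
    a = (1.0 - b) * np.asarray(n1, float) + b * np.asarray(n2, float)
    # Second pair: the y-shifted and the xy-shifted, rotated samples.
    c = (1.0 - b) * np.asarray(n3, float) + b * np.asarray(n4, float)
    n = 0.5 * (a + c)        # final blend of the two pairs
    n[:2] *= 2.0             # rescale x,y so detail is not averaged away
    return n / np.linalg.norm(n)

print(blend_normals([0.2, 0, 1], [-0.1, 0.3, 1], [0, -0.2, 1], [0.1, 0.1, 1], b=0.5))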

4.3 Reflection

In order to simulate dynamic reflections of objects on our river surface

we used a method commonly called planar reflections. This approach has

been widely used since the introduction of the programmable pipelines


because of its ease of use and how inexpensive it is in terms of resources.

This technique is based on the use of a texture called a reflection map, which is an inverted version of what is visible above the water level and that we want to reflect. To obtain a reflection map, we start by defining a clipping plane, which has to be at about the same height as the river surface. This clipping plane will be used to cut all the geometry below the river surface that we're not interested in rendering. If we didn't clip the contents below the river surface, we would also reflect the contents of the river, which would break the illusion of reflection. After

that we save an inverted copy of this clipped scene to a texture.
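A sketch of the mirroring step, assuming the river surface lies near the horizontal plane y = h:

import numpy as np

def reflection_matrix(h):
    # Mirror the scene about the plane y = h: y' = 2h - y.
    M = np.identity(4)
    M[1, 1] = -1.0
    M[1, 3] = 2.0 * h
    return M

# While rendering the mirrored scene, clip everything below the plane,
# e.g. with the clip plane (0, 1, 0, -h): keep points where y - h >= 0.
print(reflection_matrix(2.0) @ np.array([1.0, 5.0, 0.0, 1.0]))  # y: 5 -> -1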

5. CONCLUSION

This document presented a new flow visualization algorithm called SiViFlow and explained each of its components. We start by generating a river surface mesh and calculating several attributes that will be useful for the next stages of the algorithm. These attributes include the river width, the distance of each vertex to both margins and the flow of each vertex. We then start the update loop of the algorithm, where we first update the state of our river particles by creating, deleting and updating them, and then fill the flow and auxiliary textures with data. These textures are sent to the GPU, which reads them inside our Visualization Engine and outputs the appearance of a flowing river.


REFERENCES

[1]. Tomas Akenine-Möller, Eric Haines, and Naty Hoffman. Real-Time Rendering, 3rd Edition, chapter Reflections, pages 386-391. A. K. Peters, Ltd., Natick, MA, USA, 2008.

[2]. Robert Bridson. Fast Poisson disk sampling in arbitrary dimensions. In ACM SIGGRAPH 2007 Sketches, SIGGRAPH '07, New York, USA, 2007. ACM.

[3]. Robert Bridson, Ronald Fedkiw, and Matthias Müller-Fischer. Fluid simulation: SIGGRAPH 2006 course notes. In ACM SIGGRAPH 2006 Courses, SIGGRAPH '06, pages 1-87, New York, USA, 2006. ACM.

[4]. Yuanzhang Chang, Kai Bao, Youquan Liu, Jian Zhu, and Enhua Wu. Particle importance based fluid simulation. In Proceedings of the 2009 Sixth International Conference on Computer Graphics, Imaging and Visualization, CGIV '09, pages 38-43, Washington, DC, USA, 2009. IEEE Computer Society.

[5]. Nuttapong Chentanez and Matthias Müller. Real-time simulation of large bodies of water with small scale details. In Proceedings of the 2010 ACM SIGGRAPH/Eurographics Symposium on Computer Animation, SCA '10, pages 197-206, Aire-la-Ville, Switzerland, 2010. Eurographics Association.

[6]. Jonathan Cohen, Marc Olano, and Dinesh Manocha. Appearance-preserving simplification. In Proceedings of the 25th Annual Conference on Computer Graphics and Interactive Techniques, SIGGRAPH '98, pages 115-122, New York, USA, 1998. ACM.

[7]. Jonathan M. Cohen, Sarah Tariq, and Simon Green. Interactive fluid-particle simulation using translating Eulerian grids. In SI3D, pages 15-22. ACM, 2010.

[8]. Mathieu Desbrun and Marie-Paule Gascuel. Smoothed particles: a new paradigm for animating highly deformable bodies. In Proceedings of the Eurographics Workshop on Computer Animation and Simulation '96, pages 61-76, New York, USA, 1996. Springer-Verlag New York, Inc.

[9]. Daniel Dunbar and Greg Humphreys. A spatial data structure for fast Poisson-disk sample generation. ACM Transactions on Graphics, 25(3):503-508, 2006.

[10]. Wolfgang Engel. ShaderX: Shader Programming Tips and Tricks with DirectX 9, chapter Rippling Reflective and Refractive Water, pages 357-362. Wordware Publishing, 2003.


LGURJCSIT

Volume No. 1, Issue No. 1 (Jan-March 2017) pp. 37-43


Effects of Mobile Phone Radiation on Human Health

Shazia Saqib

Abstract: - In the past 20 years the use of mobile phones has increased manifold; since 2015 there have been more mobile phones than humans on this earth. This makes the world a hub of communication as well as a centre of social networking. There has been huge growth in antennas and towers to support the increasing use of mobile phones. Along with that, there is a growing concern that the radiation emitted by base stations and these towers is dangerous for health. Although many believe that it is yet to be proved that this radiation is injurious to health, especially harmful to the heart, brain, nervous system and more, a large number of mobile users are convinced that mobile radiation is certainly dangerous. This research is an effort to find the actual impact of mobile phone radiation on human health and the impact of heavy mobile usage on our society, as physical contact has now been replaced by peer-to-peer (P2P) interaction on the phone.

Keywords: - Ionization Energy, SAR, Electromagnetic Spectrum.

—————————— ——————————

1. INTRODUCTION

In the past decade or two, advancements in computing and communication show the world is changing very fast. Mainframe systems in closed, protective environments have been replaced by modern computing devices, e.g. laptops, palmtops etc.; legacy systems faced immobility problems due to their size and energy requirements.

The heterogeneity of the technology achieves autonomy in accessing information, enabling access to information anywhere. The traditional solid-state medium of networks has been replaced with

Shazia Saqib

Department of Computer Science

Lahore Garrison University

Lahore, Pakistan

[email protected]


a wireless transmission medium which is part of the electromagnetic (EM) spectrum. This transformation of technology has taken society by storm [1].

As these technologies spread across business, conflicting issues arise side by side. Independent research raises serious concerns that our living environment is highly saturated with electromagnetic fields, while the competent authorities maintain that such observations are unknown to them. All these devices, including mobile phones and base station antennas on towers and buildings throughout the world, emit radiofrequency (RF) radiation. The strength of these radio waves is higher in populated areas. The radiations are of two types: Ionizing Radiation and Non-Ionizing Radiation [1].

In the electromagnetic spectrum, ionizing radiation, with its short wavelength and high frequency, is the most energetic contributor. Exposure to ionizing radiation causes burns, hair loss, cancer, birth defects in infants, immediate death and other illnesses. The energy of ionizing radiation is enough to pull an electron from an atom, thus creating an ion.

Figure 1. (Ericsson, 2015)


Although high-level ionizing rays are dangerous for health, low-level exposure shows benefits in the medical field, making it easier to diagnose problems. About 300 joules of short-wavelength, high-frequency ionizing radiation is enough to kill a human being. If we increase the wavelength and lower the frequency, the effect becomes much weaker: roughly 1.5 million joules of such radiation would be equivalent to 300 joules of ionizing radiation. Long-wavelength, low-frequency radiation includes infrared light, microwaves, and radio frequencies [1].

Specific Absorption Rate (SAR) measures the health effects of non-ionizing radiation. It is defined as the power absorbed per unit mass of tissue and has units of watts per kilogram (W/kg) [1]. Many countries set maximum SAR levels for modern handsets. The FCC has set a SAR limit for the USA of 1.6 W/kg, averaged over a volume of 1 gram of head tissue, while the European limit is 2 W/kg averaged over a volume of 10 grams of tissue [2].
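For reference, the local SAR underlying these limits is defined as

\mathrm{SAR} = \frac{\sigma\,|E|^{2}}{\rho}

where \sigma is the tissue conductivity (S/m), E the induced electric field (V/m) and \rho the tissue mass density (kg/m^3); the regulatory limits quoted above average this quantity over 1 g or 10 g of tissue.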

The number of base stations and the cell phone density in an area determine the radio frequency level.

• Average cell phone talk time of as little as 500 minutes a month can increase the probability of brain cancer by 140% to 300%.
• Cell phone radiation damages living DNA.
• Cell phone radiation affects sensitive brain tissues.
• Six years of cell phone use increases the risk of developing an auditory nerve tumor by 50%.
• A two-minute cell phone call can alter a child's brain function for an hour.
• Cordless phones pose even higher cancer risks than cell phones [3].

Russians used this technology to intercept conversations within the US embassy compound; as a result, an American embassy staff member was diagnosed with leukemia and sent back to the USA. The person's replacement got the same disease too. Staff members of the US embassy also complained of other problems such as memory loss, brain fog, loss of focus and insomnia during their service in the embassy [3].

These radiations are dangerous both for the environment and for humans. Another issue is the disposal of old computing devices, which contain plastic, another health hazard. The plastics, heavy metals and gases create products that are dangerous for human beings. Electromagnetic pollution is the most dangerous form of pollution, as it is invisible and insensible [3].

Besides the physical waste, the other important concern due to the increase of such devices is energy consumption, which increases day by day. A huge amount of the waste produced is stored in attics, basements, office closets and storerooms. These computers are made of over a thousand materials, such as lead, cadmium and mercury. Some of these materials are known to be highly toxic.

The lead in these devices prevents us from dumping them; if it leaks into water systems, it can create havoc. A monitor contains four to six pounds of lead [4], and the mixture of phosphorus with the lead protects the user from radiation. The waste growth rate of electronic equipment is three times that of other municipal waste [5].

Recent developments in wireless and PDA technology have reduced the lifetime and cost of these devices. Devices change with user requirements through minor updates, for example one system lacking Bluetooth while another has it.

2. FINDINGS

Not all of the radiation that moves invisibly through our bodies is harmful. We know that invisible EMR waves can move easily through the concrete walls of buildings, and they can just as easily pass through our body's soft tissue. Signal communication travels through barriers and obstructions as it connects from tower to tower. Long-term EMR exposure carries increased risk and is making our society sicker; some findings are listed here. The main sicknesses associated with EMR exposure are sleep disorders and insomnia, headaches and migraines, immune system disorders, increased blood pressure, learning disabilities, heart disease, leukemia, fatigue, memory loss, depression, loss of concentration, DNA damage, brain tumors, Alzheimer's disease, Parkinson's disease, autism, hormonal imbalance etc. [3]


The survey shows that doctors and psychologists agree that the mobile phone is a hazard both to health and to social values. Mobile communication makes people introverted and isolated, and increases their inability to deal with face-to-face communication and interaction [6].

According to an International Agency for Research on Cancer (IARC) report by the World Health Organization, published in USA Today, medical experts say that cell phones are "possible carcinogens", although there isn't much evidence yet; we're not going to see the effects on the heavier users for a decade or so [7][8].

Global System for Mobile Communication (GSM) and Universal Mobile Telecommunication System (UMTS) phone users are considered in this study. The literature shows that UMTS phone power is lower than GSM phone power in average use. We see an increase in the use of mobile phones with the advancement of technology, but the other factor, the time the brain is exposed to radio radiation, has decreased [9].

A duration of at least ten years of use is associated with an increased risk of developing a brain tumor. Various epidemiological studies show links between long-term use of mobile phones and an increase in brain tumor risk [10]. There are also a number of indirect health hazards connected to mobile phones which are caused simply by using them; phone use while driving, a frequently observed cause of road accidents, is the best example.

The national radiation advisory authorities of Austria, France and Germany have recommended measures to their citizens to minimize exposure, such as using hands-free kits to keep the mobile away from the head and keeping the mobile phone away from the body [2].

In September 2007 the EEA recommended raising awareness about the risks of mobile phone use, and citizens should be familiar with this report. The EEA found sufficient evidence of risk to society, and especially to children. Mobile phones should not be placed against the head: text messaging etc. should be preferred. Hands-free kits reduce radiation levels by about ten times compared with the phone pressed to the head.


3. CONCLUSION

Governments should label mobile handsets as a ‘possible carcinogen’, in line with the IARC decision. Our society needs to sit down and chalk out rules and ethics for mobile communication. Perhaps, as with smoking, a warning needs to be put on mobiles, or maybe at the start of every call, that too much use of this device is dangerous for health as well as for mental capabilities [11]. It can be concluded that, given the EMF radiation exposure caused by heavy cell phone use, caution is needed when using cell phones. More research and development is needed for risk assessment based on a higher number of long-term users [12]. There is a dire need to set up bodies in collaboration with health departments to conduct research at a serious level.


REFERENCES

[1]: Health risks from mobile phone radiation – why the experts disagree.

(2011, OCT 12). Retrieved from www.eea.europa.eu/ News / Health

risks from mobile phone radiation – why the experts disagree.

[2]: The Cell Phone Poisoning of America. (2008). Logical Health LLC.

[3]: Mobile Phone Use and Brain Tumors in Children and Adolescents:

A Multicenter Case–Control Study. (2011). JNCI.

[4]: A. Raouf Khan, N. Z. (2008). Health Hazards Linked to Using

Mobile Cellular Phones. Journal of Information & Communication

Technology, 101-108.

[5]: Ashraf A. Aly, S. B. (2008). Research Review on the Biological

Effect of Cell Phone Radiation on Human. IEEE.

[6]: Cooper, C. (2001, September 10). Where do old computers go to

die? Retrieved from CNETNews.com.

[7]: FranzAdlkofer, I. B. (2009, March). How Susceptible Are Genes to

Mobile Phone Radiation? State of the Research–Endorsements of Safety

and Controversies–Self-Help Recommendations.

[8]: King, R. (2011, June 3). what-the-whos-cellphone-cancer-statement-

really-means.htm. Retrieved from www.ieee.org.

[9]: Mary Brophy Marcus, L. S. (2011, June 1). WHO Cellphones

possibly carcinogenic - USATODAY.com. Retrieved from

www.usatoday.org.

[10]: Ravi Jain, J. W. (2002). Environmental Design for Pervasive

Computing Systems. MOBICOM’ 02, 23-28.

[11]: Savita Chauhan, D. o. (n.d.). ENVIRONMENTAL AND HEALTH

HAZARDS OF MOBILE DEVICES.

[12]: Yilong Lu, Y. H. (2012). Biological Effects of Mobile Phone

Radiation. IEEE.


LGURJCSIT

Volume No. 1, Issue No. 1 (Jan-March 2017) pp. 44-53


A Quantum Optimization Model for Dynamic Resource Allocation in Cloud Computing

Tahir Alyas, Nadia Tabassum, Umer Farooq

Abstract: - Quantum computing and cloud computing technologies have the potential to change the dynamics of future computing. Time and space complexity are the basic constraints that determine efficient cloud service performance. Quantum optimization of cloud resources in a dynamic environment provides a way to deal with the challenges of present classical cloud computation models. Combining the fields of quantum computing and cloud computing will result in evolutionary technology. Virtual resource allocation is a major challenge facing cloud computing with dynamic characteristics; a single criterion for evaluating a resource allocation strategy cannot satisfy real-world demands in this case. A quantum optimization resource allocation mechanism for the cloud computing environment is based on two factors: improving user satisfaction and making the best use of the resources of cloud computing systems. A dynamic resource allocation mechanism for cloud services, based on negotiation and keeping the focus on preferences and the pricing factor, is therefore proposed.

—————————— ——————————

1. INTRODUCTION

Now a day, in distributed environment solving different type of

problems having complex computation becomes increasingly complex,

and dynamic behavior are making the requirements higher and higher,

so traditional theories and methods face serious challenges.

Tahir Alyas, Department of Computer Science, Lahore Garrison University, Lahore

Nadia Tabassum, Virtual University of Pakistan

Umer Farooq Department of Computer Science, Lahore Garrison University, Lahore


1.1. Cloud Services

In the computer era, speed, storage and transmission capacity have turned into reliable administrative services for large server farms, end customers and all organizations. From the point of view of services, cloud computing can be recognized as:

a) provision of any sort of software as a service;
b) need-based provision of the service;
c) service administration for computer resources.

1.2. Characteristics of Cloud Computing

The main characteristics of cloud computing are the following:

1.2.1. On-demand self-service:

Provision of demanded cloud resources is made possible by cloud service providers and is referred to as on-demand self-service. Online services allow users to access these cloud services easily. The prime feature of on-demand self-service is offering the required infrastructure to the client without disturbing the host cloud operations. Several computing resources are offered on demand, like data storage, network bandwidth and server time, as independent processes without involving the cloud service providers. The adaptability of on-demand self-service lies in using servers whenever required [1].

1.2.2. Resource pooling:

Cloud vendors offer a multi-tenant framework that joins various computing resources to serve different consumers, with pooled assets to meet consumer demands. The consumer has no information or knowledge about which cloud assets are being used. Storage, computing, memory, network bandwidth and virtual machines are basic examples of resource pooling. Virtual resources are assigned dynamically on client request by adjusting the load-sharing process.


1.2.3. Rapid elasticity:

Cloud computing services can be shared and released automatically. The capabilities can be changed flexibly to satisfy customer requests and needs whenever they want. Cloud computing resources are promptly accessible, and they also enhance throughput automatically in times of growing need, according to the client's current load.

1.2.4. Measured service:

Measured services allow a monitoring plan for cloud resource utilization. The measured information is then used for resource enhancement and optimization, pay-per-use charging and quality of service, enabling transparency for consumers.

1.2.5. Broad network access:

Broad network access is one of the most effective characteristics of cloud computing. Users have the facility to use different cloud computing services on several devices and in the required formats. Easy communication is also possible in cloud computing through various tablets, workstations, laptops and smartphones.

1.2.6. Dynamic Resource Allocation:

The resources allocated in a cloud environment change dynamically based on the current environment. These services may be web servers, FTP servers or virtual machines. A game may be constructed as a solution for the resource allocation strategy. Resource allocation tasks are mainly computing tasks and involve a timing factor. When a new request for a resource task comes in, conflict and negotiation occur, leading to dynamic resource allocation.

“It is more efficient to realize resource discovery, resource matching, task scheduling and execution. However, it is important and complicated to allocate the resources to all the tasks in cloud computing. A cloud computing service should make a resource assignment for every computing resource that is capable for an applicant, along with the task scheduling. Hence, the essence of scheduling in cloud computing is capable resource allocation in computing.”

2. IMPORTANCE OF RESOURCE ALLOCATION

Resource allocation is a mechanism for providing available resources on demand to cloud applications over the internet. In a cloud environment, poorly managed allocations result in inefficient resource use. Efficient resource planning, as a whole, enables service providers to manage the resources and solve several related problems [2].

Various resource allocation strategies are available for coordinating cloud provider activities, allocating and utilizing resources within the limits needed to meet the requirements of the cloud application. The task can be completed by keeping in view the amount of resources needed by each application to fulfil a user job. The order and time of allocation of resources are also inputs to resource optimization. The following situations should be avoided when optimizing resources in a cloud environment:

a) Resource contention (deadlock on the same resources), when more than one cloud application tries to use the same resource at one time.
b) Shortage of resources (limited VMs), which occurs when resources are limited.
c) Resource fragmentation (plenty of resources available): enough resources are available but the required resources cannot be allocated to the current application.
d) Over-provisioning of resources, when an application gets more resources than it demanded.
e) Under-provisioning of resources, when an application is assigned fewer resources than required.

3. LITERATURE REVIEW

As cloud computing gains popularity day by day, it is hard to maintain privacy in datasets while providing adequate retrieval and searching procedures. Steven Zittrower introduces a novel approach in the field of encrypted searching that allows encrypted phrase searches and proximity-ranked multi-keyword searches over encrypted datasets on an untrusted cloud [3].

Cloud computing is shareable, internet-based computing. Quality of Service (QoS) rankings of cloud services give valuable information for selecting a particular cloud service from a set of available services, but QoS ranking usually requires high computational cost and is time consuming. Yaqoob and Singh presented a novel framework for ranking cloud services in their paper "Better Ranking of QoS Feedback System in Cloud Computing". They proposed QoS ranking to forecast the QoS rankings directly using two personalized prediction approaches: one calculates similarity values against other training values so that similar scores can be identified; in the other, the cloud services provider considers user feedback about cloud services and ranks them accordingly [4].

Encryption, also known as encoding, protects content or data by making it impossible for someone unauthorized to understand or identify it. Converting critical transmissions into encrypted form using cryptography is nothing new; it has been done for a very long time, since ca 400 BCE, when armies used cryptography to avoid inconvenient discovery of their communication. For the following millennia it was believed that encrypted information could not be identified until decrypted into its original, understandable language. This was an acceptable approach until recently, when it became impracticable to conveniently decrypt a large number of encrypted documents to find some specific keywords rather than decrypting an entire document.

To solve this problem searchable encryption was introduced, but until now it has failed to meet expectations regarding the various needs of accurate searchable encryption. Most techniques in searchable encryption make use of mathematical structures such as Bloom filters or trap-doors, but this approach does not work well because they facilitate only Boolean searches and do not support the much-wanted phrase searching.


This approach did not address sub-word matching, exact matching, regular expressions, natural language searches, frequency ranking, or proximity-based queries; this limitation makes it hard for the method to be accepted as common practice, because all these search formats are now supported by search engines and modern users expect to have them [3].

4. OPTIMIZATION PROBLEM

The optimization problem for cloud resource allocation aims to optimize the use of different resources; factors like job completion time, topology of the cloud, minimum execution time, resource availability and request satisfaction are considered [5].

The task allocator uses a knowledge base of all the VM types that the user can request and the host machine can configure. This information is used in the computation of the optimization matrix. All types of negotiation can be done on priority and pricing factors, as shown in Figure 1.

Figure 1: Task Optimization


Preference lists are used by the host machines to check the accuracy and reliability of task allocation by means of the VM with respect to the task assigned.
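A small sketch of one cell of such an optimization matrix (the field names, weights and prices are illustrative assumptions, not values from the paper):

def score(task, vm, prefs, price_weight=0.5):
    # A VM only qualifies if it can hold the task at all.
    fit = 1.0 if vm["cpu"] >= task["cpu"] and vm["ram"] >= task["ram"] else 0.0
    # Trade the host's preference for this VM type against its price.
    pref = prefs.get(vm["type"], 0.0)
    return fit * ((1 - price_weight) * pref - price_weight * vm["price"])

vms = [{"type": "small", "cpu": 2, "ram": 4, "price": 0.05},
       {"type": "large", "cpu": 8, "ram": 32, "price": 0.40}]
task = {"cpu": 4, "ram": 8}
prefs = {"small": 0.2, "large": 0.9}
best = max(vms, key=lambda vm: score(task, vm, prefs))  # picks "large"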

5. CLOUD SERVICE LOGICAL ARCHITECTURE

In cloud computing most of the services used for resource provision are independent of each other. The different components of a cloud service can work loosely coupled, integrated using conditional services. Complex cloud-based services include several components, as shown in Figure 2. Collaboration among different services during allocation of resources must be considered in this regard.

Quantum cognition is a research field which uses the mathematical apparatus of quantum theory to model cognitive phenomena. It covers human thought processes, memory, conceptual reasoning, judgment and perception, and decision making. It is clearly distinguished from the quantum mind hypothesis, which is based on microphysical quantum mechanics rather than a macro-physical operating model of the human brain.

Figure 2: Cloud Service Logical Architecture (services 1 to n, their components, and the VMs they run on)


Quantum cognition is built on the generalized quantum paradigm: information processing, the contextual dependence of information and probabilistic reasoning can be scientifically explained using the framework of quantum information and quantum probability theory.

6. VIRTUAL MACHINE MIGRATION AFTER RESOURCE

ALLOCATION.

After the resource allocation is determined, the next focus is how to maximize each virtual machine's utility. In this part, we mainly work on the virtual machine migration problem, where many virtual machines compete for limited resources [6].

7. PROPOSED METHODOLOGY

A user task is submitted to the quantum optimization model for resource allocation, which decides how resources will be allocated to the submitted task. These tasks are categorized on the basis of execution time, which is measured against the existing task mapping in the system. "The optimum resource allocation is exponential in huge systems like big clusters, data centers or Grids. Since resource demand and supply can be dynamic and uncertain, various strategies for resource allocation are proposed." The user-submitted tasks are the inputs to the system. These inputs are measured against the resources, like hardware, available to the system. A task is also measured on the basis of certain parameters like type of service, cost and security. In a cloud system, trust can be established by meeting all parameters of the SLA (service level agreement) and providing successful transactions between the interacting parties. The SLA parameters are response time, throughput and quality of service.

The resource manager then forwards these tasks to the quantum task optimizer. The task optimizer characterizes each task as a long, medium or short job in order to execute it. If more than one job is available in any category, they are queued for execution or further processing. Finally, the quantum job optimizer distributes these jobs to the cloud environment.
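A minimal sketch of the categorization and queueing step (the thresholds are illustrative assumptions):

from collections import deque

def categorize(tasks, short=10, long=60):
    # Split submitted tasks into queues by estimated execution time.
    queues = {"short": deque(), "medium": deque(), "long": deque()}
    for t in tasks:
        if t["exec_time"] <= short:
            queues["short"].append(t)
        elif t["exec_time"] <= long:
            queues["medium"].append(t)
        else:
            queues["long"].append(t)
    return queues

queues = categorize([{"id": 1, "exec_time": 5}, {"id": 2, "exec_time": 90}])
print(len(queues["short"]), len(queues["long"]))  # 1 1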


8. CONCLUSION

A resource allocation mechanism using quantum optimization may perform better with respect to lower task allocation time, lower resource wastage, and higher user request satisfaction. This underlines the significance of, and demand for, resource allocation schemes in cloud computing: resource allocation on the cloud aims at avoiding underutilization of resources.

Figure 3: Quantum Task Optimization Proposed Model (user-submitted tasks pass through the resource manager, which checks execution time, match-making policy, security, hardware resources such as CPU, I/O, storage and communication, VM type, cost and speed, and SLA parameters such as response time, throughput and QoS; the quantum task optimizer then queues long, medium and small jobs for the cloud environment)


REFERENCES

[1] G. Zhang, C. Li and C. Xing, "A Semantic++ Social Search Engine

Framework in the Cloud," in CPS, 2012.

[2] A. Karthick, D. Ramaraj and R. Kannan, "An Efficient Tri Queue

Job Scheduling using Dynamic Quantum Time for Cloud

Environment," IEEE, pp. 871-877, 2013.

[3] S. Zittrower and C. Zou, "Encrypted Phrase Searching in the

Cloud," IEEE, pp. 764-771, 2012.

[4] T. M. Singh and S. I. Yaqoob, "Better Ranking Of Qos Feedback

System in Cloud Computing," International Journal of Advanced

Research, pp. 1128-1136, 2014.

[5] J. Dai, B. Hu and L. Zhu, "Research on Dynamic Resource

Allocation with Cooperation Strategy in Cloud Computing," in

International Conference on System Science, Engineering Design

and Manufacturing Informatization, 2012.

[6] B. Kumar Ray and S. Khatua, "Negotiation Based Service

Brokering Using Game Theory," IEEE, pp. 1-8, 2014.


LGURJCSIT

Volume No. 1, Issue No. 1 (Jan-March 2017) pp. 54-61


Energy Efficient Schemes for Wireless Sensor Network

(WSN)

Sadia Batool,

Mohtishim Siddique

Abstract: - Conservation of energy is the main design issue in a wireless sensor network (WSN), where only limited energy is available at each node. Although different solutions have been introduced for typical wireless networks, cellular networks, MANETs, and other short-range wireless local area networks, they are often not feasible for a large-scale WSN. For this purpose, multiple mobile sink nodes can be deployed to increase the life of the sensor network. This can be achieved by splitting the lifetime into equal time intervals known as rounds. Employing multiple sink nodes can also make the sensor network more energy efficient. Another way to make the sensor network energy efficient is to logically divide the deployment area into static clusters; by adopting the static cluster strategy, energy consumption can be minimized. The two major wireless standards used by WSNs are 802.15.4 and ZigBee [1], [2]. They are low-power protocols with a maximum range of around 100 m (at 2.4 GHz); however, performance is an issue. In order to assure a WSN's survivability and increase the network lifetime in such environments, various energy efficiency schemes have been proposed in the literature. Energy is a valuable commodity in wireless networks due to the limited battery of the handy devices, and the energy problem becomes stiffer in ad-hoc WSNs.

Keywords: WSNs, clustering scheme, energy efficient designs.

—————————— ——————————

Sadia Batool, Mohtishim Siddique

Department of IT & CS

Minhaj University

Lahore, Pakistan

[email protected]

[email protected]


1. INTRODUCTION

Typically a WSN [3] is composed of sink nodes and varying numbers of wireless sensor nodes. All nodes have the ability to collect and process data independently. Wireless sensor systems are made up of many sensor nodes which sense the physical environment, measuring temperature, humidity, light, sound, vibration, and so forth. The fundamental task of a sensor node is to gather information from the sensing field and send it to the end user by means of a sink node. These sensor nodes can be deployed in numerous applications. Nowadays, work on WSNs addresses low-power issues, communication, sensing, energy storage, and computation.

Figure 1: Communication of Nodes and Sink Node

A WSN is a self-organizing system of nodes communicating among themselves using radio signals, deployed in quantity to sense, monitor and understand the physical world. Wireless sensor nodes are called motes. WSNs have a wide range of applications in different industries, fields of science, transportation, infrastructure, and security. Usually, a client node has a limited source of energy, and nodes are installed in areas that are hard to reach. Therefore, frequent recharging is not possible, and efficient energy solutions are required for long-term applications to perform.

2.1. Structure of wireless node

A WSN is normally a fixed ad-hoc network [4] that comprises hundreds of nodes. Each node has a communication module for transferring data, a short-range wireless transmitter/receiver, a transducer, a low-power processor and, most importantly, a limited source of energy (power). The transducer has an analog-to-digital converter (ADC) [5]. These nodes are capable of monitoring the surrounding area, processing the collected data and then sending that data towards the sink node of the network. The sink nodes transmit the data received from the nodes to some other control station.

Figure 2: Structure of a Node

The performance of a node relies on the functioning of these four modules (transducer, processor, communication, power). A node is able to work in three roles: data collection, cluster head, and data relay. If it works on data collection, it passes the data directly to the communication module to be sent. A cluster head or sink node has to collect and process data received from the members of the network. Similarly, if a sensor performs the task of a switch/relay, data is received from neighbor nodes and sent again towards the sink or other nodes.

2.2. Static Cluster

Nodes are provided with a limited energy source, which is why efficient energy utilization is very important in this area. In this paper, different schemes are discussed that address the problem of unbalanced energy utilization, which results in less efficient energy consumption. The concept of the static cluster [6] is introduced by dividing the deployment area into sub-areas; these sub-areas are called static clusters. This division can control the coverage problem along with providing efficient energy consumption.
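A minimal sketch of such a static division, assuming a rectangular deployment area split into a fixed grid (all names and sizes are illustrative):

def static_clusters(nodes, area_w, area_h, rows, cols):
    # Assign each node (x, y) to the fixed grid cell that contains it.
    clusters = {}
    for (x, y) in nodes:
        r = min(int(y / (area_h / rows)), rows - 1)
        c = min(int(x / (area_w / cols)), cols - 1)
        clusters.setdefault((r, c), []).append((x, y))
    return clusters

print(static_clusters([(10, 20), (80, 95)], 100, 100, rows=2, cols=2))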

3. NOVEL LOAD BALANCING SCHEME

A load balancing scheme is used in a wireless sensor network if the aim is to provide constant and reliable service. All applications that generate periodic data for a wireless sensor network require the maximum possible network lifetime. In WSNs, energy is assumed to be a strictly limited resource, but an improved lifetime of the nodes is required to optimize energy consumption. That is why a novel load balancing scheme [7] is used to balance the consumption of energy in WSNs.

3.1. Energy Problems

Every component of a system can be designed in an optimized way to reduce energy consumption. Energy is consumed when a node senses, communicates and processes data, and efficient algorithms can be designed to improve energy savings. Normally, the process with the maximum power consumption is communication, rather than sensing or data processing; that is why distributed and localized algorithms are needed at the different levels of communication [5]. A simple model for energy consumption is proposed by [8], in which the energy dissipated by a node while sending and receiving data is simulated.
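A common form of such a model, the first-order radio model frequently used in this literature (whether [8] uses exactly these terms is an assumption on our part), charges

E_{Tx}(k, d) = E_{elec}\,k + \varepsilon_{amp}\,k\,d^{2}, \qquad E_{Rx}(k) = E_{elec}\,k

for transmitting or receiving a k-bit message over distance d, where E_{elec} is the electronics energy per bit and \varepsilon_{amp} the amplifier energy per bit per square meter.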

3.2. Different Protocols used in WSN

Different protocols are used in wireless sensor networks, including:

1. Protocol Stack

2. MAC protocol

3. TDMA based MAC Protocol

4. Power Aware Clustered TDMA (PACT)

5. Sensor-MAC

6. ALOHA with Preamble Sampling


In WSNs, different solutions have been proposed for the Medium Access Control (MAC) protocol. Super-frame time scheduling [9] is designed to address the issue of energy consumption. PACT (Power Aware Clustered TDMA) [10] combines clustering with an energy-efficient TDMA-based time schedule to reduce the overall energy use of large-scale WSNs. Another advancement in MAC protocols designed especially for WSNs is S-MAC (Sensor-MAC). It is a mixture of reservation- and contention-based schemes, planned to reduce the energy waste caused by collisions in the network, control-packet overhead, and idle listening [11].

Another protocol used in wireless networking is ALOHA with Preamble Sampling, which combines the standard ALOHA protocol with the preamble sampling technique [12] to minimize the error rate and maximize performance.

A prime choice for wireless sensor networks is to use Time Division Multiple Access (TDMA) based MAC schemes, because radios can be turned off during their idle times to conserve energy. The Energy-efficient TDMA (E-TDMA) is an extension of classical TDMA that minimizes the energy consumed by idle listening: it keeps the radio off when the node has nothing to transmit.

4. ROUTING PROTOCOLS

1. Conventional [13]

a. Multi-hop Routing

b. Direct Communication with sink node

c. Static Clustering

2. Low Energy Adaptive Clustering Hierarchy (LEACH)

3. Threshold Sensitive Energy Efficient Sensor Network (TEEN) [12]

5. TDMA AND ENERGY EFFICIENT TDMA

In classical Time Division Multiple Access, a node has to keep its radio on during the time slot allocated to it, whether or not there is any data to send. A lot of energy is therefore consumed by this idle listening.


On the other hand, E-TDMA outperforms basic TDMA by lessening the energy consumed in the idle state of the node. In this scheme, a node keeps its radio off during its allocated time slot when it has no data to send.
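As a toy illustration of the difference, the following MATLAB sketch compares the radio-on time per frame under classical TDMA and E-TDMA, assuming (hypothetically) that the node has data in only a fraction p of its allocated slots:

% Toy comparison of per-frame radio-on time: TDMA vs. E-TDMA (assumed values).
nSlots = 100;                  % slots allocated to the node per frame
tSlot  = 10e-3;                % duration of one slot in seconds
p      = 0.2;                  % fraction of slots in which the node has data
tOnTDMA  = nSlots * tSlot;     % TDMA: radio stays on in every allocated slot
tOnETDMA = p * nSlots * tSlot; % E-TDMA: radio on only in slots with data
fprintf('Radio-on time per frame: TDMA %.2f s, E-TDMA %.2f s\n', tOnTDMA, tOnETDMA);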

5.1. BMA-MAC: An Energy Efficient Scheme

Keeping in mind the above-mentioned problems, a new extension of the MAC protocol was developed to reduce energy consumption. The core purpose is to reduce the energy wasted by the idle state of nodes and by collisions during communication, while maintaining reasonable performance.

The operation of BMA-MAC is divided into rounds, as is done in LEACH [14]. Each round consists of two phases: a set-up phase and a steady-state phase.

6. CONCLUSION

In this paper, different schemes have been discussed that deliver performance but suffer from high energy consumption. TDMA and E-TDMA techniques were also discussed, which are more energy efficient than the other schemes. A new technique, BMA, which is an extension of MAC, performs better than the simple MAC protocol. It is basically intended for event-driven programs and applications, where data is transmitted to the cluster head only when meaningful events are detected. Analysing the performance of Bit-Map-Assisted Medium Access Control shows that it is more energy efficient than Time Division Multiple Access and Energy Efficient Time Division Multiple Access: energy is saved during idle periods, and the overall performance therefore improves.

Similarly, the energy consumption of E-TDMA is always better than that of TDMA, since the E-TDMA scheme keeps the node's radio off when there is no data to transmit. Finally, Bit-Map-Assisted Medium Access Control and Energy Efficient Time Division Multiple Access can be combined to form a more efficient scheme in which E-TDMA is used for long rounds while BMA is used where rounds are small or medium.


REFERENCES

[1]: Kumar, N., et al., Gesture Controlled Robotic Arm Using Wireless Networks. GESTURE, 2016. 3(1).

[2]: Kinney, P. Zigbee technology: Wireless control that simply works.

in Communications design conference. 2003.

[3]: Stallings, W., Wireless communications & networks. 2009: Pearson

Education India.

[4]: Gandham, S.R., et al. Energy efficient schemes for wireless sensor networks with multiple mobile base stations. in Global Telecommunications Conference, 2003. GLOBECOM '03. IEEE. 2003. IEEE.

[5]: Li, J. and G.Y. Lazarou. A bit-map-assisted energy-efficient MAC scheme for wireless sensor networks. in Proceedings of the 3rd International Symposium on Information Processing in Sensor Networks. 2004. ACM.

[6]: Ye, M., et al. EECS: an energy efficient clustering scheme in wireless sensor networks. in PCCC 2005. 24th IEEE International Performance, Computing, and Communications Conference, 2005. 2005. IEEE.

[7]: Kim, H.-Y., An energy-efficient load balancing scheme to extend lifetime in wireless sensor networks. Cluster Computing, 2016. 19(1): p. 279-283.

[8]: Heinzelman, W.R., A. Chandrakasan, and H. Balakrishnan. Energy-efficient communication protocol for wireless microsensor networks. in System Sciences, 2000. Proceedings of the 33rd Annual Hawaii International Conference on. 2000. IEEE.

[9]: Sohrabi, K. and G.J. Pottie. Performance of a novel self-organization protocol for wireless ad-hoc sensor networks. in Vehicular Technology Conference, 1999. VTC 1999-Fall. IEEE VTS 50th. 1999. IEEE.

[10]: Pei, G. and C. Chien. Low power TDMA in large wireless sensor networks. in Military Communications Conference, 2001. MILCOM 2001. Communications for Network-Centric Operations: Creating the Information Force. IEEE. 2001. IEEE.

[11]: Iyengar, S.S. and R.R. Brooks, Distributed sensor networks: sensor networking and applications. 2016: CRC Press.

[12]: Schurgers, C. and M.B. Srivastava. Energy efficient routing in wireless sensor networks. in Military Communications Conference, 2001. MILCOM 2001. Communications for Network-Centric Operations: Creating the Information Force. IEEE. 2001. IEEE.


[13]: Mahgoub, I. and M. Ilyas, Sensor network protocols. 2016: CRC

press.

[14]: Razaque, A., et al. P-LEACH: Energy efficient routing protocol for wireless sensor networks. in 2016 IEEE Long Island Systems, Applications and Technology Conference (LISAT). 2016. IEEE.


LGURJCSIT

Volume No. 1, Issue No. 1 (Jan-March 2017) pp.62-68


Face and Face Parts Detection in Image Processing

Mirza Shahwar Haseeb,
Rana Muhammad Bilal Ayub,
Muhammad Nadeem Ali,
Muhammad Adnan Khan

Abstract: This paper presents a procedure for automatically detecting one or more human faces, eye pairs, noses, and mouths in colour images. It relies on a two-step method that first detects regions of the colour image containing human skin and then extracts information from these regions to locate the face, eye pair, nose, and mouth. Detection is completed on a colour image containing only the identified image parts. A combination of thresholding, computed values, and supporting functions is used to reject structures that would falsely indicate a detected area.

Keywords— vision.CascadeObjectDetector, step, rectangle.


1. INTRODUCTION

In recent years, ensuring the security of information and systems has become both important and difficult. We have seen many computer crimes, break-ins by hackers, and security breaches in businesses. In all these cases, the offenders gained access to systems by exploiting information such as ID cards, keys, passwords, and PIN numbers. A solution to these problems is a technology that gives confirmation of the "true" individual [1], also known as "biometrics". Biometric access controls are automated systems for verifying or identifying a person based on physical characteristics, such as fingerprints or facial features, or on aspects of behaviour, such as handwriting style. Face recognition is an example of a biometric process that offers both high accuracy and low intrusiveness. In recent years, face matching and its steps, such as detection, have advanced rapidly, attracting not only computer science

Mirza Shahwar Haseeb, Rana Muhammad Bilal Ayub,

Muhammad Nadeem Ali and Muhammad Adnan Khan

Department of Computer Science, Lahore Garrison University

Lahore, Pakistan

[email protected],


researchers but also scientists from other fields who need it, and it now has many prospective uses in computer vision systems and automated access control systems. In particular, detecting the face, eyes, nose, and mouth is the first step of automatic face matching. This detection is not straightforward, however, because images vary in many respects, such as face pose (frontal or non-frontal).

The detection of faces, eye pairs, mouths, and noses in this project is based on a two-step approach. First, the image is filtered and checked for a face or face parts; if any are present, the image position and size of every face or part is returned. The system thus reports whether there is a human face or face part and, if so, where it is (or where they are). The goal is to recognise and locate human faces in an image regardless of their position, scale, in-plane rotation, angle, and pose (out-of-plane variation).

In this paper we present a practical implementation of a frontal-view face detection algorithm based on the Viola-Jones approach using the Matlab cascade object detector. Using the Matlab system object vision.CascadeObjectDetector, a face detector was developed and configured to use the classification model specified in the input file. Such a model file is produced with Matlab's cascade-training function, trainCascadeObjectDetector. The attentional cascade is trained using a set of positive samples (windows with faces) and a set of negative images [2]. To obtain a more accurate detector, the number of cascade layers and the function parameters were tuned. Finally, the performance of the face detector was analyzed for different tuning parameters. This paper is organised as follows. Section 2 presents the Viola-Jones face detection algorithm. Section 3 illustrates the implementation of the Viola-Jones algorithm using the Matlab cascade object detector. In Section 4, we test our proposed face detection system. Section 5 describes the conclusion and future work.

2. VIOLA-JONES ALGORITHM

The Viola-Jones algorithm was introduced for real-time detection of faces, eye pairs, noses, and mouths in an image. Its real-time performance is obtained by using Haar-type features computed rapidly from integral images, feature selection using the AdaBoost (Adaptive Boosting) algorithm, and detection with an attentional cascade.

2.1 Feature calculation


Starting from common characteristics of faces, such as the cheek region being brighter than the eye region, or the nose region being brighter than the eye region, five Haar masks (Fig. 1) were chosen for determining the features, calculated at different positions and sizes.

Figure 1: Five Haar masks

Haar features are calculated as the difference between the sum of the pixels in the white area and the sum of the pixels in the dark area. In this way, it is possible to detect contrast differences.

Figure 2 Detect Contrast

The sum of the pixels in rectangle D can be computed from four integral-image values, taken at the corner positions 1-4 of the rectangles A, B, C, D. The value at position 1 is the sum of the pixels in rectangle A; at position 2 it is A + B; at position 3 it is A + C; and at position 4 it is A + B + C + D. The sum over D can therefore be calculated as 4 + 1 - (2 + 3).
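A minimal MATLAB sketch of this computation follows; I, r1, c1, r2 and c2 are illustrative names for a grayscale image and the rectangle corners (with r1, c1 > 1 so the corner offsets stay in range):

% Integral image: ii(y, x) holds the sum of I over the rectangle (1,1)..(y,x).
ii = cumsum(cumsum(double(I), 1), 2);
% Sum over the rectangle (r1,c1)..(r2,c2) via the 4 + 1 - (2 + 3) rule.
rectSum = ii(r2, c2) + ii(r1-1, c1-1) - ii(r1-1, c2) - ii(r2, c1-1);
% A two-rectangle Haar feature is then the difference of two such sums
% (white-area sum minus dark-area sum).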


2.2 Attentional Cascade

The training produces a strong classifier that classifies the N x N windows well enough. Since, on average, only about 0.01% of the windows are positive (i.e., faces), only potentially positive windows need to be examined closely. To achieve a higher detection rate with fewer misclassifications, another strong classifier is used that correctly classifies the previously misclassified images. This creates the attentional cascade, as shown in Fig. 4. At the first layer of the attentional cascade, a strong classifier with few features is used, which will reject most negative windows [3]. A cascade of increasingly complex classifiers (with more features) follows, allowing a better detection rate. At each layer of the cascade, the negative images classified correctly are eliminated, and the new strong classifier faces a more difficult task than the classifier of the previous step.

Finally, the cascade of classifiers operates as follows (see the sketch after this list):

- the image is split into multiple windows;
- every window is an input to the attentional cascade;
- at every layer, the window is checked for a face according to that layer's strong classifier;
- if the result is negative, the window is rejected and the steps are repeated for another window;
- if it is positive, the window is a possible face and moves to the next layer of the cascade;
- the window contains a face if it passes all layers of the attentional cascade.
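The following MATLAB-style sketch summarises this logic for a single window; stageScore and stageThreshold are hypothetical helpers standing in for each layer's strong classifier and its decision threshold:

% Sketch of attentional-cascade evaluation for one N x N window.
isFace = true;
for layer = 1:numLayers
    % each layer is a boosted strong classifier over Haar features
    if stageScore(window, layer) < stageThreshold(layer)
        isFace = false;    % rejected at this layer: stop and try the next window
        break;
    end
end
% the window is declared a face only if it passed every layer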

3. DETECTOR IMPLEMENTATION USING MATLAB

3.1 Face Detection: To create the system object that detects faces in an image using the Viola-Jones algorithm, the following command is used:

FaceDetect = vision.CascadeObjectDetector('FrontalFaceCART', 'MergeThreshold', 8);

After the creation of the detector, the step method is called with the following syntax:

FBOX = step(FaceDetect, I)

The returned FBOX is an M-by-4 matrix describing M bounding boxes of the detected objects. Every row has 4 components, [x y width height], that specify in pixels the upper-left corner and


size of a bounding box. To use a detector obtained from training, the following steps are done:

i) open the desired image; ii) create the detector object; iii) detect faces in the image; iv) annotate the faces; v) show the image with annotated faces.
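A minimal MATLAB sketch of these five steps might look as follows (the image file name is illustrative):

I = imread('test.jpg');                               % i) open the desired image
FaceDetect = vision.CascadeObjectDetector( ...
    'FrontalFaceCART', 'MergeThreshold', 8);          % ii) create the detector object
FBOX = step(FaceDetect, I);                           % iii) detect faces: M-by-4 [x y w h]
J = insertObjectAnnotation(I, 'rectangle', FBOX, ...
    'Face');                                          % iv) annotate the faces
figure; imshow(J);                                    % v) show the annotated image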

3.2 Eye Detection: To create the system object that detects eye pairs in an image using the Viola-Jones algorithm, the following command is used:

EyeDetect = vision.CascadeObjectDetector('EyePairBig', 'MergeThreshold', 8);

After the creation of the detector, the step method is called with the following syntax:

EBOX = step(EyeDetect, I)

The returned EBOX is an M-by-4 matrix describing M bounding boxes of the detected objects; each row is [x y width height], as above. The same five steps as in Section 3.1 apply.

3.3 Nose Detection: To create the system object that detects noses using the Viola-Jones algorithm, the following command is used:

NoseDetect = vision.CascadeObjectDetector('Nose', 'MergeThreshold', 20);

After the creation of the detector, the step method is called with the following syntax:

NBOX = step(NoseDetect, I)

The returned NBOX is an M-by-4 matrix of [x y width height] bounding boxes, as above, and the same five steps as in Section 3.1 apply.



3.4 Mouth Detection: To create the system object that detects mouths using the Viola-Jones algorithm, the following command is used:

MouthDetect = vision.CascadeObjectDetector('Mouth', 'MergeThreshold', 95);

After the creation of the detector, the step method is called with the following syntax:

MBOX = step(MouthDetect, I)

The returned MBOX is an M-by-4 matrix of [x y width height] bounding boxes, as above, and the same five steps as in Section 3.1 apply.

4. EXPERIMENTAL RESULTS

The proposed face detection algorithm based on Viola-Jones was implemented using the Matlab cascade object detector with different setting parameters of the Matlab function vision.CascadeObjectDetector, resulting in several face, eye-pair, nose, and mouth detectors.


Figure 3: Results of face and face parts detection

5. CONCLUSIONS

Using vision.CascadeObjectDetector, a Matlab system object, a face, eye pair, nose, and mouth detector based on the Viola-Jones algorithm has been developed, starting from several pretrained classifiers for detecting frontal faces.

REFERENCES

[1]: S.-H. Lin, "An Introduction to Face Recognition Technology IC Media

Corporation," IC Media Corporation, vol. 3, no. 3, pp. 1-7, 2000.

[2]: E. A. and C. Lazar, "A Practical Implementation of Face Detection by Using Matlab Cascade Object Detector," in 19th International Conference on System Theory, 2015.

[3]: D. S. V. Ridhi Jindal Anuj Gupta, "Face Detection using Digital Image

Processing," International Journal of Computer Science and Software

Engineering, vol. 3, no. 11, November 2013.


LGURJCSIT

Volume No. 1, Issue No. 1 (Jan-March 2017) pp.69-82


Denoising of 3D magnetic resonance images using non-local PCA and Transform-Domain Filter

Laraib Kanwal,
Muhammad Usman Shahid

Abstract: The Magnetic Resonance Imaging (MRI) technology

used in clinical diagnosis demands high Peak Signal-to-Noise ratio

(PSNR) and improved resolution for accurate analysis and treatment

monitoring. However, MRI data is often corrupted by random noise

which degrades the quality of Magnetic Resonance (MR) images.

Denoising is a paramount challenge as removing noise causes

reduction in the fine details of MRI images. We have developed a

novel algorithm which employs Principal Component Analysis

(PCA) decomposition and Wiener filtering. We propose a two-stage approach: in the first stage, non-local PCA thresholding is applied to the noisy image, and in the second stage a Wiener filter is applied to this filtered image. Our algorithm is implemented in MATLAB, and performance is measured via PSNR. The proposed approach has also been compared with related state-of-the-art methods. Moreover, we present both qualitative and quantitative results which show that the proposed algorithm gives superior denoising performance.

Keywords:- MRI, PCA, Denoising, BM4D, PRI-NL-PCA, Wiener

filter.

1. INTRODUCTION

Magnetic Resonance Imaging (MRI) is a three-dimensional imaging technique extensively used in clinical diagnosis. It is superior to other diagnostic modalities due to its ability to easily differentiate and highlight tissues. MRI scanners form an image with the help of a magnetic field and radio waves. When the human body is placed in the

Laraib Kanwal, Muhammad Usman Shahid

Electrical Engineering Department

National University of Computer & Emerging Sciences

Lahore, Pakistan


strong magnetic field (typically 1.5 Tesla) provided by the MRI device, radio-frequency pulses are directed at the target area. These waves are absorbed and re-emitted by the tissue; the emitted waves are detected, and a 3D image is formed on the scanner.

The accuracy of MRI images is seriously degraded by random noise introduced during acquisition, transmission, and storage. As a result, the reliability of important tasks (registration, segmentation, visualization) decreases. The major noise in MRI is thermal noise (also known as Johnson noise). Generally, there are two approaches to reducing noise and improving the SNR of MRI images. One is to average the acquired data multiple times; however, this increases the acquisition time, whereas medical imaging demands a quick response: an MRI scan can already take from 20 minutes to many hours, and a long acquisition time also increases electricity and time costs. The other is to denoise the images after acquisition.

There are various methods for MRI denoising. They can be classified into two broad categories: filtering-domain and transform-domain methods. The Non-Local Means (NLM) filter proposed by Buades et al. performs best in the filtering domain, while the contourlet transform gives good denoising performance in the transform domain [1].

The remainder of the paper is organized as follows. Section 2 presents a literature review of denoising algorithms. Section 3 describes the proposed method and its implementation. The experimental results on a phantom dataset are analyzed in Section 4. Finally, Section 5 concludes the paper.

2. LITERATURE REVIEW

A lot of work has been done on denoising MRI images, and many variations of the NLM algorithm have been proposed. Manjón et al. proposed a modification of the NLM algorithm for denoising multispectral MRI images, termed the Multispectral NLM (MNLM) algorithm. Most MRI denoising methods are applied to single-channel images, ignoring the multispectral nature of these images; in multispectral MRI, the information of many channels is combined [2]. This work focuses on noise reduction and SNR improvement in multispectral MRI images. The MNLM algorithm is less efficient at removing noise, but its iterations can be increased for good results.


Coupé et al. [3] proposed a multithreaded approach to overcome the main drawback of the NLM algorithm: for 3D MRI data, its computation takes a significant amount of time, so the computational burden is high. Their optimized version of the NLM filter reduces computational time by a factor of 50 using a parallelized and optimized implementation. Later, Coupé et al. extended the work by further reducing the computational time, by a factor of 60, achieved through optimized pre-selection of voxels along with a block-wise implementation [4]. Coupé et al. continued this line of work by introducing wavelet sub-band mixing (WSM) into the fully automatic 3D optimized block-wise NLM filter; with this technique, both the image quality and the computational time are improved. The NLM filter combined with wavelet decomposition outperforms the optimized block-wise NLM implementation [5].

Gal et al. proposed an algorithm called Dynamic Nonlocal Means (DNLM) that exploits redundancy along the time axis [6]. DNLM makes use of the information redundancy in data volumes acquired at different time intervals. It achieves visually good denoising performance and, moreover, has a fast execution time compared with previously proposed methods.

Figure 1. Block Diagram of the Proposed Method

Liu et al. proposed the Pre-processed Enhanced NLM filter (PENLM) for 3D MR images. In this filter, the image is first pre-processed with a Gaussian filter, and the NLM filter is then applied to the squared-magnitude image. Introducing the Gaussian filter before the NLM filter reduces the noise disturbance [7].

Hu et al. used a combination of the NLM filter with the Discrete Cosine Transform (DCT). In this method, the DCT transforms the image from the spatial domain to the frequency domain. The DCT has the attractive properties of excellent energy compaction and dimensionality reduction, which are used to suppress noise. The NLM filter is then applied, with similarity weights calculated in this lower-dimensional DCT subspace. The traditional NLM filter computes similarity weights on grey-level information rather than in a DCT subspace; as a result, the proposed filter gives better denoising performance [8].

Manjón et al. proposed a new approach for efficient 3D MRI denoising based on two important properties of MR images, namely sparseness and self-similarity. The 3D MRI image is initially pre-filtered by DCT followed by hard thresholding, and the NLM filter is then applied; this yields the Pre-filtered NLM 3D (PRI-NLM3D) algorithm [9].

Maggioni et al. presented the Block Matching 4D (BM4D) algorithm, which implements grouping and collaborative filtering. Similar cubes are grouped together in a stack and filtered in the transform domain. The 4D transform applied to the similar groups exploits the local as well as non-local correlation present among the voxels of each cube and between corresponding voxels of different cubes. BM4D is composed of two cascaded stages: a hard-thresholding stage and a Wiener-filtering stage. The first filters the noisy image using hard thresholding, and the second uses collaborative Wiener filtering for coefficient shrinkage. The algorithm removes both Gaussian and Rician noise [10].

José V. Manjón proposed a method that uses a lower-dimensional subspace. This method has two stages: non-local Principal Component Analysis (PCA) thresholding and NLM filtering. The noisy image is first pre-filtered by a median filter; the subsequent steps are grouping, PCA decomposition, hard thresholding, and aggregation. For the PCA thresholding strategy, the local noise level is automatically estimated in the image. The filtered image is then passed to the second stage, which comprises a rotationally invariant NLM filter [11]. This method has been compared with related state-of-the-art algorithms and shows competitive results.

3. PROPOSED METHOD

In the proposed method, the image is median-filtered before similar cubes are grouped together: a simple 3D median filter improves the grouping process and provides a sparser representation for denoising. PCA decomposition is performed on every created group of similar cubes. The eigenvectors obtained from the PCA decomposition are then hard-thresholded to remove the less significant components. Finally, all estimates are combined using a uniform averaging rule and the denoised image is obtained. This method is referred to as Pre-filtered Non-Local PCA (PRI-NL-PCA). The denoised image obtained from PRI-NL-PCA is passed through the grouping phase again. A 4D transform is then applied to the groups, and a Wiener filter is used as the collaborative filter. After the inverse 4D transform, all estimates are aggregated to get the final denoised image. Figure 1 shows the block diagram of the proposed algorithm, including the two stages.

3.1 PRI-NL-PCA stage

This stage consists of three steps: grouping, PCA decomposition, and thresholding. Firstly, the noisy image is pre-filtered using a median filter. By pre-filtering the noisy image prior to grouping, we get more homogeneous groups than by directly grouping similar patches of the noisy image; the resulting sparser representations enable better noise reduction. Coupé has shown that a simple 3D median filter greatly improves the group selection process, especially for medium and high noise levels. A median filter is used for pre-filtering because it can be applied at any point of the image irrespective of the local noise level, and it is very efficient at removing noise [12].

3.2 Non-local PCA denoising

After pre-filtering, groups of similar cubes are formed with a sliding 3D window. For each reference cube, the window slides within a search volume to find the cubes most similar to the current reference cube. For each point x of the image, all sufficiently similar cubes are arranged as row vectors in a matrix X. The similarity between two cubes is measured via the Euclidean distance, i.e., the sum of squared differences between the corresponding intensities of the two cubes; two cubes are considered similar if their distance is smaller than or equal to a threshold Ʈ_match. The similar patches are then arranged in the matrix X, whose N rows correspond to the number of grouped patches and whose K columns correspond to the number of voxels of each 3D patch. Hence X is an N x K matrix.

PCA decomposition is then performed; it works in a lower-dimensional subspace and shows increased accuracy. PCA is a statistical technique that transforms correlated variables into uncorrelated principal components. Due to this de-correlation property, PCA is well suited to image denoising: most of the image detail is preserved in the first few, significant principal components, while noise dominates the later components, so the noise can be removed easily. From the PCA decomposition we obtain eigenvectors, and thresholding is applied to remove the less significant, noise-related components: the components whose standard deviation is less than the threshold Ʈ_PCA are set to zero. We then invert the PCA decomposition and combine all estimates to get the denoised image.
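As a rough illustration, a minimal MATLAB sketch of this hard-thresholding step for one group is given below; X is the N x K group matrix of Section 3.2, tauPCA stands for Ʈ_PCA, and the variable names are ours:

% Non-local PCA hard thresholding of one group X (N patches x K voxels).
mu = mean(X, 1);                          % per-voxel mean of the group
Xc = bsxfun(@minus, X, mu);               % centre the data
C  = (Xc' * Xc) / (size(X, 1) - 1);       % K x K sample covariance matrix
[V, D] = eig(C);                          % principal directions and eigenvalues
lambda = diag(D);                         % eigenvalues = component variances
Y = Xc * V;                               % project patches onto the components
Y(:, sqrt(lambda) < tauPCA) = 0;          % zero the low-variance (noise) components
Xhat = bsxfun(@plus, Y * V', mu);         % invert the PCA decomposition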

3.3 PCA based Noise Estimation

The threshold Ʈ_PCA is set according to the noise level present in the image. Correct noise-level estimation is necessary to select the signal-related components and obtain optimal performance. For a robust noise-level estimator, two cases must be considered. If the set of cubes is extracted from a homogeneous area, then both the mean and the median of the eigenvalues are expected to be close to the noise variance. If, instead, the cubes belong to textured areas or edges, where signal-related components are significant, the mean of the eigenvalues will overestimate the noise variance because it is contaminated by the signal variance; the median of the eigenvalues, however, will still be close to the noise variance. Hence, the median of the eigenvalues is used for noise estimation instead of the mean.


The median of the eigenvalues, given in Eq. 1, is related to the noise variance, so the local noise standard deviation can be calculated from the median of the eigenvalues λ as in [11]:

$\sigma = \beta \sqrt{\mathrm{median}(\lambda)} \qquad (1)$

where β is a correction factor that depends on the ratio between the number of selected patches N and the number of voxels K of each patch; its value is set to 1.16, obtained experimentally for N = K.

This approach to noise estimation holds good for medium and high noise. However, under low-noise conditions in strong edges and textured areas (low group sparseness), it slightly overestimates the noise level, because the variance of the signal can no longer be neglected compared with the variance of the noise. To reduce the effect of the signal on the noise estimate, we use a subset of the eigenvalues from which the first, signal-dominated eigenvalues are removed, and perform the median estimation on this subset. To implement this, the eigenvalues whose standard deviation is more than twice the standard deviation corresponding to the median of the full set of eigenvalues are removed. This new subset is called the trimmed subset of eigenvalues, and the noise standard deviation is estimated as the square root of the median of the trimmed subset, with correction factor β = 1.29 for the new subset [12]. Equation 2 defines it as

$\sigma = \beta \sqrt{\mathrm{median}(\lambda_t)}, \quad \lambda_t = \{\lambda_i \mid \sqrt{\lambda_i} < 2\,\mathrm{median}(\sqrt{\lambda})\} \qquad (2)$
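A minimal MATLAB sketch of this estimator follows, with lambda denoting the vector of eigenvalues of one group and the β values taken from the text:

% Noise standard deviation from the eigenvalues, Eqs. (1)-(2).
sigma1 = 1.16 * sqrt(median(lambda));   % Eq. (1): full set, beta = 1.16 (N = K)
s  = sqrt(lambda);                      % per-component standard deviations
lt = lambda(s < 2 * median(s));         % trimmed subset of Eq. (2)
sigma2 = 1.29 * sqrt(median(lt));       % Eq. (2): trimmed set, beta = 1.29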

3.4 Wiener Filtering stage

Let y_pri denote the denoised image obtained from the first stage. This image from the PRI-NL-PCA stage is passed to the second stage. Grouping of similar cubes is performed again, but here the noise level of the input image is low compared with the original noisy image; we therefore expect superior denoising performance because of the higher sparsification


of the groups. The cubes of voxels are stacked in 4D groups: three dimensions are reserved for the cubes, and the fourth dimension is the direction in which the cubes are stacked.

In the Wiener filtering step, four 1D de-correlating linear transforms are applied separately, one along each dimension. First, the group G_ypri is formed from the partly denoised image; Eq. 3 then defines the coefficients of the empirical Wiener filter [11] as

$W = \frac{\left|\mathcal{T}^{4D}_{wie}\left(G_{y_{pri}}\right)\right|^2}{\left|\mathcal{T}^{4D}_{wie}\left(G_{y_{pri}}\right)\right|^2 + \sigma^2} \qquad (3)$

where $\mathcal{T}^{4D}_{wie}$ is the 4D transform operator comprising four 1D linear transformations. Similarly, we define another group G_z extracted from the noisy image z using the same coordinates of similar cubes as were used to form G_ypri. The groups G_ypri and G_z therefore contain the same cubes, but one group holds partly denoised cubes and the other noisy ones. The spectrum of the noisy group G_z is obtained with the 4D transform operator as $\mathcal{T}^{4D}_{wie}(G_z)$. Finally, the Wiener-filter coefficients and the spectrum of the noisy group are multiplied element by element to implement the coefficient shrinkage $W \cdot \mathcal{T}^{4D}_{wie}(G_z)$. The shrunk spectrum is passed through the inverse 4D transform $(\mathcal{T}^{4D}_{wie})^{-1}$ to produce the estimate of the finally denoised group G_y [11]. Equation 4 defines this group as

$G_y = \left(\mathcal{T}^{4D}_{wie}\right)^{-1}\left(W \cdot \mathcal{T}^{4D}_{wie}\left(G_z\right)\right) \qquad (4)$

At the end, the final estimate is obtained by aggregating the group estimates G_y.
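The paper does not prescribe a specific 4D transform, so the following MATLAB sketch uses a separable 1D DCT along each dimension (dct/idct from the Signal Processing Toolbox) purely as an illustrative stand-in for the operator of Eqs. (3)-(4); all four dimensions of the groups are assumed larger than one:

% Empirical Wiener shrinkage of Eqs. (3)-(4) on matched 4D groups.
function Gy = wienerStage(Gypri, Gz, sigma)
    Spri = sep4(Gypri, @dct);                 % spectrum of the partly denoised group
    Sz   = sep4(Gz,    @dct);                 % spectrum of the matching noisy group
    W    = Spri.^2 ./ (Spri.^2 + sigma^2);    % Wiener coefficients, Eq. (3)
    Gy   = sep4(W .* Sz, @idct);              % shrink and invert transform, Eq. (4)
end

function Y = sep4(X, op)
    % Apply a 1D transform (op) along each of the four dimensions in turn.
    Y = X;
    for d = 1:4
        sz = size(Y);
        Y  = reshape(op(reshape(Y, sz(1), [])), sz);  % transform along dimension 1
        Y  = shiftdim(Y, 1);                          % rotate the next dimension first
    end
end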

4. EXPERIMENTAL RESULTS

Our main objective is to compare the denoising performance of the proposed algorithm with state-of-the-art algorithms. We have therefore implemented two existing denoising algorithms, BM4D [10] and PRI-NL-PCA [11], alongside the proposed method. Performance is measured using PSNR, and the parameters of the proposed method are adjusted for optimal performance. The five important parameters are the cube size L, the search cube size N_s, the step size N_step (the sliding step used to move to the next reference cube), the threshold Ʈ_match for grouping similar cubes, and the threshold Ʈ_PCA applied to the eigenvalues of the PCA decomposition. Table 1 shows the parameter settings for the proposed method.


TABLE 2. DENOISING PERFORMANCE OF THE COMPARED FILTERS (PSNR, dB)

Gaussian noise, standard deviation σ:
Filter          1%     3%     5%     7%     9%     11%    13%    15%    17%
OB-NLM3D        42.47  37.57  34.73  32.82  31.42  30.32  29.40  28.61  27.91
OB-NLM3D-WM     42.52  37.75  35.01  33.13  31.73  30.61  29.68  28.88  28.18
ODCT3D          43.78  37.53  34.89  33.18  31.91  30.90  30.07  29.35  28.73
PRI-NLM3D       44.04  38.26  35.51  33.67  32.37  31.29  30.40  29.65  28.99
BM4D            44.77  38.71  36.30  34.76  33.63  32.72  31.96  31.30  30.73
PRI-NL-PCA      45.84  40.03  37.21  35.22  33.59  32.14  30.86  29.76  28.83
Proposed        45.37  39.54  37.05  35.45  34.25  33.28  32.46  31.75  31.11

Rician noise + VST, standard deviation σ:
Filter          1%     3%     5%     7%     9%     11%    13%    15%    17%
OB-NLM3D        42.48  37.45  34.40  32.26  30.65  29.34  28.23  27.25  26.37
OB-NLM3D-WM     42.53  37.68  34.75  32.66  31.06  29.77  28.68  27.71  26.84
ODCT3D          43.74  37.51  34.79  32.98  31.59  30.47  29.52  28.71  27.98
PRI-NLM3D       44.21  38.20  35.34  33.36  31.90  30.71  29.71  28.88  28.13
BM4D            44.68  38.43  35.86  34.14  32.70  31.40  30.18  29.04  27.96
PRI-NL-PCA      45.70  39.14  35.75  33.17  30.94  29.17  27.75  26.47  25.24
Proposed        45.17  39.07  36.22  34.30  32.77  31.31  30.17  29.14  28.20


TABLE 1. PARAMETER SETTINGS FOR THE PROPOSED ALGORITHM

Parameter               Symbol      Value
Cube size               L           4 x 4 x 4
Search cube size        N_s         7 x 7 x 7
Step size               N_step      3
Similarity threshold    Ʈ_match     0.1
PCA threshold           Ʈ_PCA       2.1 σ

4.1 Simulation Setup

The simulation results for the T1-weighted modality with different values of the noise standard deviation are presented in Table 2. We have compared the denoising performance of the proposed algorithm against block-matching 4D (BM4D) [10], pre-filtered non-local PCA (PRI-NL-PCA) [11], the optimized block-wise non-local means (OB-NLM3D) [4], optimized block-wise non-local means with wavelet mixing (OB-NLM3D-WM) [5], the oracle-based 3D DCT (ODCT3D) [9], and the pre-filtered rotationally invariant non-local means (PRI-NLM3D) [9]. The proposed algorithm achieves the best denoising performance at high noise levels, giving a PSNR improvement of almost 1 dB.
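For reference, the PSNR values in Table 2 can be computed as follows, assuming volumes scaled to an 8-bit intensity range (x is the clean phantom, xhat the denoised estimate):

% PSNR (dB) between ground truth x and denoised volume xhat, both in [0, 255].
mse     = mean((double(x(:)) - double(xhat(:))).^2);
psnr_dB = 10 * log10(255^2 / mse);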

The MR images are available from the Open Access Series of Imaging Studies (OASIS) database [13]. The 3D image volumes are 181 x 217 x 181 voxels with a voxel resolution of 1 mm³, and any noise level can be introduced for the experiments. Noise with a standard deviation ranging from 1% to 17% is added, and the MRI images are corrupted with both Gaussian and Rician noise. Gaussian noise is additive, while Rician noise is generated by adding Gaussian noise to the real and imaginary parts and then


taking the magnitude. Figure 2 shows a cross-section of the BrainWeb phantom corrupted by Gaussian noise with σ = 13% and the results obtained by the proposed algorithm; horizontal, coronal, and sagittal views are shown below.

Figure 2. Denoising results of the proposed method

5. CONCLUSION

This paper has proposed a cascaded scheme: first, we use the NL-PCA method, which comprises PCA decomposition and hard thresholding; second, a Wiener filtering stage further refines the denoised image. Our proposed method has been compared with current state-of-the-art algorithms and shows improved performance in terms of PSNR. The experimental results demonstrate that the proposed algorithm not only outperforms the other denoising algorithms but also attains significantly better visual quality.

6. ACKNOWLEDGMENT

We are grateful to Dr. Alessandro Foi for helping us with the implementation of the BM4D algorithm. We would also like to thank J. V. Manjón for providing the PRI-NL-PCA algorithm.


REFERENCES

[1] A. Buades, B. Coll, and J. M. Morel, “A Review of Image Denoising

Algorithms, with a New One,” Multiscale Modeling & Simulation

Multiscale Model. Simul., vol. 4, no. 2, pp. 490–530, 2005.

[2] J. V. Manjón, J. Carbonell-Caballero, J. Lull, G. García-Martí, L. Martí-Bonmatí, and M. Robles, “MRI denoising using Non-Local Means,” Medical Image Analysis, vol. 12, no. 4, pp. 514–523, 2008.

[3] P. Coupé, P. Yger, and C. Barillot, “Fast Non Local Means

Denoising for 3D MR Images,” Medical Image Computing and

Computer-Assisted Intervention – MICCAI 2006 Lecture Notes in

Computer Science, pp. 33–40, 2006.

[4] P. Coupé, P. Yger, S. Prima, P. Hellier, C. Kervrann, and C. Barillot, “An Optimized Blockwise Nonlocal Means Denoising Filter for 3-D Magnetic Resonance Images,” IEEE Transactions on Medical Imaging, vol. 27, no. 4, pp. 425–441, 2008.

[5] P. Coupé, P. Hellier, S. Prima, C. Kervrann, and C. Barillot, “3D

wavelet subbands mixing for image denoising,” J. Biomed. Imag., vol.

2008, pp.1–11, Jan. 2008.

[6] Y. Gal, A. Mehnert, A. Bradley, K. McMahon, D. Kennedy, and S. Crozier, “Denoising of Dynamic Contrast-Enhanced MR Images Using Dynamic Nonlocal Means,” IEEE Transactions on Medical Imaging, vol. 29, no. 2, pp. 302–310, 2010.

[7] H. Liu, C. Yang, N. Pan, E. Song, and R. Green, “Denoising 3D MR

images by the enhanced non-local means filter for Rician noise,”

Magnetic Resonance Imaging, vol. 28, no. 10, pp. 1485–1496, 2010.

[8] J. Hu, Y. Pu, X. Wu, Y. Zhang, and J. Zhou, “Improved DCT-Based

Nonlocal Means Filter for MR Images Denoising,” Computational and

Mathematical Methods in Medicine, vol. 2012, pp. 1–14, 2012.

[9] J. V. Manjón, P. Coupé, A. Buades, D. L. Collins, and M. Robles,

“New methods for MRI denoising based on sparseness and self-

similarity,” Medical Image Analysis, vol. 16, no. 1, pp. 18–27, 2012.

[10] M. Maggioni, V. Katkovnik, K. Egiazarian, and A. Foi, “Nonlocal Transform-Domain Filter for Volumetric Data Denoising and Reconstruction,” IEEE Transactions on Image Processing, vol. 22, no. 1, pp. 119–133, 2013.


[11] J. V. Manjón, P. Coupé, and A. Buades, “MRI noise estimation and

denoising using non-local PCA,” Medical Image Analysis, vol. 22, no.

1, pp. 35–47, 2015.

[12] P. Coupé, M. Munz, J. V. Manjón, E. S. Ruthazer, and D. L. Collins,

“A CANDLE for a deeper in vivo insight,” Medical Image Analysis, vol.

16, no. 4, pp. 849–864, 2012.

[13] D. S. Marcus, T. H. Wang, J. Parker, J. G. Csernansky, J. C. Morris,

and R. L. Buckner, “Open access series of imaging studies (OASIS):

Crosssectional MRI data in young, middle aged, nondemented, and

demented older adults,” J. Cognit. Neurosci., vol. 22, no. 12, pp. 2677–

2684, 2010.


Editorial Policy and Guidelines for Authors

LGURJCSIT is an open access, peer reviewed quarterly journal published by the LGU Society of Computer Sciences. The Journal publishes original research articles and high quality review papers covering all aspects of Computer Science and Technology.

The following notes set out some general editorial principles. A more detailed style document can be downloaded at www.research.lgu.edu.pk. All queries regarding publications should be addressed to the editor by email at [email protected]. The document must be in Word format; other formats, such as PDF, shall not be accepted.

The format of paper should be as follows:

Title of the study (center aligned, font size 14)

Full name of author(s) (center aligned, font size 10)

Name of Department

Name of Institution

Corresponding author email address.

Abstract

Keywords

Introduction

Literature Review

Theoretical Model/Framework and Methodology

Data analysis/Implementation/Simulation

Results/ Discussion and Conclusion

References.

Headings and sub-headings should be differentiated by numbering sequences, e.g., 1. HEADING (bold, capitals), 1.1 Subheading (italic, bold), etc. The article must be typed in Times New Roman, 12 pt font, 1.5 line spacing, with 1-inch margins on the left and right. The paper should not be longer than 15 pages, including figures, tables, exhibits and bibliography. Tables must have a standard caption at the top, while figure captions appear below the figure; figures and tables should be numbered continuously. Citations must follow the IEEE 2006 style.


LAHORE GARRISON UNIVERSITY

Lahore Garrison University has been established to achieve the goal of

excellence and quality education in minimum possible time. Lahore

Garrison University in the Punjab metropolis city of Lahore is an

important milestone in the history of higher education in Pakistan. In

order to meet global challenges, it is necessary to reach the highest

literacy rates while producing skillful and productive graduates in all

fields of knowledge.

LGU's vision is to prepare a generation that can take the lead and put this

nation on the path to progress and prosperity through applying their

knowledge, skills and dedication. We are committed to help individuals

and organizations in discovering their God-gifted potentials to achieve

ultimate success actualizing the highest standards of efficiency,

effectiveness, excellence, equity, trusteeship and sustainable

development of global human society.

LGU runs Undergraduate, Graduate, Masters, M.Phil. and Ph.D. programs in various disciplines. Our mission is to

serve the society by equipping the upcoming generations with valuable

knowledge and latest professional skills through education and research.

We also aim to evolve new realities and foresight by unfolding new

possibilities. We intend to promote the ethical, cultural and human

values in our participants to make them educated and civilized members

of society.

Contact: For all inquiries, regarding call for papers, submission of research articles and correspondence,

kindly contact at this address:

Sector C, DHA Phase-VI Lahore, Pakistan

Phone: +92- 042-37181823

Email: [email protected]

Copyright @ 2017, Lahore Garrison University, Lahore, Pakistan. All rights

reserved.

Published by: Faculty of Computer Science Lahore Garrison University Lahore, Pakistan