
MULTIPLE FACE DETECTION, COUNTING AND RECOGNITION USING KERNEL PROTOTYPE SIMILARITIES

A PROJECT REPORT PHASE I / PIT

Submitted by

SUBASHRI R (1339003)

In partial fulfillment for the award of the degree of MASTER OF TECHNOLOGY in INFORMATION TECHNOLOGY

DEPARTMENT OF INFORMATION TECHNOLOGY
HINDUSTAN INSTITUTE OF TECHNOLOGY AND SCIENCE
PADUR, CHENNAI 603 103

October 2014

HINDUSTAN UNIVERSITY: PADUR, CHENNAI - 603 103

BONAFIDE CERTIFICATE

Certified that this Phase I project report titled MULTIPLE FACE DETECTION, COUNTING AND RECOGNITION USING KERNEL PROTOTYPE SIMILARITIES is the bonafide work of SUBASHRI R (1339003), who carried out the project work under my supervision. Certified further that, to the best of my knowledge, the work reported here does not form part of any other project or research work on the basis of which a degree or award was conferred on an earlier occasion on this or any other candidate.

HEAD OF THE DEPARTMENT: Dr. Komathy Vimalraj, Professor & HOD, Department of IT, Hindustan University, Padur.

SUPERVISOR: Dr. S. Nagarajan, PhD, Professor, Department of IT, Hindustan University, Padur.

The Project Phase I Viva-Voce Examination is held on _______________

INTERNAL EXAMINER                    EXTERNAL EXAMINER

ABSTRACT

We propose two novel local transform features: local gradient patterns (LGP) and binary histograms of oriented gradients (BHOG). LGP assigns one if the gradient of a neighboring pixel is greater than the average of the eight neighboring gradients, and zero otherwise, which makes the representation robust to local intensity variations along edge components. BHOG assigns one if a histogram bin has a higher value than the average value over all histogram bins, and zero otherwise, which makes computation fast because no further post-processing is needed before SVM classification. Existing automatic people-counting systems are not accurate. Heterogeneous face recognition (HFR) involves matching two face images from alternate imaging modalities, such as an infrared image to a photograph or a sketch to a photograph. Given a pair of face images, the system must output a measure of similarity; the tasks performed by most face recognition systems are face detection, face counting, and face recognition. Detection and identification of human faces have so far been addressed mainly for 2D still images; here, face images are taken from given databases or captured from a camera, and image matching is done using kernel prototype similarities. Our experimental results indicate that the proposed LGP and BHOG features attain accurate detection performance and fast computation time, respectively, that the hybrid feature improves face and human detection performance considerably, and that automatic people counting is performed using a trellis optimization algorithm. The merits of the proposed approach, called prototype random subspace (P-RS), are demonstrated on four different heterogeneous scenarios: 1) near infrared (NIR) to photograph, 2) thermal to photograph, 3) viewed sketch to photograph, and 4) forensic sketch to photograph.

ACKNOWLEDGEMENT

The author's gratitude is due to Dr. K. Sarukesi, Vice Chancellor, Dr. Roy Chowdhury, Dean (Research), Dr. Elizabeth Verghese, Chancellor, Dr. Anand Jacob Verghese, Pro Chancellor, Mr. Ashok Verghese and Dr. Aby Sam, Directors of Hindustan University, for their support and encouragement. The author wishes to express her sincere thanks and gratitude to Dr. Komathy Vimalraj, HOD, and Prof. S. Nagarajan, supervisor, for the technical support, guidance and suggestions during review meetings. The author expresses her thanks to her husband, U. D. Saravanan, son, S. Sai Shashwat Shakthi, daughter, Ms. Subha Jawahar, daughter-in-law, Ms. Rohini Sathyan, son-in-law, Mr. R. Jawahar, and other family members for their moral support and motivation for the successful completion of this research work.

SUBASHRI R

TABLE OF CONTENTS

ABSTRACT
LIST OF FIGURES
LIST OF SYMBOLS AND ABBREVIATIONS

1 INTRODUCTION
  1.1 SYNOPSIS
2 LITERATURE SURVEY
3 SYSTEM ANALYSIS
  3.1 EXISTING SYSTEM
    3.1.1 Existing System Disadvantages
  3.2 PROPOSED SYSTEM
    3.2.1 Proposed System Advantages
  3.3 FEASIBILITY STUDY
4 REQUIREMENT SPECIFICATION
  4.1 SYSTEM REQUIREMENT SPECIFICATION
  4.2 SYSTEM REQUIREMENTS
    4.2.1 Hardware Requirements
    4.2.2 Software Requirements
  4.3 SYSTEM DESIGN AND DEVELOPMENT
5 SOFTWARE DESCRIPTION
  5.1 FEATURES OF DOTNET
  5.2 THE DOTNET FRAMEWORK
  5.3 LANGUAGES SUPPORTED BY DOTNET
  5.4 OBJECTIVES OF DOTNET FRAMEWORK
  5.5 FEATURES OF SQL-SERVER
6 SYSTEM DESIGN
  6.1 ARCHITECTURE DIAGRAM
  6.2 DATAFLOW DIAGRAM
  6.3 USE CASE DIAGRAM
  6.4 SEQUENCE DIAGRAM
  6.5 COLLABORATION DIAGRAM
  6.6 CLASS DIAGRAM
7 TECHNIQUES AND ALGORITHMS
8 MODULES
9 TESTING AND IMPLEMENTATION
  9.1 TESTING
  9.2 TYPES OF TESTING
  9.3 TESTING USED IN THIS PROJECT
10 CONCLUSION
11 REFERENCES

LIST OF SYMBOLS AND ABBREVIATIONS

BHOG - Binary Histogram of Oriented Gradients
CLR - Common Language Runtime
FERET - Facial Recognition Technology
FRVT - Face Recognition Vendor Test
IL - Intermediate Language
LBP - Local Binary Patterns
LGP - Local Gradient Pattern

CHAPTER-1 INTRODUCTION

1.1 SYNOPSIS

The aim of the project is multiple human face detection, counting and recognition using LGP, BHOG and the trellis optimization algorithm. Recognition of human actions is established through the .NET Framework. Face and human detection, counting and recognition is an important topic in the field of computer vision. It has been widely used for practical and real-time applications in many areas such as digital media (cell phones, smart phones, digital cameras), intelligent user interfaces, intelligent visual surveillance, and interactive games. Conventional face and human detection methods usually take the pixel color (or intensity) directly as the information cue. The challenges in designing automated face recognition algorithms are numerous.

CHAPTER-2 LITERATURE SURVEY

Why Use the Face for Recognition?

Biometric-based techniques have emerged as the most promising option for recognizing individuals in recent years since, instead of authenticating people and granting them access to physical and virtual domains based on passwords, PINs, smart cards, plastic cards, tokens, keys and so forth, these methods examine an individual's physiological and/or behavioral characteristics in order to determine and/or ascertain his identity. Passwords and PINs are hard to remember and can be stolen or guessed; cards, tokens, keys and the like can be misplaced, forgotten, purloined or duplicated; magnetic cards can become corrupted and unreadable. However, an individual's biological traits cannot be misplaced, forgotten, stolen or forged. Biometric-based technologies include identification based on physiological characteristics (such as face, fingerprints, finger geometry, hand geometry, hand veins, palm, iris, retina, ear and voice) and behavioral traits (such as gait, signature and keystroke dynamics) [1]. Face recognition appears to offer several advantages over other biometric methods, a few of which are outlined here. Almost all of the other technologies require some voluntary action by the user: the user needs to place his hand on a hand-rest for fingerprinting or hand geometry detection, and has to stand in a fixed position in front of a camera for iris or retina identification; face recognition requires no such cooperation.

The Random Subspace Method for Constructing Decision Forests

Much of the previous attention on decision trees focuses on the splitting criteria and optimization of tree sizes. The dilemma between overfitting and achieving maximum accuracy is seldom resolved. A method to construct a decision tree based classifier is proposed that maintains the highest accuracy on training data and improves on generalization accuracy as it grows in complexity. The classifier consists of multiple trees constructed systematically by pseudorandomly selecting subsets of components of the feature vector, that is, trees constructed in randomly chosen subspaces. The subspace method is compared to single-tree classifiers and other forest construction methods by experiments on publicly available datasets, where the method's superiority is demonstrated. We also discuss independence between trees in a forest and relate that to the combined classification accuracy.
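The core sampling step is easy to picture in code. The C# sketch below is illustrative only (the class and parameter names are assumptions, not code from the surveyed paper): it draws a pseudorandom subset of feature-vector components for one tree of the forest.

    using System;
    using System.Linq;

    static class RandomSubspace
    {
        // Draw a pseudorandom subset of feature indices for one tree, so
        // the tree is trained in a randomly chosen subspace.
        public static int[] SampleSubspace(int featureCount, int subspaceSize, Random rng)
        {
            int[] idx = Enumerable.Range(0, featureCount).ToArray();
            // Partial Fisher-Yates shuffle: after subspaceSize steps, the
            // first subspaceSize slots hold a uniform random subset.
            for (int i = 0; i < subspaceSize; i++)
            {
                int j = rng.Next(i, featureCount);
                int tmp = idx[i]; idx[i] = idx[j]; idx[j] = tmp;
            }
            return idx.Take(subspaceSize).ToArray();
        }
    }

Each tree would then be trained only on the components selected for it, and the trees' votes are combined at classification time.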

Multiresolution Gray-Scale and Rotation Invariant Texture Classification with Local Binary Patterns (IEEE Trans. Pattern Analysis and Machine Intelligence)

This paper presents a theoretically very simple, yet efficient, multiresolution approach to gray-scale and rotation invariant texture classification based on local binary patterns and nonparametric discrimination of sample and prototype distributions. The method is based on recognizing that certain local binary patterns, termed uniform patterns, are fundamental properties of local image texture.

Face Description with Local Binary Patterns: Application to Face Recognition

The face image is divided into several regions from which the LBP feature distributions are extracted and concatenated into an enhanced feature vector to be used as a face descriptor.

Neural Network-Based Face Detection

We present a neural network-based upright frontal face detection system. A retinally connected neural network examines small windows of an image and decides whether each window contains a face. The system arbitrates between multiple networks to improve performance over a single network. We present a straightforward procedure for aligning positive face examples for training. To collect negative examples, we use a bootstrap algorithm, which adds false detections into the training set as training progresses. This eliminates the difficult task of manually selecting nonface training examples, which must be chosen to span the entire space of nonface images. Simple heuristics, such as using the fact that faces rarely overlap in images, can further improve the accuracy. Comparisons with several other state-of-the-art face detection systems are presented, showing that our system has comparable performance in terms of detection and false-positive rates.

Enhanced Local Texture Feature Sets for Face Recognition Under Difficult Lighting Conditions

Difficult lighting conditions are an important problem for face detection and recognition; this paper sets out to solve it with enhanced local texture feature sets.

Filtering for Texture Classification: A Comparative Study

In this paper, we review most major filtering approaches to texture feature extraction and perform a comparative study. Filtering approaches included are Laws masks, ring/wedge filters, dyadic Gabor filter banks, wavelet transforms, wavelet packets and wavelet frames, quadrature mirror filters, discrete cosine transform, eigenfilters, optimized Gabor filters, linear predictors, and optimized finite impulse response filters. The features are computed as the local energy of the filter responses. The effect of the filtering is highlighted, keeping the local energy function and the classification algorithm identical for most approaches. For reference, comparisons with two classical nonfiltering approaches, co-occurrence (statistical) and autoregressive (model based) features, are given. We present a ranking of the tested approaches based on extensive experiments.

Exponential Local Discriminant Embedding and Its Application to Face Recognition

Local discriminant embedding (LDE) has been recently proposed to overcome some limitations of the global linear discriminant analysis method. In the case of a small training data set, however, LDE cannot directly be applied to high-dimensional data. This case is the so-called small-sample-size (SSS) problem. The classical solution to this problem was applying dimensionality reduction on the raw data (e.g., using principal component analysis). In this paper, we introduce a novel discriminant technique called exponential LDE (ELDE). The proposed ELDE can be seen as an extension of the LDE framework in two directions. First, the proposed framework overcomes the SSS problem without discarding the discriminant information that was contained in the null space of the locality preserving scatter matrices associated with LDE. Second, the proposed ELDE is equivalent to transforming original data into a new space by distance diffusion mapping (similar to kernel-based nonlinear mapping), and then, LDE is applied in such a new space. As a result of diffusion mapping, the margin between samples belonging to different classes is enlarged, which is helpful in improving classification accuracy. The experiments are conducted on five public face databases: Yale, Extended Yale, PF01, Pose, Illumination, and Expression (PIE), and Facial Recognition Technology (FERET). The results show that the performances of the proposed ELDE are better than those of LDE and many state-of-the-art discriminant analysis techniques.

Robust Kernel Representation With Statistical Local Features for Face Recognition

Factors such as misalignment, pose variation, and occlusion make robust face recognition a difficult problem. It is known that statistical features such as local binary pattern are effective for local feature extraction, whereas the recently proposed sparse or collaborative representation-based classification has shown interesting results in robust face recognition. In this paper, we propose a novel robust kernel representation model with statistical local features (SLF) for robust face recognition. Initially, multipartition max pooling is used to enhance the invariance of SLF to image registration error. Then, a kernel-based representation model is proposed to fully exploit the discrimination information embedded in the SLF, and robust regression is adopted to effectively handle the occlusion in face images. Extensive experiments are conducted on benchmark face databases, including Extended Yale B, AR (A. Martinez and R. Benavente), multiple pose, illumination, and expression (Multi-PIE), Facial Recognition Technology (FERET), Face Recognition Grand Challenge (FRGC), and Labeled Faces in the Wild (LFW), which have different variations of lighting, expression, pose, and occlusion, demonstrating the promising performance of the proposed method.

Face Recognition: A Literature Review

The task of face recognition has been actively researched in recent years. This paper provides an up-to-date review of major human face recognition research. We first present an overview of face recognition and its applications. Then, a literature review of the most recent face recognition techniques is presented. Description and limitations of face databases which are used to test the performance of these face recognition algorithms are given. A brief summary of the face recognition vendor test (FRVT) 2002, a large scale evaluation of automatic face recognition technology, and its conclusions are also given. Finally, we give a summary of the research results.

A Survey of Face Recognition Techniques

Face recognition presents a challenging problem in the field of image analysis and computer vision, and as such has received a great deal of attention over the last few years because of its many applications in various domains. Face recognition techniques can be broadly divided into three categories based on the face data acquisition methodology: methods that operate on intensity images; those that deal with video sequences; and those that require other sensory data such as 3D information or infra-red imagery. In this paper, an overview of some of the well-known methods in each of these categories is provided and some of the benefits and drawbacks of the schemes mentioned therein are examined. Furthermore, a discussion outlining the incentive for using face recognition, the applications of this technology, and some of the difficulties plaguing current systems with regard to this task has also been provided. This paper also mentions some of the most recent algorithms developed for this purpose and attempts to give an idea of the state of the art of face recognition technology.

CHAPTER-3 SYSTEM ANALYSIS

3.1 EXISTING SYSTEM

The existing system does not use a gradient pattern, and fewer faces are detected. Heterogeneous face recognition involves matching two face images from alternate imaging modalities, such as an infrared image to a photograph or a sketch to a photograph, where the gallery databases are populated with photographs but the probe images are often limited to some alternate modality. Face counting is not accurate.

3.1.1 EXISTING SYSTEM DISADVANTAGES

- Multiple faces and the gradient pattern are not detected correctly.
- Face counting is not accurate.
- Only one filter is used on the image.

3.2 PROPOSED SYSTEM

Multiple faces are detected and recognized using prototype random subspaces, LGP and BHOG. For video sequences, recognition is performed by generating tree-based prototypes and look-up table indexing. Accurate face counting is added. Faces are recognized in video mode as well as live camera mode by making use of the AForge.NET framework.

3.2.1 PROPOSED SYSTEM ADVANTAGES

- Multiple faces are detected.
- Face counting is accurate.
- Four filters are used on the image.

3.3 FEASIBILITY STUDY

All projects are feasible given unlimited resources and infinite time, so it is both necessary and prudent to evaluate the feasibility of the project at the earliest possible time. Feasibility and risk analysis are related in many ways: if project risk is great, evaluating each kind of feasibility listed below becomes equally important. The following feasibility techniques have been used in this project:

- Operational feasibility
- Technical feasibility
- Economic feasibility

Operational Feasibility: The proposed system is beneficial since it provides an information system for face detection and counting that will meet the organization's operating requirements. Multiple face detection and counting is used in security, and the system performs accurate face detection and counting without missing any values.

Technical Feasibility: Technical feasibility centers on the existing computer system (hardware, software, etc.) and to what extent it can support the proposed addition; for example, the current computer may be operating at 80% capacity. Additional hardware (RAM and processor) would increase capacity, but the normal hardware and software configuration is already sufficient, so the system is feasible on this criterion.

Economic Feasibility: Economic feasibility is the most frequently used method for evaluating the effectiveness of a candidate system. More commonly known as cost/benefit analysis, the procedure is to determine the benefits and savings that are expected from a candidate system and compare them with the costs. If the benefits outweigh the costs, the decision is made to design and implement the system; otherwise the system is dropped. The proposed system can be used to analyze detection and counting without requiring any extra equipment or hardware, so it is economically feasible.

CHAPTER-4 REQUIREMENT SPECIFICATION

4.1 SYSTEM REQUIREMENT SPECIFICATION

The software requirements specification is produced at the culmination of the analysis task. The function and performance allocated to software as part of system engineering are refined by establishing a complete information description, a functional representation of system behavior, an indication of performance requirements and design constraints, and appropriate validation criteria.

4.2 SYSTEM REQUIREMENTS

4.2.1 Hardware Requirements:

Processor  - Pentium IV
Speed      - 1.8 GHz
RAM        - 512 MB
Hard Disk  - 80 GB

4.2.2 SoftwareRequirements:

Language         - C# (Visual Studio 2010)
Operating system - Windows XP
Database         - SQL Server 2005

4.3 SYSTEM DESIGN AND DEVELOPMENT

4.3.1 DESCRIPTION

Unified Modeling Language (UML)

UML is a method for describing the system architecture in detail using the blueprint. UML represents a collection of best engineering practices that have proven successful in the modeling of large and complex systems. The UML is a very important part of developing object-oriented software and the software development process. The UML uses mostly graphical notations to express the design of software projects. Using the UML helps project teams communicate, explore potential designs, and validate the architectural design of the software.

Definition: UML is a general-purpose visual modeling language that is used to specify, visualize, construct, and document the artifacts of the software system.

UML is a language: it provides the vocabulary and rules for communication and functions on conceptual and physical representations. So it is a modeling language.

UML Specifying: Specifying means building models that are precise, unambiguous and complete. In particular, the UML addresses the specification of all the important analysis, design and implementation decisions that must be made in developing and deploying a software-intensive system.

UML Visualization: The UML includes both graphical and textual representations. It makes it easy to visualize the system and promotes better understanding.

UML Constructing: UML models can be directly connected to a variety of programming languages and it is sufficiently expressive and free from any ambiguity to permit the direct execution of models.

UML Documenting: UML provides a variety of documents in addition to raw executable code.

Goals of UML: The primary goals in the design of the UML were to:

- Provide users with a ready-to-use, expressive visual modeling language so they can develop and exchange meaningful models.
- Provide extensibility and specialization mechanisms to extend the core concepts.
- Be independent of particular programming languages and development processes.
- Provide a formal basis for understanding the modeling language.
- Encourage the growth of the OO tools market.
- Support higher-level development concepts such as collaborations, frameworks, patterns and components.
- Integrate best practices.

Uses of UML: The UML is intended primarily for software-intensive systems. It has been used effectively for domains such as:

- Enterprise Information Systems
- Banking and Financial Services
- Telecommunications
- Transportation
- Defense/Aerospace
- Retail
- Medical Electronics
- Scientific Fields
- Distributed Web Systems

Rules of UML: The UML has semantic rules for:

NAMES: What things, relationships and diagrams can be called.
SCOPE: The context that gives specific meaning to a name.
VISIBILITY: How those names can be seen and used by others.
INTEGRITY: How things properly and consistently relate to one another.
EXECUTION: What it means to run or simulate a dynamic model.

Building Blocks of UML: The vocabulary of the UML encompasses three kinds of building blocks:

1. Things
2. Relationships
3. Diagrams

Things: Things are the data abstractions that are first-class citizens in a model. Things are of four types:

- Structural Things
- Behavioral Things
- Grouping Things
- Annotational Things

Relationships: Relationships tie the things together. Relationships in the UML are:

- Dependency
- Association
- Generalization
- Specialization

CHAPTER-5 SOFTWARE DESCRIPTION

5. SOFTWARE DESCRIPTION

5.1 Features of DotNet
5.2 The DotNet Framework
5.3 Languages Supported by DotNet
5.4 Objectives of DotNet Framework
5.5 Features of SQL-Server

5.1. FEATURES OF DOTNET

Microsoft .NET is a set of Microsoft software technologies for rapidly building and integrating XML Web services, Microsoft Windows-based applications, and Web solutions. The .NET Framework is a language-neutral platform for writing programs that can easily and securely interoperate. There's no language barrier with .NET: there are numerous languages available to the developer, including Managed C++, C#, Visual Basic and JavaScript. The .NET framework provides the foundation for components to interact seamlessly, whether locally or remotely on different platforms. It standardizes common data types and communications protocols so that components created in different languages can easily interoperate. .NET is also the collective name given to various software components built upon the .NET platform. These will be both products (Visual Studio.NET and Windows.NET Server, for instance) and services (like Passport, .NET My Services, and so on).

5.2. THE .NET FRAMEWORK

The .NET Framework has two main parts:

1. The Common Language Runtime (CLR).
2. A hierarchical set of class libraries.

The CLR is described as the execution engine of .NET. It provides the environment within which programs run. The most important features are:

- Conversion from a low-level assembler-style language, called Intermediate Language (IL), into code native to the platform being executed on.
- Memory management, notably including garbage collection.
- Checking and enforcing security restrictions on the running code.
- Loading and executing programs, with version control and other such features.

The following features of the .NET framework are also worth description:

Managed Code: The code that targets .NET, and which contains certain extra information - metadata - to describe itself. Whilst both managed and unmanaged code can run in the runtime, only managed code contains the information that allows the CLR to guarantee, for instance, safe execution and interoperability.

Managed Data: With managed code comes managed data. The CLR provides memory allocation and deallocation facilities, and garbage collection. Some .NET languages use managed data by default, such as C#, Visual Basic.NET and JScript.NET, whereas others, namely C++, do not. Targeting the CLR can, depending on the language you're using, impose certain constraints on the features available. As with managed and unmanaged code, one can have both managed and unmanaged data in .NET applications - data that doesn't get garbage collected but instead is looked after by unmanaged code.

Common Type System: The CLR uses something called the Common Type System (CTS) to strictly enforce type-safety. This ensures that all classes are compatible with each other, by describing types in a common way. The CTS defines how types work within the runtime, which enables types in one language to interoperate with types in another language, including cross-language exception handling. As well as ensuring that types are only used in appropriate ways, the runtime also ensures that code doesn't attempt to access memory that hasn't been allocated to it.

Common Language Specification The CLR provides built-in support for language interoperability. To ensure that you can develop managed code that can be fully used by developers using any programming language, a set of language features and rules for using them called the Common Language Specification (CLS) has been defined. Components that follow these rules and expose only CLS features are considered CLS-compliant.

THE CLASS LIBRARY

.NET provides a single-rooted hierarchy of classes, containing over 7000 types. The root of the namespace is called System; this contains basic types like Byte, Double, Boolean, and String, as well as Object. All objects derive from System.Object. As well as objects, there are value types. Value types can be allocated on the stack, which can provide useful flexibility. There are also efficient means of converting value types to object types if and when necessary.

The set of classes is pretty comprehensive, providing collections, file, screen, and network I/O, threading, and so on, as well as XML and database connectivity. The class library is subdivided into a number of sets (or namespaces), each providing distinct areas of functionality, with dependencies between the namespaces kept to a minimum.

5.3. LANGUAGES SUPPORTED BY .NET

The multi-language capability of the .NET Framework and Visual Studio .NET enables developers to use their existing programming skills to build all types of applications and XML Web services. The .NET framework supports new versions of Microsoft's old favorites Visual Basic and C++ (as VB.NET and Managed C++), but there are also a number of new additions to the family. Visual Basic .NET has been updated to include many new and improved language features that make it a powerful object-oriented programming language. These features include inheritance, interfaces, and overloading, among others. Visual Basic now also supports structured exception handling, custom attributes and multi-threading.

Visual Basic .NET is also CLS compliant, which means that any CLS-compliant language can use the classes, objects, and components you create in Visual Basic .NET. Managed Extensions for C++ and attributed programming are just some of the enhancements made to the C++ language. Managed Extensions simplify the task of migrating existing C++ applications to the new .NET Framework.

C# is Microsoft's new language. It's a C-style language that is essentially C++ for Rapid Application Development. Unlike other languages, its specification is just the grammar of the language. It has no standard library of its own, and instead has been designed with the intention of using the .NET libraries as its own.

Microsoft Visual J# .NET provides the easiest transition for Java-language developers into the world of XML Web Services and dramatically improves the interoperability of Java-language programs with existing software written in a variety of other programming languages. ActiveState has created Visual Perl and Visual Python, which enable .NET-aware applications to be built in either Perl or Python. Both products can be integrated into the Visual Studio .NET environment. Visual Perl includes support for ActiveState's Perl Dev Kit. Other languages for which .NET compilers are available include:

FORTRAN, COBOL and Eiffel.

Fig. 1: The .NET Framework stack - ASP.NET, XML Web Services and Windows Forms on top of the Base Class Libraries, the Common Language Runtime, and the Operating System.

C#.NET is also compliant with CLS (Common Language Specification) and supports structured exception handling. CLS is set of rules and constructs that are supported by the CLR (Common Language Runtime). CLR is the runtime environment provided by the .NET Framework; it manages the execution of the code and also makes the development process easier by providing services.

C#.NET is a CLS-compliant language. Any objects, classes, or components created in C#.NET can be used in any other CLS-compliant language. In addition, we can use objects, classes, and components created in other CLS-compliant languages in C#.NET. The use of the CLS ensures complete interoperability among applications, regardless of the languages used to create them.

CONSTRUCTORS AND DESTRUCTORS: Constructors are used to initialize objects, whereas destructors are used to destroy them. In other words, destructors are used to release the resources allocated to the object. In C#.NET the Finalize procedure (the destructor) is available. The Finalize procedure is used to complete the tasks that must be performed when an object is destroyed, and it is called automatically when an object is destroyed. In addition, the Finalize procedure can be called only from the class it belongs to or from derived classes.
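A minimal C# sketch of this pairing follows; the FrameBuffer class is a hypothetical example, not part of the project code:

    class FrameBuffer
    {
        private readonly byte[] pixels;

        // Constructor: initializes the object when it is created.
        public FrameBuffer(int size)
        {
            pixels = new byte[size];
        }

        // Finalizer (destructor): runs when the garbage collector destroys
        // the object, giving it a chance to release its resources.
        ~FrameBuffer()
        {
            // Clean-up work would go here.
        }
    }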

GARBAGE COLLECTION

Garbage collection is another new feature in C#.NET. The .NET Framework monitors allocated resources, such as objects and variables. In addition, the .NET Framework automatically releases memory for reuse by destroying objects that are no longer in use. In C#.NET, the garbage collector checks for the objects that are not currently in use by applications. When the garbage collector comes across an object that is marked for garbage collection, it releases the memory occupied by the object.
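A minimal sketch of this behavior; the forced GC.Collect call is for demonstration only, since normally the runtime decides when to collect:

    using System;

    class GcDemo
    {
        static void Main()
        {
            byte[] buffer = new byte[1024 * 1024];
            Console.WriteLine("Allocated {0} bytes.", buffer.Length);

            buffer = null; // no variable references the array any more
            GC.Collect();  // ask the collector to reclaim unreachable objects
        }
    }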

OVERLOADING

Overloading is another feature in C#. Overloading enables us to define multiple procedures with the same name, where each procedure has a different set of arguments. Besides using overloading for procedures, we can use it for constructors and properties in a class.
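A minimal sketch with three hypothetical Area procedures sharing one name; the compiler selects the overload whose argument list matches each call:

    using System;

    class Geometry
    {
        // Square: one argument.
        public static double Area(double side) { return side * side; }

        // Rectangle: two arguments.
        public static double Area(double width, double height) { return width * height; }

        // Triangle by Heron's formula: three arguments.
        public static double Area(double a, double b, double c)
        {
            double s = (a + b + c) / 2.0;
            return Math.Sqrt(s * (s - a) * (s - b) * (s - c));
        }
    }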

MULTITHREADING

C#.NET also supports multithreading. An application that supports multithreading can handle multiple tasks simultaneously; we can use multithreading to decrease the time taken by an application to respond to user interaction.
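A minimal sketch, with a hypothetical ProcessVideo method standing in for a time-consuming task such as scanning video frames:

    using System;
    using System.Threading;

    class ThreadDemo
    {
        static void Main()
        {
            // Run the long task on a worker thread so the main thread can
            // keep responding to user interaction in the meantime.
            Thread worker = new Thread(ProcessVideo);
            worker.Start();
            Console.WriteLine("Main thread stays responsive...");
            worker.Join(); // wait for the worker to finish before exiting
        }

        static void ProcessVideo()
        {
            Thread.Sleep(1000); // stand-in for real frame processing
            Console.WriteLine("Background processing done.");
        }
    }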

STRUCTURED EXCEPTION HANDLING

C#.NET supports structured exception handling, which enables us to detect and remove errors at runtime. In C#.NET, we use try...catch...finally statements to create exception handlers. Using try...catch...finally statements, we can create robust and effective exception handlers to improve the performance of our application.
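A minimal try...catch...finally sketch; the file name is illustrative:

    using System;
    using System.IO;

    class ExceptionDemo
    {
        static void Main()
        {
            try
            {
                // File.OpenRead throws if "input.avi" is absent.
                using (FileStream stream = File.OpenRead("input.avi"))
                {
                    Console.WriteLine("Opened {0} bytes.", stream.Length);
                }
            }
            catch (FileNotFoundException ex)
            {
                Console.WriteLine("Video not found: " + ex.Message);
            }
            finally
            {
                Console.WriteLine("Load attempt finished."); // runs either way
            }
        }
    }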

THE .NET FRAMEWORK The .NET Framework is a new computing platform that simplifies application development in the highly distributed environment of the Internet.

5.4. OBJECTIVES OF .NET FRAMEWORK

1. To provide a consistent object-oriented programming environment whether object code is stored and executed locally, executed locally but Internet-distributed, or executed remotely.
2. To provide a code-execution environment that minimizes software deployment problems and guarantees safe execution of code.
3. To eliminate performance problems.
4. To support different types of applications, such as Windows-based applications and Web-based applications.

5.5. FEATURES OF SQL-SERVER

The OLAP Services feature available in SQL Server version 7.0 is now called SQL Server 2000 Analysis Services. The term OLAP Services has been replaced with the term Analysis Services. Analysis Services also includes a new data mining component. The Repository component available in SQL Server version 7.0 is now called Microsoft SQL Server 2000 Meta Data Services. References to the component now use the term Meta Data Services. The term repository is used only in reference to the repository engine within Meta Data Services.

The database consists of the following types of objects:

1. TABLE
2. QUERY
3. FORM
4. REPORT
5. MACRO

TABLE: A table is a collection of data about a specific topic.

VIEWS OF TABLE: We can work with a table in two views:

1. Design View
2. Datasheet View

Design View: To build or modify the structure of a table, we work in the table design view. We can specify what kind of data the table will hold.

Datasheet View: To add, edit or analyze the data itself, we work in the table's datasheet view mode.

QUERY: A query is a question that is asked of the data. Access gathers data that answers the question from one or more tables. The data that make up the answer is either a dynaset (if you edit it) or a snapshot (which cannot be edited). Each time we run a query, we get the latest information in the dynaset. Access either displays the dynaset or snapshot for us to view, or performs an action on it, such as deleting or updating.

CHAPTER-6 SYSTEM DESIGN

6.1 ARCHITECTURE DIAGRAM

Fig 2:Architecture Diagram

6.2 DATAFLOW DIAGRAM

Fig 3: DataFlow Diagram

6.3 USE CASE DIAGRAM

A use case diagram shows the system's use cases as icons, and their relationships to other use cases and to the actors of the system. In the Unified Modeling Language, a use case diagram is a kind of behavior diagram. A use case is a set of scenarios describing an interaction between a user and a system. A use case diagram displays the relationships among actors and use cases.

Fig 4: Use case Diagram

6.4 SEQUENCE DIAGRAM

Sequence diagrams are an easy and intuitive way of describing the behavior of a system by viewing the interaction between the system and its environment. A sequence diagram shows an interaction arranged in time sequence.

Fig 5: Sequence Diagram

6.5 COLLABORATION DIAGRAM

A collaboration diagram represents a collaboration, which is a set of objects related in a particular context, and an interaction, which is a set of messages exchanged among the objects within the collaboration to achieve a desired outcome.

Fig 6: Collaboration Diagram

6.6 CLASS DIAGRAM

A class diagram in the Unified Modeling Language (UML) is a type of static structure diagram that describes the structure of a system by showing the system's classes, their attributes, and the relationships between the classes.

Fig 7: Class Diagram

CHAPTER-7 TECHNIQUES AND ALGORITHMS

Techniques & Algorithms:

- Local Binary Pattern
- Local Gradient Pattern
- Binary Histograms of Oriented Gradients
- Trellis Optimization Algorithm
- AdaBoost Algorithm
- Support Vector Machine
- Heterogeneous Face Recognition

Binary Histogram of Oriented Gradients: BHOG features are descriptors used in computer vision and image processing for the purpose of object detection. The technique counts occurrences of gradient orientations in localized portions of an image. This method is similar to edge orientation histograms and shape contexts, but differs in that it is computed on a dense grid of uniformly spaced cells and uses overlapping local contrast normalization for improved accuracy.
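As a rough illustration of the binarization step that distinguishes BHOG from plain HOG, the C# sketch below (an assumption about the details, not the project's code) turns one already-computed cell histogram into a binary code; because each bin is simply compared against the mean of all bins, no block normalization is needed afterwards:

    static class Bhog
    {
        // Binarize one HOG cell histogram: bit i is 1 when bin i is above
        // the average of all bins, 0 otherwise (assumes at most 32 bins).
        public static int Binarize(double[] histogram)
        {
            double sum = 0;
            for (int i = 0; i < histogram.Length; i++) sum += histogram[i];
            double mean = sum / histogram.Length;

            int code = 0;
            for (int i = 0; i < histogram.Length; i++)
                if (histogram[i] > mean) code |= 1 << i;
            return code;
        }
    }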

Support Vector Machine:

The final step in object recognition using Histogram of Oriented Gradients descriptors is to feed the descriptors into some recognition system based on supervised learning. The Support Vector Machine classifier is a binary classifier which looks for an optimal hyperplane as a decision function. Once trained on images containing some particular object, the SVM classifier can make decisions regarding the presence of an object, such as a human being, in additional test images.
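At detection time the decision reduces to the sign of a dot product. The sketch below shows only that stage; the weight vector w and bias b are assumed to come from offline training on face and non-face descriptors:

    static class LinearSvm
    {
        // Decision function of a trained linear SVM: sign of w.x + b.
        public static bool Classify(double[] descriptor, double[] w, double b)
        {
            double score = b;
            for (int i = 0; i < descriptor.Length; i++)
                score += w[i] * descriptor[i];
            return score > 0; // true: window classified as containing the object
        }
    }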

Ada-boost Algorithm: This algorithm is used to detect faces quickly. It is used in the same way as the support vector machine process.

Local Gradient Pattern:

LGP is one of the face detection techniques used. Each bit of the LGP code is assigned the value one if the gradient of the corresponding neighboring pixel is greater than the average of the eight neighboring gradients, and zero otherwise. The LGP representation is insensitive to global intensity variations, like other representations such as local binary patterns (LBP), and also to local intensity variations along edge components, which consistently reduces false positive edge detections.
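A minimal sketch of the LGP operator as described above, on a grayscale image held in a 2D byte array; border handling is left to the caller:

    using System;

    static class Lgp
    {
        // Offsets of the eight neighbors, clockwise from the top-left.
        static readonly int[] dy = { -1, -1, -1, 0, 1, 1, 1, 0 };
        static readonly int[] dx = { -1, 0, 1, 1, 1, 0, -1, -1 };

        // LGP code of the pixel at (y, x); callers must skip border pixels.
        public static byte Code(byte[,] img, int y, int x)
        {
            int[] g = new int[8];
            int sum = 0;
            for (int i = 0; i < 8; i++)
            {
                // Neighboring gradient: absolute difference from the center.
                g[i] = Math.Abs(img[y + dy[i], x + dx[i]] - img[y, x]);
                sum += g[i];
            }
            double avg = sum / 8.0;

            byte code = 0;
            for (int i = 0; i < 8; i++)
                if (g[i] > avg) code |= (byte)(1 << i); // 1 when above average
            return code;
        }
    }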

Local Binary Pattern: LBP is a simple yet very efficient texture operator which labels the pixels of an image by thresholding the neighborhood of each pixel and treating the result as a binary number. Due to its discriminative power and computational simplicity, the LBP texture operator has become a popular approach in various applications. It can be seen as a unifying approach to the traditionally divergent statistical and structural models of texture analysis.
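For comparison, the basic 3x3 LBP operator differs from the LGP sketch above only in what is thresholded - raw neighbor intensities against the center pixel, rather than gradients against their average:

    static class Lbp
    {
        static readonly int[] dy = { -1, -1, -1, 0, 1, 1, 1, 0 };
        static readonly int[] dx = { -1, 0, 1, 1, 1, 0, -1, -1 };

        // Basic 3x3 LBP: threshold each neighbor against the center pixel
        // and pack the eight results into one byte label.
        public static byte Code(byte[,] img, int y, int x)
        {
            byte code = 0;
            for (int i = 0; i < 8; i++)
                if (img[y + dy[i], x + dx[i]] >= img[y, x])
                    code |= (byte)(1 << i);
            return code;
        }
    }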

Trellis Optimization Algorithm: A trellis optimization algorithm is used for sequence estimation based on multiple Texel camera measurements. The number of states in the trellis grows exponentially with the number of persons currently at the camera locations.
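A hedged Viterbi-style sketch of this idea: states are candidate person counts per frame, a per-frame emission cost scores each count against the measurement, and a transition penalty discourages implausible count jumps. The cost terms here are illustrative assumptions, not the project's actual model:

    using System;

    static class TrellisCounter
    {
        // emissionCost[t][s] is the cost of count s given the frame-t
        // measurement; jumpPenalty discourages large count changes
        // between consecutive frames.
        public static int[] MostLikelyCounts(double[][] emissionCost, double jumpPenalty)
        {
            int frames = emissionCost.Length;
            int states = emissionCost[0].Length;
            double[,] cost = new double[frames, states];
            int[,] back = new int[frames, states];

            for (int s = 0; s < states; s++)
                cost[0, s] = emissionCost[0][s];

            for (int t = 1; t < frames; t++)
                for (int s = 0; s < states; s++)
                {
                    cost[t, s] = double.MaxValue;
                    for (int p = 0; p < states; p++)
                    {
                        double c = cost[t - 1, p]
                                 + jumpPenalty * Math.Abs(s - p)
                                 + emissionCost[t][s];
                        if (c < cost[t, s]) { cost[t, s] = c; back[t, s] = p; }
                    }
                }

            // Backtrack from the cheapest final state to get the sequence.
            int[] path = new int[frames];
            int best = 0;
            for (int s = 1; s < states; s++)
                if (cost[frames - 1, s] < cost[frames - 1, best]) best = s;
            for (int t = frames - 1; t >= 0; t--)
            {
                path[t] = best;
                best = back[t, best];
            }
            return path;
        }
    }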

Heterogeneous Face Recognition: Prototype random subspace (P-RS) is used for heterogeneous face recognition; P-RS matches similarities between images. Four types of filters are used:

1. Near infrared
2. Thermal to infrared
3. Viewed sketch
4. Forensic sketch
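The kernel prototype idea behind P-RS can be sketched as follows: an image's feature vector is re-expressed as its kernel similarities to a set of prototype subjects, so probe and gallery images from different modalities meet in a common prototype space. The RBF kernel and Euclidean distance below are assumptions for illustration, not the project's exact choices:

    using System;

    static class KernelPrototype
    {
        // Re-express feature vector x as its kernel similarities to a set
        // of prototype subjects; two images are then compared via their
        // prototype-similarity vectors rather than their raw features.
        public static double[] Represent(double[] x, double[][] prototypes, double gamma)
        {
            double[] k = new double[prototypes.Length];
            for (int i = 0; i < prototypes.Length; i++)
            {
                double d2 = 0;
                for (int j = 0; j < x.Length; j++)
                {
                    double diff = x[j] - prototypes[i][j];
                    d2 += diff * diff;
                }
                k[i] = Math.Exp(-gamma * d2); // similarity of x to prototype i
            }
            return k;
        }
    }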

CHAPTER-8 MODULES

Modules:

- Credential Creation
- Color Code Authentication
- Face Detection, Counting & Recognition
- Image Filter
- Image Comparison

Credential Creation: Custom authentication schemes should set the Authenticated property to true to indicate that a user has been authenticated. When a user submits his or her login information, the Login control first raises the LoggingIn event, then the Authenticate event, and finally the LoggedIn event.
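A minimal Web Forms sketch of this flow, using the Login control's Authenticate event; IsValidUser is a hypothetical stand-in for the project's actual credential check against its database:

    using System.Web.UI.WebControls;

    public partial class LoginPage : System.Web.UI.Page
    {
        // Fires after LoggingIn and before LoggedIn; setting Authenticated
        // to true tells the control the credentials were accepted.
        protected void Login1_Authenticate(object sender, AuthenticateEventArgs e)
        {
            e.Authenticated = IsValidUser(Login1.UserName, Login1.Password);
        }

        // Placeholder only: the real check would query the user table.
        private bool IsValidUser(string userName, string password)
        {
            return userName == "demo" && password == "demo";
        }
    }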

Color Code Authentication: This is an alternate authentication method to verify the user. In this type of authentication the user has to solve a puzzle based on a matrix method.

Face Detection, counting & Recognition:

This module performs normal human face detection, counting and recognition in video as well as in live streaming, using LBP, LGP and the trellis optimization algorithm.
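A hedged sketch of the live camera mode using AForge.NET's video capture classes; ProcessFrame is a hypothetical stand-in for the detection, counting and recognition pipeline:

    using System;
    using System.Drawing;
    using AForge.Video;
    using AForge.Video.DirectShow;

    class LiveCamera
    {
        static void Main()
        {
            // Enumerate capture devices and open the first camera.
            FilterInfoCollection devices =
                new FilterInfoCollection(FilterCategory.VideoInputDevice);
            VideoCaptureDevice camera =
                new VideoCaptureDevice(devices[0].MonikerString);

            camera.NewFrame += delegate(object sender, NewFrameEventArgs e)
            {
                // Clone the frame: AForge reuses the Bitmap after the handler.
                using (Bitmap frame = (Bitmap)e.Frame.Clone())
                {
                    ProcessFrame(frame); // hypothetical detection/counting step
                }
            };

            camera.Start();
            Console.ReadKey();      // stream until a key is pressed
            camera.SignalToStop();
        }

        static void ProcessFrame(Bitmap frame)
        {
            // Stand-in for the LGP/BHOG detection, counting and
            // recognition pipeline applied to each live frame.
        }
    }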

Image Filter

Four types of filters are used:

1. Near infrared
2. Thermal to infrared
3. Viewed sketch
4. Forensic sketch

Near Infrared: The use of near infrared (NIR) imaging brings a new dimension to face detection and recognition; an NIR-based face detection method is presented.

Thermal to Infrared: For face recognition, this filter enables accurate identification under variable illumination conditions.

Forensic Sketch: A forensic technique that has been routinely used in criminal investigations.

Viewed Sketch: The face recognition algorithm is a novel way of helping criminal searches by accurately matching the features of the picture against the viewed sketch.

Image Comparison: This module compares the frames to find the exact human and his action.

CHAPTER-9 TESTING AND IMPLEMENTATION

9.1 TESTING

Testing is the process of executing a program with the intent of finding errors. Testing presents an interesting anomaly for software engineering. The goal of software testing is to convince system developers and customers that the software is good enough for operational use. Testing is a process intended to build confidence in the software. Testing is a set of activities that can be planned in advance and conducted systematically. Software testing is often referred to as verification and validation.

9.2 TYPES OF TESTING

The various types of testing are:

- White Box Testing
- Black Box Testing
- Alpha Testing
- Beta Testing
- WinRunner and LoadRunner

WHITE BOX TESTING: It is also called glass-box testing. It is a test case design method that uses the control structure of the procedural design to derive test cases. Using white box testing methods, the software engineer can derive test cases that:

1. Guarantee that all independent paths within a module have been exercised at least once.
2. Exercise all logical decisions on their true and false sides.

BLACK BOX TESTING: It is also called behavioral testing. It focuses on the functional requirements of the software. It is a complementary approach that is likely to uncover a different class of errors than white box testing. Black box testing enables a software engineer to derive sets of input conditions that will fully exercise all functional requirements for a program.

ALPHA TESTING: Alpha testing is the software prototype stage when the software is first able to run. It will not have all the intended functionality, but it will have core functions and will be able to accept inputs and generate outputs. An alpha test usually takes place in the developer's offices on a separate system.

BETA TESTING:The beta test is a live application of the software in an environment that cannot be controlled by the developer. The beta test is conducted at one or more customer sites by the end user of the software.

WINRUNNER & LOADRUNNER: We use WinRunner as a functional testing tool operating at the GUI layer, as it allows us to record and play back user actions from a vast variety of user applications as if a real user had manually executed those actions.

LOADRUNNER TESTING: With LoadRunner, you can obtain an accurate picture of end-to-end system performance and verify that new or upgraded applications meet specified performance requirements.

9.3 TESTING USED IN THIS PROJECT:

SYSTEM TESTING: Testing and debugging of programs is one of the most critical aspects of computer programming; without programs that work, the system would never produce the output for which it was designed. Testing is best performed when users and developers are asked to assist in identifying errors and bugs. Sample data are used for testing; it is not the quantity but the quality of the data that matters in testing. Testing is aimed at ensuring that the system works accurately and efficiently before live operation commences.

UNIT TESTING: In this testing we test each module individually and then integrate it with the overall system. Unit testing focuses verification efforts on the smallest unit of software design, the module; it is also known as module testing. The modules of the system are tested separately. This testing is carried out during the programming stage itself. In this testing step each module is found to work satisfactorily with regard to the expected output from the module. There are also validation checks for fields. This makes it easy to find errors and debug the system.

VALIDATION TESTING: At the culmination of black box testing, the software is completely assembled as a package, interfacing errors have been uncovered and corrected, and a final series of software tests, the validation tests, begins. Validation testing can be defined in many ways, but a simple definition is that validation succeeds when the software functions in a manner that can be reasonably expected by the customer. After validation tests have been conducted, one of two possible conditions exists: the function or performance characteristics conform to the specification and are accepted, or a deviation from the specification is uncovered and a deficiency list is created.

CHAPTER-10 CONCLUSION

We conclude that the proposed local transform features and their hybrid feature are very effective for face detection in terms of performance and operating speed using LBP, LGP and BHOG. The trellis optimization method is used to estimate the face count. The proposed method leads to excellent matching accuracies across four different HFR scenarios (near infrared, thermal infrared, viewed sketch, and forensic sketch). Results were compared against a leading commercial face recognition engine.

CHAPTER-11 REFERENCES

1. A. K. Jain, B. Klare, and U. Park, "Face Matching and Retrieval: Applications in Forensics," IEEE Multimedia, 19(1):20-28, 2012.
2. X. Wang and X. Tang, "Dual-Space Linear Discriminant Analysis for Face Recognition," Proc. 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2004), vol. 2, pp. 564-569, 2004.
3. X. Chen, P. J. Flynn, and K. W. Bowyer, "IR and Visible Light Face Recognition," Computer Vision and Image Understanding, 99(3):332-358, 2005.
4. J. Lu, K. N. Plataniotis, and A. N. Venetsanopoulos, "Face Recognition Using Kernel Direct Discriminant Analysis Algorithms," IEEE Transactions on Neural Networks, 14(1):117-126, 2003.
5. D. Pereira, "Face Recognition Using Uncooled Infrared Imaging," Electrical Engineer Thesis, Naval Postgraduate School, Monterey, CA, 2002.
6. C. K. Lee, "Infrared Face Recognition," MSEE Thesis, Naval Postgraduate School, Monterey, CA, 2004.