Hand Gesture Recognition

ACKNOWLEDGEMENT

A project is a creative work of many minds. Proper synchronization between individuals is a must for any project to be completed successfully.

We owe deep gratitude to our guide, Mr. Sutte Muneshkumar. He rendered us valuable guidance with a touch of inspiration and motivation, and he guided us through several substantial hurdles by giving plenty of early ideas, which finally resulted in the present work.

Our foremost thanks to Prof. Bhavesh Patel, Head of the Computer Technology Department, who provided every facility for making and completing this project smoothly.

Finally, we would also like to express our thanks to all those who helped us directly or indirectly in the successful completion of this project.

NEEL GALA
KAILASH BHANUSHALI
HANISH SAVLA


    PREFACE

Gesture recognition can be seen as a way for computers to begin to understand human body language, thus building a richer bridge between machines and humans than primitive text user interfaces or even GUIs (graphical user interfaces), which still limit the majority of input to the keyboard and mouse. Gesture recognition enables humans to interface with machines (HMI) and interact naturally without any mechanical devices.

Gesture recognition can be conducted with techniques from computer vision and image processing.

[Figure: Hand Gesture Recognition Movement]


Index

CHAPTER 1: INTRODUCTION
1.1 INTRODUCTION
1.2 PREVIOUS RELATED WORK
1.3 PROBLEM STATEMENT
1.4 CONCLUSION

CHAPTER 2: LITERATURE SURVEY
2.1 INTRODUCTION
2.2 FEATURES OF PROGRAMMING LANGUAGE USED
2.3 VISION BASED ANALYSIS
2.4 PERCEPTRON LEARNING RULE
2.5 CONCLUSION

CHAPTER 3: SYSTEM ANALYSIS AND DESIGN
3.1 INTRODUCTION
3.2 ALGORITHMS
3.3 USE CASE DIAGRAM
3.4 FLOW CHARTS
3.5 BLOCK DIAGRAM
3.6 HARDWARE REQUIREMENTS
3.7 SOFTWARE REQUIREMENTS
3.8 CONCLUSION

CHAPTER 4: IMPLEMENTATION
4.1 INTRODUCTION
4.2 DATABASE CONNECTIVITY
4.3 PHASES OF IMPLEMENTATION
4.4 SNAPSHOTS
4.5 CONCLUSION

CHAPTER 5: FUTURE WORK AND REFERENCES
5.1 FUTURE WORK
5.2 REFERENCES
5.3 CONCLUSION

FIGURES CONTAINED IN THE DOCUMENT
TABLES CONTAINED IN THE DOCUMENT


CHAPTER ONE

    INTRODUCTION


    1.1 Introduction

Fig 1: A child's hand location and movement being detected

In the present-day framework of interactive, intelligent computing, efficient human-computer interaction is assuming utmost importance. Gesture recognition can be termed an approach in this direction: it is the process by which the gestures made by the user are recognized by the receiver.

Gestures are expressive, meaningful body motions involving physical movements of the fingers, hands, arms, head, face, or body with the intent of:

conveying meaningful information, or
interacting with the environment.

They constitute one interesting small subspace of possible human motion. A gesture may also be perceived by the environment as a compression technique for the information to be transmitted elsewhere and subsequently reconstructed by the receiver.

Gesture recognition can be seen as a way for computers to begin to understand human body language, thus building a richer bridge between machines and humans than primitive text user interfaces or even GUIs (graphical user interfaces), which still limit the majority of input to the keyboard and mouse. Gesture recognition enables humans to interface with machines (HMI) and interact naturally without any mechanical devices.

Gesture recognition can be conducted with techniques from computer vision and image processing.


Why do we need Gesture Recognition?

The goal of virtual environments (VE) is to provide natural, efficient, powerful, and flexible interaction. Gesture as an input modality can help meet these requirements, because human gestures are natural and flexible, and may be efficient and powerful, especially as compared with alternative interaction modes.

The traditional two-dimensional (2D), keyboard- and mouse-oriented graphical user interface (GUI) is not well suited for virtual environments. Synthetic environments provide the opportunity to utilize several different sensing modalities and technologies and to integrate them into the user experience. Devices which sense body position and orientation, direction of gaze, speech and sound, facial expression, galvanic skin response, and other aspects of human behavior or state can be used to mediate communication between the human and the environment. Combinations of communication modalities and sensing devices can produce a wide range of unimodal and multimodal interface techniques. The potential for these techniques to support natural and powerful interfaces for communication in VEs appears promising.

Gesture is used for control and navigation in CAVEs (Cave Automatic Virtual Environments) and in other VEs, such as smart rooms, virtual work environments, and performance spaces. In addition, gesture may be perceived by the environment in order to be transmitted elsewhere (e.g., as a compression technique, to be reconstructed at the receiver). Gesture recognition may also influence, intentionally or unintentionally, a system's model of the user's state. Gesture may also be used as a communication backchannel (i.e., visual or verbal behaviors such as nodding, saying something, or raising a finger to indicate the desire to interrupt) to indicate agreement, participation, attention, conversation turn-taking, etc. Clearly the position and orientation of each body part (the parameters of an articulated body model) would be useful, as well as features derived from those measurements, such as velocity and acceleration. Facial expressions are very expressive. More subtle cues such as hand tension, overall muscle tension, locations of self-contact, and even pupil dilation may be of use.

What are the Different types of Gesture Recognition?

Gestures can be static (the user assumes a certain pose or configuration) or dynamic (with prestroke, stroke, and poststroke phases). Some gestures have both static and dynamic elements, as in sign languages. The automatic recognition of natural continuous gestures requires their temporal segmentation: the start and end points of a gesture, in terms of the frames of movement, both in time and in space, must be specified. Sometimes a gesture is also affected by the context of preceding as well as following gestures. Moreover, gestures are often language- and culture-specific.

Gesture recognition is useful for processing information from humans that is not conveyed through speech or typing, and there are various kinds of gestures which a computer can identify. Applications include the following:

    Sign Language Recognition: Just as speech recognition can transcribe speech to text, certain types of gesture recognition software can transcribe the symbols represented through sign language into text.

For Socially Assistive Robotics: By using proper sensors (accelerometers and gyros) worn on the body of a patient, and by reading the values from those sensors, robots can assist in patient rehabilitation. The best example is stroke rehabilitation.

    Directional Indication through Pointing: Pointing has a very specific purpose in our society, to reference an object or location based on its position relative to ourselves. The use of gesture recognition to determine where a person is pointing is useful for identifying the context of statements or instructions. This application is of particular interest in the field of robotics.

    Control through Facial Gestures: Controlling a computer through facial gestures is a useful application of gesture recognition for users who may not physically be able to use a mouse or keyboard. Eye tracking in particular may be of use for controlling cursor motion or focusing on elements of a display.

    Alternative Computer Interfaces: Foregoing the traditional keyboard and mouse setup to interact with a computer, strong gesture recognition could allow users to accomplish frequent or common tasks using hand or face gestures to a camera.

    Immersive Game Technology: Gestures can be used to control interactions within video games to try and make the game player's experience more interactive or immersive.

    Virtual Controllers: For systems where the act of finding or acquiring a physical controller could require too much time, gestures can be used as an alternative control mechanism. Controlling secondary devices in a car, or controlling a television set are examples of such usage.

    Affective Computing: In affective computing, gesture recognition is used in the process of identifying emotional expression through computer systems.

Remote Control: Through the use of gesture recognition, "remote control with the wave of a hand" of various devices is possible. The signal must not only indicate the desired response, but also which device is to be controlled.


Gestures can broadly be of the following types:

Hand and Arm Gestures: Recognition of hand poses, sign languages, and entertainment applications (allowing children to play and interact in virtual environments).

Head and Face Gestures: Some examples are:

a) Nodding or shaking of the head;
b) Direction of eye gaze;
c) Raising the eyebrows;
d) Opening the mouth to speak;
e) Winking;
f) Flaring the nostrils;
g) Looks of surprise, happiness, disgust, fear, anger, sadness, contempt, etc.

Body Gestures: Involvement of full body motion, as in:

a) Tracking movements of two people interacting outdoors;
b) Analyzing movements of a dancer for generating matching music and graphics;
c) Recognizing human gaits for medical rehabilitation and athletic training.

There are many classifications of gestures, such as:

Intransitive Gestures: the ones that have a universal language value, especially for the expression of affective and aesthetic ideas. Such gestures can be indicative, exhortative, imperative, rejective, etc.

Transitive Gestures: the ones that are part of an uninterrupted sequence of interconnected structured hand movements, adapted in time and space, with the aim of completing a program, such as prehension.

The classification can also be based on a gesture's function:

Semiotic: to communicate meaningful information.
Ergotic: to manipulate the environment.
Epistemic: to discover the environment through tactile experience.

The different gestural devices can also be classified as haptic or non-haptic (haptic means relative to contact).

Typically, the meaning of a gesture can depend on the following:

Spatial Information: where it occurs;
Pathic Information: the path it takes;
Symbolic Information: the sign it makes;
Affective Information: its emotional quality.

What are the different Requirements and Challenges?

The main requirement for a gesture interface is the tracking technology used to capture gesture inputs and process them. Gesture-only interfaces with a syntax of many gestures typically require precise pose tracking.

A common technique for hand pose tracking is to instrument the hand with a glove equipped with a number of sensors which provide information about hand position, orientation, and flex of the fingers. The first commercially available hand tracker was the Dataglove. Although instrumented gloves provide very accurate results, they are expensive and encumbering.

Computer vision and image-based gesture recognition techniques can be used to overcome some of these limitations. There are two different approaches to vision-based gesture recognition: model-based techniques, which try to create a three-dimensional model of the user's pose and use this for recognition, and image-based techniques, which calculate recognition features directly from the image of the pose.

Effective gesture interfaces can be developed which respond to natural gestures, especially dynamic motion. Such a system must respond to user position using two proximity sensors, one vertical and the other horizontal. There must be a direct mapping of the motion to continuous feedback, enabling the user to quickly build a mental model of how to use the device.

There are many challenges associated with the accuracy and usefulness of gesture recognition software. For image-based gesture recognition there are limitations on the equipment used and on image noise. Images or video must be under consistent lighting, or in the same location. Items in the background or distinct features of the users should not make recognition difficult.

The variety of implementations for image-based gesture recognition may also cause issues for the viability of the technology in general usage. For example, an algorithm calibrated for one camera may not work for a different camera. These criteria must be considered for the viability of the technology. The amount of background noise, which causes tracking and recognition difficulties, especially when occlusions (partial and full) occur, must be minimized. Furthermore, the distance from the camera, and the camera's resolution and quality, which cause variations in recognition accuracy, should be considered. In order to capture human gestures with visual sensors, robust computer vision methods are also required, for example for hand tracking and hand posture recognition, or for capturing movements of the head, facial expressions, or gaze direction.

For the past few years, the common computer input devices have not changed much: communication with computers is still limited to the mouse, keyboard, trackball, webcam, light pen, etc. This is because the existing input devices are adequate for most of the functions a computer is able to perform. On the other hand, new applications and software are constantly being introduced to the market, and this software performs multiple functions using just the common input devices.

Vision-based interfaces are feasible and popular at this moment because the computer is able to communicate with the user through a webcam. The user can give commands to the computer simply by performing actions in front of the webcam, without typing on a keyboard or clicking a mouse button. Hence, users are able to perform human-machine interaction (HMI) with these more user-friendly features. Eventually, this will enable new commands that are not possible with current computer input devices.

Lately, there has been a surge of interest in recognizing human hand gestures. Hand gesture recognition has several applications, such as computer games, gaming machines, mouse replacement, and machinery control (e.g., cranes, surgery machines). Moreover, controlling computers via hand gestures can make many applications more intuitive than using a mouse, keyboard, or other input devices.

    Background

Research on hand gestures can be classified into three categories. The first category, glove-based analysis, employs sensors (mechanical or optical) attached to a glove that transduces finger flexions into electrical signals for determining the hand posture. The relative position of the hand is determined by an additional sensor, normally a magnetic or acoustic sensor attached to the glove. For some dataglove applications, look-up table software toolkits are provided with the glove for hand posture recognition. The second category, vision-based analysis, is based on the way human beings perceive information about their surroundings, yet it is probably the most difficult to implement in a satisfactory way. Several different approaches have been tested so far. One is to build a three-dimensional model of the human hand. The model is matched to images of the hand from one or more cameras, and parameters corresponding to palm orientation and joint angles are estimated. These parameters are then used to perform gesture classification. A hand gesture analysis system based on a three-dimensional hand skeleton model with 27 degrees of freedom was developed by Lee and Kunii. They incorporated five major constraints based on human hand kinematics to reduce the model parameter space search. To simplify the model matching, specially marked gloves were used. The third category, analysis of drawing gestures, usually involves the use of a stylus as an input device; analysis of drawing gestures can also lead to recognition of written text.

The vast majority of hand gesture recognition work has used mechanical sensing, most often for direct manipulation of a virtual environment and occasionally for symbolic communication. Sensing the hand posture mechanically has a range of problems, however, including reliability, accuracy, and electromagnetic noise. Visual sensing has the potential to make gestural interaction more practical, but potentially embodies some of the most difficult problems in machine vision: the hand is a non-rigid object and, even worse, self-occlusion is very common.

Full ASL recognition systems (words, phrases) incorporate datagloves. Takashi and Kishino discuss a Dataglove-based system that could recognize 34 of the 46 Japanese gestures (user dependent) using a joint angle and hand orientation coding technique. From their paper, it seems the test user made each of the 46 gestures 10 times to provide data for principal component and cluster analysis. A separate test set was created from five iterations of the alphabet by the user, with each gesture well separated in time. While these systems are technically interesting, they suffer from a lack of training data. Excellent work has been done in support of machine sign language recognition by Sperling and Parish, who have done careful studies on the bandwidth necessary for a sign conversation using spatially and temporally sub-sampled images. Point-light experiments (where lights are attached to significant locations on the body and just these points are used for recognition) have been carried out by Poizner. Most systems to date study isolated/static gestures; in most cases these are fingerspelling signs.

    Object Recognition

    Large Object Tracking

In some interactive applications, the computer needs to track the position or orientation of a hand that is prominent in the image. Relevant applications might be computer games or interactive machine control. In such cases, a description of the overall properties of the image may be adequate. Image moments, which are fast to compute, provide a very coarse summary of the global averages of orientation and position. If the hand is on a uniform background, this method can distinguish hand positions and simple pointing gestures. The large-object-tracking method makes use of a low-cost detector/processor to quickly calculate moments: the artificial retina chip. This chip combines image detection with some low-level image processing (named artificial retina by analogy with the combined abilities of the human retina). The chip can compute various functions useful in fast algorithms for interactive graphics applications.
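
Since the project is developed in C#, the moment computation can be sketched as follows. This is a minimal illustration assuming the frame has already been thresholded into a binary mask; the class and method names are ours, not from the actual system.

static class MomentTracker
{
    // Illustrative sketch: centroid of the bright (hand) pixels from the
    // first-order image moments, xc = m10 / m00 and yc = m01 / m00.
    public static bool TryGetCentroid(byte[,] binary, out double xc, out double yc)
    {
        long m00 = 0, m10 = 0, m01 = 0;
        for (int y = 0; y < binary.GetLength(0); y++)
        {
            for (int x = 0; x < binary.GetLength(1); x++)
            {
                if (binary[y, x] == 0) continue;  // background pixel
                m00++;                            // zeroth moment: area
                m10 += x;                         // first moments
                m01 += y;
            }
        }
        xc = yc = 0;
        if (m00 == 0) return false;               // no hand in view
        xc = (double)m10 / m00;
        yc = (double)m01 / m00;
        return true;
    }
}

Tracking the centroid from frame to frame then gives the position of the hand and, by differencing, its velocity.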


    Shape recognition

Most applications, such as recognizing a particular static hand signal, require a richer description of the shape of the input object than image moments provide. If the hand signals fall in a predetermined set, and the camera views a close-up of the hand, we may use an example-based approach, combined with a simple method to analyze hand signals called orientation histograms.

These example-based applications involve two phases: training and running. In the training phase, the user shows the system one or more examples of a specific hand shape. The computer forms and stores the corresponding orientation histograms. In the run phase, the computer compares the orientation histogram of the current image with each of the stored templates and selects the category of the closest match, or interpolates between templates, as appropriate. This method should be robust against small differences in the size of the hand but would probably be sensitive to changes in hand orientation.
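
The training/run scheme above can be sketched in C# roughly as follows. The 36-bin resolution, the noise threshold, and all names are illustrative assumptions, not the project's actual code.

using System;
using System.Collections.Generic;

static class OrientationHistogram
{
    const int Bins = 36;          // illustrative choice: 10-degree bins

    // Build a normalized histogram of local edge orientations for a
    // grayscale image (pixel values 0..255).
    public static double[] Compute(byte[,] gray)
    {
        int h = gray.GetLength(0), w = gray.GetLength(1);
        var hist = new double[Bins];
        for (int y = 1; y < h - 1; y++)
        {
            for (int x = 1; x < w - 1; x++)
            {
                double dx = gray[y, x + 1] - gray[y, x - 1];   // horizontal gradient
                double dy = gray[y + 1, x] - gray[y - 1, x];   // vertical gradient
                double mag = Math.Sqrt(dx * dx + dy * dy);
                if (mag < 10) continue;                        // ignore flat regions
                double angle = Math.Atan2(dy, dx);             // -pi .. pi
                int bin = (int)((angle + Math.PI) / (2 * Math.PI) * Bins) % Bins;
                hist[bin] += mag;                              // magnitude-weighted vote
            }
        }
        double sum = 0;
        foreach (double v in hist) sum += v;
        if (sum > 0)
            for (int i = 0; i < Bins; i++) hist[i] /= sum;     // normalize
        return hist;
    }

    // Run phase: return the label of the stored template closest in
    // Euclidean distance to the current histogram.
    public static string Classify(double[] current, Dictionary<string, double[]> templates)
    {
        string best = null;
        double bestDist = double.MaxValue;
        foreach (var kv in templates)
        {
            double d = 0;
            for (int i = 0; i < Bins; i++)
            {
                double diff = current[i] - kv.Value[i];
                d += diff * diff;
            }
            if (d < bestDist) { bestDist = d; best = kv.Key; }
        }
        return best;
    }
}

In the run phase, Classify would be called with the histogram of the live frame against the histograms stored during training.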

    Goals

The scope of this project is to create a method to recognize hand gestures, based on a pattern recognition technique developed by McConnell employing histograms of local orientation. The orientation histogram will be used as a feature vector for gesture classification and interpolation. A high priority for the system is to be simple without making use of any special hardware. All the computation should occur on a workstation or PC. Special hardware would be used only to digitize the image (scanner or digital camera).


    1.2 Previous Related Work

A good deal of previous work has been done in this area. Most researchers have tried to build presentation-controlling systems using either sensor technology or a computer vision approach.

Charade is the seminal work that uses free-hand gestures to control a presentation in the real world. The system defined a set of gesture commands, including advancing slides, highlighting text, returning to the contents, etc. The system uses a data glove to detect the motion of different gestures and classify each of them. However, the authors also mentioned that purely relying on the data glove is ultimately not viable for day-to-day use of the system.

Another proposed system uses a laser pointer to control the presentation, utilizing computer vision techniques. The camera detects the position of the laser point on the projection screen and controls the contents of the slide accordingly. The authors proposed a self-calibrating mechanism which automatically detects the mapping between a point on the projection screen and the corresponding pixel in the source image projected to that point. This technique enables users to put the camera anywhere in the room, provided it covers the whole projection screen. The system only supports functionalities such as advancing slides, highlighting text, and annotating the slide. Our system uses the same hardware devices but supports more functionalities, as well as hand gestures to control the content of slides. The hand gestures are detected from the dynamic movements of laser points on the projection screen, purely using computer vision techniques.

Maestro is another recently built system, by the University of Waterloo, which uses hands to control the presentation instead of a laser pointer. However, in order for the camera to detect hand gestures, presenters have to put their hands on the surface of the projection screen, more specifically within the staging area. This causes several problems, already mentioned in this paper: presenters have to stay beside the projection screen and lack the freedom to move from one side of the screen to the other, and staying in the center of the screen can cause constant activity in the slides behind them.

After summarizing the advantages and disadvantages of each system, we plan to build a system that is purely computer vision based, provides more flexibility for users without restricting them to standing beside the slides, and provides more of the gestures frequently used by presenters.


    1.3 Problem Statement

In this paper, we will build a gesture recognition system called GPPT, which can control slides using different command gestures, such as moving to the next or previous slide, zooming in or out, and editing and highlighting slide contents. Only a video camera and infrared rays are needed to locate and recognize hand gestures.

Generally, the system includes two sub-systems. The first is a vision-based hand-tracking and gesture recognition system, which aims to track a human's hands and recognize various kinds of command gestures. The second is a PowerPoint renderer system, which can modify the .ppt file online according to the user's gesture commands.

    The main objective of this project is to investigate the robustness of identity verification based on a sequence of static hand postures or gestures.


    1.4 Conclusion

Thus we have studied what we mean by hand gesture recognition, along with the problem statement and previous work related to hand gesture recognition.

We have also inferred that not everything is perfect and that no software is entirely secure or free of drawbacks. This project is therefore an effort to overcome the drawbacks of other software, to provide a better, user-friendly GUI that is simple to use and easily understood by a normal user, and to build software that helps in better management of code and saves the user time in writing and finding errors.


CHAPTER TWO

LITERATURE SURVEY


    2.1 Introduction

    A programming language is an artificial language designed to communicate instructions to a machine, particularly a computer. Programming languages can be used to create programs that control the behavior of a machine and/or to express algorithms precisely.

    A programming language is a notation for writing programs, which are specifications of a computation or algorithm. Some, but not all, authors restrict the term "programming language" to those languages that can express all possible algorithms. Traits often considered important for what constitutes a programming language include:

Function and target: A computer programming language is a language used to write computer programs, which involve a computer performing some kind of computation or algorithm and possibly controlling external devices such as printers, disk drives, robots, and so on. For example, PostScript programs are frequently created by another program to control a computer printer or display. More generally, a programming language may describe computation on some, possibly abstract, machine. It is generally accepted that a complete specification for a programming language includes a description, possibly idealized, of a machine or processor for that language. In most practical contexts, a programming language involves a computer; consequently programming languages are usually defined and studied this way. Programming languages differ from natural languages in that natural languages are only used for interaction between people, while programming languages also allow humans to communicate instructions to machines.

Abstractions: Languages usually contain abstractions for defining and manipulating data structures or controlling the flow of execution. The practical necessity that a programming language support adequate abstractions is expressed by the abstraction principle; this principle is sometimes formulated as a recommendation to the programmer to make proper use of such abstractions.

    Expressive power: The theory of computation classifies languages by the computations they are capable of expressing. All Turing complete languages can implement the same set of algorithms. ANSI/ISO SQL and Charity are examples of languages that are not Turing complete, yet often called programming languages.

  • Hand Gesture Recognition

    Shah & Anchor Kutchhi Polytechnic 19

    A programming language's surface form is known as its syntax. Most programming languages are purely textual; they use sequences of text including words, numbers, and punctuation, much like written natural languages. On the other hand, there are some programming languages which are more graphical in nature, using visual relationships between symbols to specify a program.

The syntax of a language describes the possible combinations of symbols that form a syntactically correct program. The meaning given to a combination of symbols is handled by semantics (either formal or hard-coded in a reference implementation). Since most languages are textual, the discussion here focuses on textual syntax.

The earliest programming languages predate the invention of the computer, and were used to direct the behavior of machines such as Jacquard looms and player pianos. Thousands of different programming languages have been created, mainly in the computer field, with many more being created every year. Most programming languages describe computation in an imperative style, i.e., as a sequence of commands, although some languages, such as those that support functional programming or logic programming, use alternative forms of description.

    The description of a programming language is usually split into the two components of syntax (form) and semantics (meaning). Some languages are defined by a specification document (for example, the C programming language is specified by an ISO Standard), while other languages, such as Perl 5 and earlier, have a dominant implementation that is used as a reference.

There is no need to argue in favor of concise, clear, complete, consistent descriptions of programming languages, nor to recite the cost in time, energy, money, and effectiveness incurred when a description falls short of these standards. Reliable, high-quality computer programming is impossible without a clear and precise understanding of the language in which the programs are written, this being true quite independently of the merits of the language as a language.


    2.2 Features of Programming Language used

    C Sharp (C#):

C# (pronounced "see sharp") is a multi-paradigm programming language encompassing strong typing and imperative, declarative, functional, generic, object-oriented (class-based), and component-oriented programming disciplines.

    It was developed by Microsoft within its .NET initiative and later approved as a standard by ECMA (ECMA-334) and ISO (ISO/IEC 23270:2006). C# is one of the programming languages designed for the Common Language Infrastructure.

    C# is intended to be a simple, modern, general-purpose, object-oriented programming language. Its development team is led by Anders Hejlsberg. The most recent version is C# 4.0, which was released on April 12, 2010.

    C-sharp musical note:

    The name "C sharp" was inspired by musical notation where a sharp indicates that the written note should be made a semitone higher in pitch. This is similar to the language name of C++, where "++" indicates that a variable should be incremented by 1.


    Versions

    In the course of its development, the C# language has gone through several versions:

C# 1.0 (released January 2002): language specification ECMA December 2002, ISO/IEC April 2003, Microsoft January 2002; .NET Framework 1.0; Visual Studio .NET 2002.

C# 1.2 (released April 2003): Microsoft specification October 2003; .NET Framework 1.1; Visual Studio .NET 2003.

C# 2.0 (released November 2005): ECMA June 2006, ISO/IEC September 2006, Microsoft September 2005; .NET Framework 2.0; Visual Studio 2005.

C# 3.0 (released November 2007): no ECMA or ISO/IEC specification, Microsoft August 2007; .NET Framework 2.0 and 3.0 (except LINQ/query extensions) and 3.5; Visual Studio 2008 and 2010.

C# 4.0 (released April 2010): Microsoft specification April 2010; .NET Framework 4; Visual Studio 2010.

C# 5.0 (February 2012): no ECMA or ISO/IEC specification; .NET Framework 4.5; Visual Studio 11.

Table 1: Versions of C#

    The Microsoft C# 2.0 specification document only contains the new 2.0 features. For older features use the 1.2 specification above.


As of December 2010, no ECMA or ISO/IEC specifications exist for C# 3.0 and 4.0.

Summary of versions

Features added per version:

C# 2.0: generics; partial types; anonymous methods; iterators; nullable types; private setters (properties); method group conversions (delegates).

C# 3.0: implicitly typed local variables; object and collection initializers; auto-implemented properties; anonymous types; extension methods; query expressions; lambda expressions; expression trees.

C# 4.0: dynamic binding; named and optional arguments; generic co- and contravariance.

C# 5.0 (planned): Windows Runtime support; asynchronous methods; caller info attributes.

Future: compiler-as-a-service ("Roslyn").

Table 2: Summary of C# versions

    Some notable features of C# that distinguish it from C and C++ (and Java, where noted) are:

    It has no global variables or functions. All methods and members must be declared within classes. Static members of public classes can substitute for global variables and functions.

Local variables cannot shadow variables of the enclosing block, unlike in C and C++. Variable shadowing is often considered confusing in C++ texts.


C# supports a strict Boolean data type, bool. Statements that take conditions, such as while and if, require an expression of a type that implements the true operator, such as the Boolean type. While C++ also has a Boolean type, it can be freely converted to and from integers, and expressions such as if (a) require only that a is convertible to bool, allowing a to be an int or a pointer. C# disallows this "integer meaning true or false" approach, on the grounds that forcing programmers to use expressions that return exactly bool can prevent certain types of common programming mistakes in C or C++ such as if (a = b) (use of assignment = instead of equality ==).
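
The following hypothetical snippet illustrates the point, with the rejected forms shown as comments:

using System;

class BoolStrictness
{
    static void Main()
    {
        int a = 0, b = 1;

        // if (a) { }      // compile-time error: cannot implicitly convert int to bool
        // if (a = b) { }  // compile-time error: assignment produces int, not bool

        if (a == b)         // a condition must be a bool expression
            Console.WriteLine("equal");
    }
}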

    In C#, memory address pointers can only be used within blocks specifically marked as unsafe, and programs with unsafe code need appropriate permissions to run. Most object access is done through safe object references, which always either point to a "live" object or have the well-defined null value; it is impossible to obtain a reference to a "dead" object (one that has been garbage collected), or to a random block of memory.

    An unsafe pointer can point to an instance of a value-type, array, string, or a block of memory allocated on a stack. Code that is not marked as unsafe can still store and manipulate pointers through the System.IntPtr type, but it cannot dereference them.
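
A minimal illustration of an unsafe block (our own example, assuming compilation with the /unsafe switch):

using System;

class UnsafeDemo
{
    // Illustrative only; compile with the /unsafe compiler switch.
    unsafe static void Main()
    {
        int[] data = { 1, 2, 3 };

        // Pin the array so the garbage collector cannot move it while the
        // raw pointer is in use.
        fixed (int* p = data)
        {
            for (int i = 0; i < data.Length; i++)
                Console.WriteLine(*(p + i));   // pointer arithmetic
        }
    }
}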

    Managed memory cannot be explicitly freed; instead, it is automatically garbage collected. Garbage collection addresses the problem of memory leaks by freeing the programmer of responsibility for releasing memory that is no longer needed.

    In addition to the try...catch construct to handle exceptions, C# has a try...finally construct to guarantee execution of the code in the finally block.
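
A small illustrative example (the file name is hypothetical):

using System;
using System.IO;

class FinallyDemo
{
    static void Main()
    {
        // "input.txt" is a hypothetical file name.
        StreamReader reader = new StreamReader("input.txt");
        try
        {
            Console.WriteLine(reader.ReadLine());
        }
        finally
        {
            // Executed whether or not ReadLine threw an exception.
            reader.Close();
        }
    }
}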

    Multiple inheritance is not supported, although a class can implement any number of interfaces. This was a design decision by the language's lead architect to avoid complication and simplify architectural requirements throughout CLI.

C#, like C++ but unlike Java, supports operator overloading.

C# is more type-safe than C++. The only implicit conversions by default are those that are considered safe, such as widening of integers. This is enforced at compile time, during JIT, and, in some cases, at runtime. No implicit conversions occur between Booleans and integers, nor between enumeration members and integers (except for literal 0, which can be implicitly converted to any enumerated type). Any user-defined conversion must be explicitly marked as explicit or implicit, unlike C++ copy constructors and conversion operators, which are both implicit by default. Starting with version 4.0, C# supports a "dynamic" data type that enforces type checking at runtime only.

Enumeration members are placed in their own scope.

C# provides properties as syntactic sugar for a common pattern in which a pair of methods, an accessor (getter) and a mutator (setter), encapsulate operations on a single attribute of a class.
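
For example, a property might be declared as follows (an illustrative sketch; since C# 3.0 the same can be written as an auto-implemented property, public int Index { get; set; }):

// Hypothetical example: a getter/setter pair hidden behind field-like syntax.
public class Slide
{
    private int index;              // backing field

    public int Index
    {
        get { return index; }       // accessor (getter)
        set { index = value; }      // mutator (setter)
    }
}

A caller then writes slide.Index = 3 rather than calling a SetIndex method.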

    Checked exceptions are not present in C# (in contrast to Java). This has been a conscious decision based on the issues of scalability and versionability.

    Though primarily an imperative language, since C# 3.0 it supports functional programming techniques through first-class function objects and lambda expressions.


    Common type system:

    C# has a unified type system. This unified type system is called Common Type System (CTS).

A unified type system implies that all types, including primitives such as integers, are subclasses of the System.Object class. For example, every type inherits a ToString() method.
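
A small illustration of the unified type system (our own example):

using System;

class CtsDemo
{
    static void Main()
    {
        int n = 42;

        // Even the primitive int ultimately derives from System.Object,
        // so it inherits ToString() ...
        Console.WriteLine(n.ToString());     // prints "42"

        // ... and can be treated as an object (boxing the value).
        object boxed = n;
        Console.WriteLine(boxed.GetType()); // prints "System.Int32"
    }
}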

    Categories of data types

    CTS separate data types into two categories:

1. Value types
2. Reference types

    Both type categories are extensible with user-defined types.
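
The difference between the two categories can be shown with a short example (hypothetical types; assignment copies the value for a struct but only the reference for a class):

using System;

class RefPoint { public int X; }    // reference type
struct ValPoint { public int X; }   // value type

class Categories
{
    static void Main()
    {
        ValPoint v1 = new ValPoint { X = 1 };
        ValPoint v2 = v1;           // the value itself is copied
        v2.X = 9;                   // v1.X is still 1

        RefPoint r1 = new RefPoint { X = 1 };
        RefPoint r2 = r1;           // only the reference is copied
        r2.X = 9;                   // r1.X is now 9 as well

        Console.WriteLine(v1.X + " " + r1.X);  // prints "1 9"
    }
}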

    Generics:

    Generics were added to version 2.0 of the C# language. Generics use type parameters, which make it possible to design classes and methods that do not specify the type used until the class or method is instantiated. The main advantage is that one can use generic type parameters to create classes and methods that can be used without incurring the cost of runtime casts or boxing operations, as shown here:

// Declare the generic class.
public class GenericList<T>
{
    void Add(T input) { }
}

class TestGenericList
{
    private class ExampleClass { }

    static void Main()
    {
        // Declare a list of type int.
        GenericList<int> list1 = new GenericList<int>();

        // Declare a list of type string.
        GenericList<string> list2 = new GenericList<string>();

        // Declare a list of type ExampleClass.
        GenericList<ExampleClass> list3 = new GenericList<ExampleClass>();
    }
}


    Preprocessor:

    C# features "preprocessor directives"(though it does not have an actual preprocessor) based on the C preprocessor that allow programmers to define symbols, but not macros. Conditionals such as #if, #endif, and #else are also provided. Directives such as #region give hints to editors for code folding.

public class Foo
{
    #region Constructors
    public Foo() { }
    public Foo(int firstParam) { }
    #endregion

    #region Procedures
    public void IntBar(int firstParam) { }
    public void StrBar(string firstParam) { }
    public void BoolBar(bool firstParam) { }
    #endregion
}

    Code comments:

    C# utilizes a double forward slash (//) to indicate the rest of the line is a comment. This is inherited from C++.

public class Foo
{
    // a comment
    public static void Bar(int firstParam) { } // also a comment
}

    Multi-line comments can be indicated by a starting forward slash/asterisk (/*) and ending asterisk/forward slash (*/). This is inherited from standard C.

public class Foo
{
    /* A multi-line
       comment */
    public static void Bar(int firstParam) { }
}

    XML documentation system:

C#'s documentation system is similar to Java's Javadoc, but based on XML. Two methods of documentation are currently supported by the C# compiler. Single-line documentation comments, such as those commonly found in Visual Studio generated code, are indicated on a line beginning with ///.

public class Foo
{
    /// <summary>A summary of the method.</summary>
    /// <param name="firstParam">A description of the parameter.</param>
    /// <remarks>Remarks about the method.</remarks>
    public static void Bar(int firstParam) { }
}

    Multi-line documentation comments, while defined in the version 1.0 language specification, were not supported until the .NET 1.1 releases. These comments are designated by a starting forward slash/asterisk/asterisk (/**) and ending asterisk/forward slash (*/).

public class Foo
{
    /** <summary>A summary of the method.</summary>
      * <param name="firstParam">A description of the parameter.</param>
      * <remarks>Remarks about the method.</remarks> */
    public static void Bar(int firstParam) { }
}

Note: There are some stringent criteria regarding white space and XML documentation when using the forward slash/asterisk/asterisk (/**) technique. This code block:

/***
 * A summary of the method.
 */

produces a different XML comment from this code block:

/***
A summary of the method.
*/


    Syntax for documentation comments and their XML markup is defined in a non-normative annexe of the ECMA C# standard. The same standard also defines rules for processing of such comments, and their transformation to a plain XML document with precise rules for mapping of CLI identifiers to their related documentation elements. This allows any C# IDE or other development tool to find documentation for any symbol in the code in a certain well-defined way.

    Libraries:

The C# specification details a minimum set of types and class libraries that the compiler expects to have available. In practice, C# is most often used with some implementation of the Common Language Infrastructure (CLI), which is standardized as ECMA-335 Common Language Infrastructure (CLI).

DLL (Dynamic-Link Library):

    A DLL is a library that contains code and data that can be used by more than one program at the same time. For example, in Windows operating systems, the Comdlg32 DLL performs common dialog box related functions. Therefore, each program can use the functionality that is contained in this DLL to implement an Open dialog box. This helps promote code reuse and efficient memory usage.

    By using a DLL, a program can be modularized into separate components. For example, an accounting program may be sold by module. Each module can be loaded into the main program at run time if that module is installed. Because the modules are separate, the load time of the program is faster, and a module is only loaded when that functionality is requested.

    Additionally, updates are easier to apply to each module without affecting other parts of the program. For example, you may have a payroll program, and the tax rates change each year. When these changes are isolated to a DLL, you can apply an update without needing to build or install the whole program again.

    The following list describes some of the files that are implemented as DLLs in Windows operating systems:

    ActiveX Controls (.ocx) files:

    An example of an ActiveX control is a calendar control that lets you select a date from a calendar.


    Control Panel (.cpl) files:

    An example of a .cpl file is an item that is located in Control Panel. Each item is a specialized DLL.

    Device driver (.drv) files:

    An example of a device driver is a printer driver that controls the printing to a printer.

    DLL advantages:

    The following list describes some of the advantages that are provided when a program uses a DLL:

    Uses fewer resources:

    When multiple programs use the same library of functions, a DLL can reduce the duplication of code that is loaded on the disk and in physical memory. This can greatly influence the performance of not just the program that is running in the foreground, but also other programs that are running on the Windows operating system.

    Promotes modular architecture:

    A DLL helps promote developing modular programs. This helps you develop large programs that require multiple language versions or a program that requires modular architecture. An example of a modular program is an accounting program that has many modules that can be dynamically loaded at run time.

    Eases deployment and installation:

    When a function within a DLL needs an update or a fix, the deployment and installation of the DLL does not require the program to be relinked with the DLL. Additionally, if multiple programs use the same DLL, the multiple programs will all benefit from the update or the fix. This issue may more frequently occur when you use a third-party DLL that is regularly updated or fixed.

DLL dependencies:

When a program or a DLL uses a DLL function in another DLL, a dependency is created.

    Therefore, the program is no longer self-contained, and the program may experience problems if the dependency is broken. For example, the program may not run if one of the following actions occurs:

    A dependent DLL is upgraded to a new version.


    A dependent DLL is fixed.

    A dependent DLL is overwritten with an earlier version.

    A dependent DLL is removed from the computer.

    These actions are generally known as DLL conflicts. If backward compatibility is not enforced, the program may not successfully run.

    The following list describes the changes that have been introduced in Microsoft Windows 2000 and in later Windows operating systems to help minimize dependency issues:

    Windows File Protection:

    In Windows File Protection, the operating system prevents system DLLs from being updated or deleted by an unauthorized agent. Therefore, when a program installation tries to remove or update a DLL that is defined as a system DLL, Windows File Protection will look for a valid digital signature.

    Private DLLs:

    Private DLLs let you isolate a program from changes that are made to shared DLLs. Private DLLs use version-specific information or an empty .local file to enforce the version of the DLL that is used by the program. To use private DLLs, locate your DLLs in the program root folder. Then, for new programs, add version-specific information to the DLL. For old programs, use an empty .local file. Each method tells the operating system to use the private DLLs that are located in the program root folder.

DLL development:

This section describes the issues and the requirements that you should consider when you develop your own DLLs.

    Types of DLLs:

    When you load a DLL in an application, two methods of linking let you call the exported DLL functions. The two methods of linking are load-time dynamic linking and run-time dynamic linking.

    Load-time dynamic linking:

    In load-time dynamic linking, an application makes explicit calls to exported DLL functions like local functions. To use load-time dynamic linking, provide a header (.h) file and an import library (.lib) file when you compile and link the application. When you do this, the linker will provide the system with the information that is required to load the DLL and resolve the exported DLL function locations at load time.


    Run-time dynamic linking:

In run-time dynamic linking, an application calls the LoadLibrary function or the LoadLibraryEx function to load the DLL at run time. After the DLL is successfully loaded, you use the GetProcAddress function to obtain the address of the exported DLL function that you want to call. When you use run-time dynamic linking, you do not need an import library file.
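
Since this project is written in C#, run-time dynamic linking would be reached through P/Invoke rather than an import library. The following sketch uses the Win32 LoadLibrary, GetProcAddress, and FreeLibrary functions to call user32's MessageBeep at run time; it is an illustration, not code from this project.

using System;
using System.Runtime.InteropServices;

class RuntimeLinking
{
    [DllImport("kernel32", SetLastError = true, CharSet = CharSet.Ansi)]
    static extern IntPtr LoadLibrary(string fileName);

    [DllImport("kernel32", SetLastError = true)]
    static extern IntPtr GetProcAddress(IntPtr module, string procName);

    [DllImport("kernel32", SetLastError = true)]
    static extern bool FreeLibrary(IntPtr module);

    // Delegate matching the signature of user32!MessageBeep(UINT).
    [UnmanagedFunctionPointer(CallingConvention.Winapi)]
    delegate bool MessageBeepFn(uint type);

    static void Main()
    {
        IntPtr lib = LoadLibrary("user32.dll");      // load the DLL at run time
        if (lib == IntPtr.Zero)
            throw new Exception("LoadLibrary failed");
        try
        {
            // Resolve the exported function's address, then call it.
            IntPtr addr = GetProcAddress(lib, "MessageBeep");
            MessageBeepFn beep = (MessageBeepFn)Marshal.GetDelegateForFunctionPointer(
                addr, typeof(MessageBeepFn));
            beep(0);
        }
        finally
        {
            FreeLibrary(lib);                        // release the DLL
        }
    }
}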

    The following list describes the application criteria for when to use load-time dynamic linking and when to use run-time dynamic linking:

    Startup performance: If the initial startup performance of the application is important, you should use run-time dynamic linking.

    Ease of use: In load-time dynamic linking, the exported DLL functions are like local functions. This makes it easy for you to call these functions.

    Application logic: In run-time dynamic linking, an application can branch to load different modules as required. This is important when you develop multiple-language versions.

The DLL entry point:

When you create a DLL, you can optionally specify an entry point function. The entry point function is called when processes or threads attach themselves to or detach themselves from the DLL. You can use the entry point function to initialize data structures or to destroy data structures as required by the DLL. Additionally, if the application is multithreaded, you can use thread local storage (TLS) to allocate memory that is private to each thread in the entry point function. The following code is an example of a DLL entry point function.

BOOL APIENTRY DllMain(HANDLE hModule,            // Handle to DLL module
                      DWORD ul_reason_for_call,  // Reason for calling function
                      LPVOID lpReserved)         // Reserved
{
    switch (ul_reason_for_call)
    {
        case DLL_PROCESS_ATTACH:
            // A process is loading the DLL.
            break;
        case DLL_THREAD_ATTACH:
            // A process is creating a new thread.
            break;
        case DLL_THREAD_DETACH:
            // A thread exits normally.
            break;
        case DLL_PROCESS_DETACH:
            // A process unloads the DLL.
            break;
    }
    return TRUE;
}

When the entry point function returns a FALSE value, the application will not start if you are using load-time dynamic linking. If you are using run-time dynamic linking, only the individual DLL will not load.

The entry point function should only perform simple initialization tasks and should not call any other DLL loading or termination functions. For example, in the entry point function, you should not directly or indirectly call the LoadLibrary function or the LoadLibraryEx function. Additionally, you should not call the FreeLibrary function when the process is terminating.

VB.NET:

Visual Basic .NET (VB.NET) is an object-oriented computer programming language that can be viewed as an evolution of the classic Visual Basic (VB), implemented on the .NET Framework. Microsoft currently supplies two major implementations of Visual Basic: Microsoft Visual Studio 2010, which is commercial software, and Visual Basic Express Edition 2010, which is free of charge.

The VB.NET language accesses the powerful types in the .NET Framework. It has a distinctive syntax form. Knowledge of this language helps many developers who primarily use other languages. VB.NET has features nearly equivalent to the C# language. It has lots of expressive power.


This section shows example VB programs. It covers VB.NET syntax, keywords, and performance.

    Console:

As an introduction, let's run a program that uses the Console.WriteLine subroutine to print "Hello world" to the screen. The program is contained in a module named Module1; Sub Main is the entry point of the program.

    Program for introduction [VB.NET]

    Module Module1

    Sub Main()

    ' Say hi in VB.NET.

    Console.WriteLine("Hello world")

    End Sub

    End Module

    Output

    Hello world

Numbers:

Numbers are often stored as Integer types in VB.NET programs. If you have data in a String that you want to convert into an Integer, you must first parse it. We cover various aspects of numbers and mathematical processing in the language. Chars are essentially numbers in VB.NET as well.

    Enums:


Enums are an excellent choice in your VB.NET code base if you have certain magic constants to embed. They greatly improve documentation quality because they are always labeled. We describe Enums in the language.

Convert:

Converting data in the VB.NET language requires a lot of knowledge of what functions are available in the .NET Framework. We elaborate upon conversions in the VB.NET programming language.

VB.NET also provides a host of useful built-in functions that you can use. These typically provide low-level functionality and conversions.

    List:

You often need to store many elements in a resizable array; you might not even know how many elements are needed when you begin processing. The List and ArrayList types are an excellent choice for programs with this requirement. The List type is best.

    Collections:

There are other collection types available for use in your VB.NET programs. Some of these, including the HashSet and Tuple types, are generics. They can be used to simplify certain problems.


    Interface:

    Interfaces in the VB.NET language introduce an essential level of abstraction. You can

    act upon many different types through one Interface type. This represents polymorphism. This

    can help simplify your program.
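As a sketch of this polymorphism (the IShape and Square types are our own illustration, not part of the project), the code in Main acts on the object only through the Interface type:

Program that uses an Interface [VB.NET]

Module Module1

    Interface IShape
        Function Area() As Double
    End Interface

    Class Square
        Implements IShape
        Public Side As Double
        ' The Interface member is implemented here.
        Public Function Area() As Double Implements IShape.Area
            Return Side * Side
        End Function
    End Class

    Sub Main()
        Dim shape As IShape = New Square With {.Side = 3}
        Console.WriteLine(shape.Area())
    End Sub

End Module

Output:

9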

Syntax:

    We cover various aspects of VB.NET, the looping constructs, as well as the subroutine

    syntax. These may be helpful as an introduction to VB.NET for people familiar with other

    languages.

    Loops:

    Here are the looping constructs in the VB.NET language. The For Each construct is

    probably the least error-prone but is not always available. We also see a simple example of the

    For-loop construct directly on this page.

    Program that demonstrates For loop [VB.NET]

    Module Module1

    Sub Main()


    For i As Integer = 0 To 3

    Console.WriteLine(i)

    Next

    End Sub

    End Module

    Output:

    0

    1

    2

    3

    Recursion:

    When a function calls itself, recursion occurs. The VB.NET language supports recursion.

    The ByRef keyword can be very useful when implementing recursive algorithms.
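A minimal sketch of recursion, computing a factorial by having the function call itself with a smaller argument:

Program that demonstrates recursion [VB.NET]

Module Module1

    Function Factorial(ByVal n As Integer) As Integer
        ' The base case stops the chain of self-calls.
        If n <= 1 Then Return 1
        Return n * Factorial(n - 1)
    End Function

    Sub Main()
        Console.WriteLine(Factorial(5))
    End Sub

End Module

Output:

120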

Data:

    One of the top uses for VB.NET programs is to process data stored in databases. You can

    use many different types in the .NET Framework to accomplish this task. The DataGridView is

    one of the most popular choices. It works well with the DataTable.
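As a small sketch (the column name and row value are illustrative), a DataTable holds rows and columns in memory and can later be bound to a DataGridView for display:

Program that uses a DataTable [VB.NET]

Imports System.Data

Module Module1

    Sub Main()
        ' An in-memory table with one column and one row.
        Dim table As New DataTable()
        table.Columns.Add("Gesture", GetType(String))
        table.Rows.Add("Wave")
        Console.WriteLine(table.Rows.Count)
    End Sub

End Module

Output:

1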


    Summary:

    The VB.NET language exposes the power of the .NET Framework to many

    programmers. With the Framework methods, you can avoid writing a lot of tedious code. This

    leads to much faster development. It leads to more robust software applications.


    2.3 Vision based Analysis

A few technologies already use vision-based analysis systems.

One example is hand gesture recognition for understanding musical conducting actions, which uses only the vision sensor (a camera). The system works when the conductor uses only one hand, which must stay within the view range of the camera. The conductor may indicate 4 timing patterns with 3 tempos through his/her hand motion. When the camera captures the image of the hand gesture, the system extracts the hand region, which is the region of interest (ROI), using intensity and color information. The system obtains the motion velocity and direction by tracking the center of gravity (COG) of the hand region, which provides the speed of any conducting time pattern.
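As an illustrative sketch only (this is not the cited system's actual code), the COG of a binary hand mask can be computed by averaging the coordinates of the hand pixels; tracking this point from frame to frame then yields the motion velocity and direction described above:

Program that computes a COG [VB.NET]

Imports System.Drawing

Module Module1

    ' Average the coordinates of the pixels marked True in the mask.
    Function CenterOfGravity(ByVal mask(,) As Boolean) As Point
        Dim sumX As Long = 0
        Dim sumY As Long = 0
        Dim count As Long = 0
        For y As Integer = 0 To mask.GetLength(1) - 1
            For x As Integer = 0 To mask.GetLength(0) - 1
                If mask(x, y) Then
                    sumX += x
                    sumY += y
                    count += 1
                End If
            Next
        Next
        If count = 0 Then Return Point.Empty
        Return New Point(CInt(sumX \ count), CInt(sumY \ count))
    End Function

End Module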

Another example is a gesture-based interface for home appliance control in a smart home. This technology is based on HMI (Human-Machine Interface). A small advanced color camera built onto the television senses when someone enters its field of vision and searches for their hand. The machine then interprets the hand's signals: waving up and down could alter the volume, and raising a finger would switch the channel. This technology works by detecting whether the skin color of the face and the hand match, then running hand detection/tracking algorithms in which a cascade hand motion recognizer distinguishes pre-defined hand motions from meaningless gestures.

This Hand Gesture Recognition project is based on Human-Computer Interaction (HCI) technology. The computer can perform hand gesture recognition on American Sign Language (ASL). The system uses .NET toolboxes. Further explanation of how the images are fed into the network, and how the network processes them, is discussed later in this report.


    2.4 Perceptron Learning Rule

The Perceptron is one of the neural network programs that learns concepts. For example, the program is able to learn to respond with True (1) or False (0) for the inputs presented to it, by repeatedly "studying" the examples shown to it. This makes the program suitable for classification and pattern recognition.

A single perceptron's structure is quite simple. There are a few inputs (the number depends on the input data), a bias, and an output. The inputs and outputs must be in binary form, where the value can only be 0 or 1. Figure 2 shows the perceptron schematic diagram.

    Figure 2: Perceptron Schematic diagram

Inside the perceptron layer, each input datum (0 or 1) is multiplied by its weight; a weight is generally a real number between 0 and 1. The weighted values are then fed into the neuron together with the bias, which is also a real value ranging from 0 to 1. Inside the neuron, all of these values are summed up. After that, the summed value is fed into the hard-limiter, a function which applies the threshold value discussed earlier. For example:

f(x) = 0 if x < 0.5; f(x) = 1 if x >= 0.5

This means we set the threshold value of the hard-limiter function to 0.5: if the sum of the inputs multiplied by the weights is less than 0.5, the limiter function returns the value 0; if the sum is 0.5 or more, the limiter function returns the value 1.

Once the value is obtained, the next step is adjusting the weights. The perceptron learns by modifying its weights.

The value obtained from the limiter function is also known as the actual output. The perceptron adjusts its weights using the difference between the desired output and the actual output. This can be written as:

Change in Weight i = Current Value of Input i × (Desired Output − Current Output)

The perceptron continues adjusting its weights until there is no difference between the actual output value and the desired output value, or until the difference is minimal.
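A minimal sketch of one such training step (the weights, bias and inputs are arbitrary example values, not the project's):

Program that performs one perceptron step [VB.NET]

Module Module1

    ' Hard-limiter with threshold 0.5.
    Function HardLimiter(ByVal sum As Double) As Integer
        If sum >= 0.5 Then Return 1
        Return 0
    End Function

    Sub Main()
        Dim inputs() As Integer = {1, 0, 1}
        Dim weights() As Double = {0.2, 0.4, 0.1}
        Dim bias As Double = 0.1
        Dim desired As Integer = 1

        ' Sum the weighted inputs together with the bias.
        Dim total As Double = bias
        For i As Integer = 0 To inputs.Length - 1
            total += inputs(i) * weights(i)
        Next
        Dim actual As Integer = HardLimiter(total)

        ' Change in Weight i = Input i * (Desired Output - Actual Output).
        For i As Integer = 0 To inputs.Length - 1
            weights(i) += inputs(i) * (desired - actual)
        Next
        Console.WriteLine(actual)
    End Sub

End Module

Output:

0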


    2.5 Conclusion

Thus, from this chapter, we learnt about the languages and technologies we are going to deal with, i.e. Microsoft Visual .NET, C# and DLLs, and their advantages over other languages. We also learned about the important features of .NET and C#, i.e. the platform on which we are going to build our system. The database where the images will be stored is also defined here, and how the images will be processed from the database into our project will be shown in the next chapter.

This chapter also covers the important domains connected with our system, with a detailed description of them. We also compared our algorithm with past works.


CHAPTER THREE

SYSTEM ANALYSIS &

DESIGN


    3.1 Introduction

In this chapter, we explain our algorithm using various UML diagrams such as flowcharts, activity diagrams, and use cases.

    Then we discuss the database design of our algorithm and the hardware and the software requirements of our system.

    Finally we conclude the chapter.

Taking into consideration that a new system was to be developed, the next phase of system development was system analysis. Analysis involved a detailed study of the current systems, leading to the specification of a new system.

Analysis is a detailed study of the various operations performed by a system and their relationships within and outside the system. During analysis, data are collected from the available files, decision points and transactions handled by the present system.

    In system analysis more emphasis is given to understanding the details of an existing system or a proposed one and then deciding whether the existing system needs improvement.

    Thus, systems analysis is the process or art of defining the architecture, components, modules, interfaces, and data for a system to satisfy specified requirements. One could see it as the application of systems theory to product development. There is some overlap and synergy with the disciplines of systems analysis, systems architecture and systems engineering.


    Component of analysis model:

    Analysis Model

This model shows different elements, such as:

Scenario-based elements: This element mainly deals with use case diagrams, along with other diagrams such as activity diagrams and swim lanes.

Flow-oriented elements: This element mainly deals with data flow diagrams, but other diagrams also come under it, such as control flow diagrams and processing narratives.

Behavioral elements: This element consists of displaying the current behavior of the running project in the form of state diagrams and sequence diagrams.

Class-based elements: This element is somewhat similar to the scenario-based elements.

In summary:

Scenario-based elements: use case diagrams, activity diagrams, swim lanes
Flow-oriented elements: data flow diagrams, control flow diagrams, processing narratives
Class-based elements: use case diagrams, activity diagrams
Behavioral elements: state diagrams, sequence diagrams


3.2 Algorithms

    1. Perceptron Convergence Algorithms

The perceptron is powerful because it can be trained to behave in a certain way. The error-correction learning algorithm is part of the perceptron learning process; the perceptron will have converged (a technical name for "learned") when it behaves that way. Figure 3 below shows, as a signal-flow graph, how the error-correction learning algorithm works in a single-layer perceptron. In this case the threshold θ(n) is treated as a synaptic weight connected to a fixed input equal to -1.

    Figure 3: Perceptron Signal Flow Graph

First, define the (p+1)-by-1 input vector:

x(n) = [-1, x1(n), x2(n), ..., xp(n)]^T

After that, define the (p+1)-by-1 weight vector:

w(n) = [θ(n), w1(n), w2(n), ..., wp(n)]^T

Here are some variables and parameters used in the convergence algorithm, for further explanation: [23]

1. θ(n) = threshold

2. y(n) = actual response

3. d(n) = desired response

4. η = learning rate parameter, 0 < η ≤ 1

1st step: Initialization. Set w(0) = 0, then perform the following computations for time n = 1, 2, ...

2nd step: Activation. Activate the perceptron by applying the input vector x(n) and the desired response d(n).

3rd step: Computation of Actual Response. Compute the actual response y(n) = sgn[w(n)^T x(n)], where sgn(u) is the signum function:

If u > 0, sgn(u) = +1

If u < 0, sgn(u) = -1

4th step: Adaptation of Weight Vector. Update the weight vector using the equation w(n+1) = w(n) + η[d(n) - y(n)]x(n), where: if x(n) belongs to class C1, then d(n) = +1; if x(n) belongs to class C2, then d(n) = -1.

5th step: Increment time n by one unit, then repeat from step 2.
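A minimal sketch of these five steps, training a two-input perceptron on the logical AND of bipolar inputs (the learning rate and training data are illustrative assumptions, not the project's); the program prints the learned weight vector:

Program that runs the convergence algorithm [VB.NET]

Module Module1

    ' sgn(u) = +1 if u > 0, otherwise -1.
    Function Sgn(ByVal u As Double) As Integer
        If u > 0 Then Return 1
        Return -1
    End Function

    Sub Main()
        ' x(0) = -1 carries the threshold as an ordinary weight.
        Dim x(,) As Double = {{-1, -1, -1}, {-1, -1, 1}, {-1, 1, -1}, {-1, 1, 1}}
        Dim d() As Integer = {-1, -1, -1, 1}   ' desired responses
        Dim w() As Double = {0, 0, 0}          ' 1st step: initialize the weights
        Dim eta As Double = 0.5                ' learning rate parameter

        For pass As Integer = 1 To 20          ' 5th step: repeat over time n
            For n As Integer = 0 To 3
                ' 2nd and 3rd steps: activate, then y(n) = sgn[w^T x(n)].
                Dim u As Double = 0
                For i As Integer = 0 To 2
                    u += w(i) * x(n, i)
                Next
                Dim y As Integer = Sgn(u)
                ' 4th step: w(n+1) = w(n) + eta * [d(n) - y(n)] * x(n).
                For i As Integer = 0 To 2
                    w(i) += eta * (d(n) - y) * x(n, i)
                Next
            Next
        Next
        Console.WriteLine(w(0) & " " & w(1) & " " & w(2))
    End Sub

End Module

Output:

1 1 1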


    3.3 Use Case Diagram

    Figure 4: Network Testing


    1st Step: Select test set image / Get Image from Webcam

Once the perceptron network is completely trained, the network is ready for testing. First, we select a test set image which has already been converted into feature vector form, or get an image through the webcam and then process it into feature vector form. The image can come from any type of hand gesture sign; it does not need to be a trained hand gesture sign, since this is just for testing.

    Then feed the feature vector into the trained network.

    2nd Step: Process by Perceptron Network

Now the image in feature vector form is fed into the network. These feature vector values go through all the adjusted weights (neurons) inside the perceptron network, and an output comes out.

    3rd Step: Display Matched Output

The system will display the matched output, which is presented in vector format. An improvement has been made so that the output displays both the vector format and the meaning of the gesture sign in graphical form.


    3.4 Flowchart

A flowchart is the diagrammatic representation of a step-by-step solution to a given problem. It is a common type of diagram that represents an algorithm or process, showing the steps as boxes of various kinds and their order by connecting them with arrows. Data is represented in these boxes, and the arrows connecting them represent the flow or direction of the data. Flowcharts are used in analyzing, designing, documenting or managing a process or program in various fields.

The advantage of the flowchart is that it relies on visualization: as it is in the form of a diagram, anyone can understand the flow of the code very easily.

The better the flowchart, the better the software product made from it, and if any error occurs it is very easy to identify the vulnerable point or the cause of the problem.

The technique allows the author to locate the responsibility for performing an action or making a decision correctly, showing the responsibility of each organizational unit for different parts of a single process.

Today, big companies do not usually prefer flowcharts as their representation, but since this project is only a simple prototype it is efficient for us to draw the flowchart and look over the flow of our project.


1) Hand Detection Flowchart

Figure 5: Hand Detection Flowchart. The flow proceeds through the following stages:

Skin Color Training: with images of various lighting
Calibration: learn lighting and coloring
Hand Location: create bounding box from skin color
Finger Region Detection: find connected regions (fingers/palm)
Pattern Matching: compare regions with gesture patterns
Gesture Determination: using surrounding frames
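As an illustrative sketch only (the skin-color thresholds are rough textbook values, not the project's calibrated ones), the Hand Location stage could derive a bounding box from skin-colored pixels like this:

Program that finds a hand bounding box [VB.NET]

Imports System.Drawing

Module Module1

    ' Very rough RGB skin test; a real system calibrates this per lighting.
    Function IsSkin(ByVal c As Color) As Boolean
        Return c.R > 95 AndAlso c.G > 40 AndAlso c.B > 20 AndAlso
               c.R > c.G AndAlso c.R > c.B
    End Function

    ' Bounding box around every skin-colored pixel in the frame.
    Function HandBox(ByVal frame As Bitmap) As Rectangle
        Dim minX As Integer = frame.Width
        Dim minY As Integer = frame.Height
        Dim maxX As Integer = -1
        Dim maxY As Integer = -1
        For y As Integer = 0 To frame.Height - 1
            For x As Integer = 0 To frame.Width - 1
                If IsSkin(frame.GetPixel(x, y)) Then
                    If x < minX Then minX = x
                    If x > maxX Then maxX = x
                    If y < minY Then minY = y
                    If y > maxY Then maxY = y
                End If
            Next
        Next
        If maxX < 0 Then Return Rectangle.Empty
        Return Rectangle.FromLTRB(minX, minY, maxX + 1, maxY + 1)
    End Function

End Module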


    3.5 Block Diagram

    Figure 6: Block Diagram

This simple diagram explains the classification of the image, i.e. how the project distinguishes between the person in front and the background, and how it extracts the person's posture from the image while keeping the background separate, treating it as waste data.


    3.6 Hardware Requirements

    Minimum Hardware Requirements

PROCESSOR : Pentium III 866 MHz
RAM : 128 MB SDRAM
MONITOR : 15" Color
HARD DISK : 20 GB SATA HDD

    Recommended Hardware Requirements

PROCESSOR : Core 2 Duo
RAM : 4 GB or more
MONITOR : Any LCD or LED color monitor of any size
HARD DISK : 160 GB SATA HDD or more


    3.7 Software Requirements

    Minimum Software Requirements

OPERATING SYSTEM : Windows XP Professional
ENVIRONMENT : .NET IDE

    Recommended Software Requirements

OPERATING SYSTEM : Windows XP / Windows 7 / Windows Vista
ENVIRONMENT : Any IDE


    3.8 Conclusion

    Although several research efforts have been referenced in this chapter, these are just a sampling; many more have been omitted for the sake of brevity. Good sources for much of the work in gesture recognition can be found in the proceedings of the Gesture Workshops and the International Conference on Automatic Face and Gesture Recognition.

    There is still much to be done before gestural interfaces, which track and recognize human activities, can become pervasive and cost-effective for the masses. However, much progress has been made in the past decade and with the continuing march towards computers and sensors that are faster, smaller, and more ubiquitous, there is cause for optimism. As PDAs and pen-based computing continue to proliferate, pen-based 2D gestures should become more common, and some of the technology will transfer to 3D hand, head, and body gestural interfaces. Similarly, technology developed in surveillance and security areas will also find uses in gesture recognition for virtual environments.

Thus we have studied the analysis of the image retrieval system, and we have learnt how data flows in the program and how to show it in diagrammatic form.

We also now know the flowchart of our system and understand it in brief.

We also learned the different states and uses of our system through the use case diagram.


CHAPTER FOUR

    IMPLEMENTATION


    4.1 INTRODUCTION

    In this chapter we firstly discuss how the database connection of our software is carried out.

After the development and testing have been completed, implementation of the information system begins. This phase is required to complete the system development life cycle. Implementation is carried out while developing the software. The implementation phase includes the hardware and software requirements according to the site selected.

During user training we have to motivate the user to become familiar with the system. Implementation also includes testing of the software in all phases till the final product is ready. This chapter also displays snapshots of the GUI, showing how the software actually looks and works as a system.

Clients who do not have the Microsoft .NET Framework installed on their machines are also provided with the installation process in this chapter. Implementation is the procedure of bringing the software into existence after the software's specification, design and requirements have been properly and accurately defined. Presented here are the coding process, the implementation of our system, and the snapshots, so that the steps can be understood from the very start to the end.

    We created the GUI as simple as possible so that it is easy for user to operate it.

As per the testing phase, we tried deploying our files onto all the personal computers of the project members, as well as on some computers of our institute and wherever else we could, to find the bugs and fix them.


    4.2 Database Connectivity

    1. Video Database

Video Data Bank (VDB) is an international video art distribution organization and a resource in the United States for videos by and about contemporary artists. Located in Chicago, Illinois, VDB was founded at the School of the Art Institute of Chicago in 1976 at the inception of the media arts movement.

VDB provides experimental video art, documentaries made by artists, and taped interviews with visual artists and critics for a wide range of audiences. These include micro-cinemas, film festivals, media arts centers, universities, libraries, museums, community-based workshops, public television, and cable TV public-access television centers. Video Data Bank currently holds over 2,000 titles in distribution, by more than 400 artists, available in a variety of screening and archival video formats. It also actively publishes anthologies and curated programs of video art.

    The preservation of historic video is an ongoing project of the Video Data Bank. The total holdings, including works both in and out of distribution, include over 5,000 titles of original and in some cases, rarely seen, video art and documentaries from the late 1960s on. The VDB functions as a Department of the Art Institute of Chicago and is supported in part by awards from the National Endowment for the Arts and the Illinois Arts Council.


    4.3 PHASES OF IMPLEMENTATION

4.3.1. PHASE 1 OF IMPLEMENTATION:

Repetition testing involves doing the same operation over and over. This could be as simple as starting up and shutting down the program repeatedly. It could also mean repeatedly saving and loading data or repeatedly selecting the same operation. You might find a bug after only a couple of repetitions, or it might take thousands of attempts to reveal a problem. The main reason for doing repetition testing is to look for memory leaks. A common software problem happens when computer memory is allocated to perform a certain operation but isn't completely freed when the operation completes. The result is that eventually the program uses up memory that it depends on to work reliably. If you've ever used a program that works fine when you first start it up, but then becomes slower and slower or starts to behave erratically over time, it's likely due to a memory leak bug. Repetition testing will flush these problems out.

Stress testing is running the software under less-than-ideal conditions: low memory, low disk space, slow CPUs, slow modems, and so on. Look at your software and determine what external resources and dependencies it has; stress testing simply limits them to their bare minimum. Your goal is to starve the software. It is a type of boundary-condition testing.

    Load testing is the opposite of stress testing. With stress testing, you starve the software; with load testing, you feed it all that it can handle. Operate the software with the largest possible data files. If the software operates on peripherals such as printers or communications ports, connect as many as you can. If you're testing an Internet server that can handle thousands of simultaneous connections, do it. Max out the software's capabilities. Load it down.


4.3.2. PHASE 2 OF IMPLEMENTATION:

    Preparing IR report:

    The IR report is used to supply information about the outcomes and success of the project.

    Step 1 of IR:

    Project Information:

    Project Id: CM11P07

    Project Name: HAND GESTURE RECOGNITION MOVEMENT

    Step 2 of IR:

    Summary of Project:

In today's world, major programs are made in .NET due to its portability, so this tool provides simplicity in the execution of modules and is very useful. Editors are therefore needed to simplify the work of programmers, and many problems are solved with them.

    Problems such as:-

1. Time to create a system
2. Time to test the system
3. Checking the system for errors
4. Rectifying the error
5. Replacing some faulty component

These problems were very large and hence needed to be resolved. The best way to resolve the above problems is a simplified version of an editor.

    Objectives of our solution:

1. An easy GUI to create a digital design
2. A way to pack a created design and reuse it in any application
3. To create a distributed system
4. To provide a built-in tutor
5. Easily upgradable
6. Tested software which provides accurate outputs
7. Providing handy tools for designing circuits


    Step 3 of IR:

    Outcomes of Key Project Area:

ANALYSIS          EFFORT      COST
Man Hours         250 days    RS 100 per day x 250 = RS 25,000
Documentation     -           RS 2,000
Miscellaneous     -           RS 4,000
TOTAL             -           RS 31,000


4.3.3. PHASE 3 OF POST-IMPLEMENTATION:

    TIME FEASIBILITY:

PROCEDURE            DURATION
Analysis             1 1/2 months
Design               2 months
Development Phase    1 month
Testing Phase        2 months
Documentation        1 1/2 months

This is the complete time period taken by all the phases of our project, and thus it is now ready to be deployed.

    DEPLOYING OUR PROJECT:

As our software was created only as a final year project, we did not need an entire complex executable installer; hence our project was deployed as a Microsoft .NET solution file, which can be executed on any machine where the .NET Framework is present.

    As per the testing phase we tried deploying our files to all the project members as well as on some computers of our college.


4.3.4. PHASE 4 OF IMPLEMENTATION:

    MAINTENANCE:

The software will definitely undergo change once it is delivered to the customer. There can be many reasons for this change to occur. Change could happen because of unexpected input values into the system. In addition, changes in the system could directly affect the software's operation. Thus, we have created our software so that it can handle external changes. This is achieved in our project with the help of .NET, which is independent of the underlying platform.

We have also provided our software group's e-mail ID, so that if the user finds any bugs or wants support regarding the working of the product, he/she can contact us via e-mail or call the helpline number provided in the About Us section of the home screen GUI of the product.

A Help file is also provided with the software product; it offers offline help to the end user if he/she has any problem with, or does not know, the working of a feature or of the product as a whole.


    4.4 Snap Shots

    1. GUI of Project


2. When we select the File menu we can choose from 3 options: Open Local Camera, Open Video and Exit. We can also use shortcut keys for the local camera and video options.


3. When we select the Help menu we get an About Us option; the description of About Us is shown in the next snapshot.


4. When we click on About Us, it gives us the name of our project with the version number of the project and the developers of the project, along with an e-mail ID so that if you have any query you can mail us.


5. When we use a dynamic image, a description of the image is given stating the position of the hand.


6. When any video is being played, it shows where the position of the hand is:


    4.5 Conclusion

In this phase we implemented our software with all the requirements. We created the entire GUI as simply as possible so that it is easy for the user to operate. We used many colors on our screens to improve user interaction and make the software more attractive.

Later on we carried out the testing part, which comprised checking the program for correct results and freedom from errors. We also reviewed the testing techniques used in our project, which were as follows: performance testing, stress testing, and user training.

The post-implementation review was also carried out; it evaluated whether the whole of the project was running, and it also checked whether all the specified goals and specifications were achieved.


CHAPTER FIVE

    FUTURE WORK &

    REFERENCES


    5.1 FUTURE SCOPE

Improvements could be made on McConnell's idea of the orientation histogram. There are other approaches that could be used to perform classification with a neural network; for example, Euclidean distance is a straightforward approach.

The next improvement is making the system able to recognize more gesture signs. Since this system is only able to recognize 8 types of gesture signs, with a small modification to the code the system would be able to recognize more than 8 types.

Another improvement that could be made concerns the background of the images. Since the backgrounds of this data set were deliberately made black, a future improvement could be to develop an algorithm that ignores the background color, as the background color is static.

The last improvement is to change the image database into a real live video database. Real live video input could directly recognize the gesture sign without having to take a picture.


    5.2 REFERENCES

    5.2.1 Books & Journals

    1. Gallaudet University Press (2004). 1,000 Signs of life: basic ASL for everyday conversation. Gallaudet University Press.

    2. Don Harris, Constantine Stephanidis, Julie A Jacko (2003). Human Computer Interaction: Theory and Practice. Lawrence Erlbaum Associates.

3. Hans-Jörg Bullinger (1998). Human-Computer Interaction.

4. Kim Daijin (2009). Automated Face Analysis: Emerging Technologies and Research. Idea Group Inc. (IGI).

5. C. Nugent and J.C. Augusto (2006). Smart Homes and Beyond. IOS Press. See also Minsky and Papert's book, Perceptrons.

    6. Allianna J. Maren, Craig T. Harston. Robert M. Pap (1990). Handbook of Neural Computing Applications. Michigan: Academic Press.

    7. Sergios Theodoridis, Konstantinos Koutroumbas (2006). Pattern Recognition. 3rd edition. Academic Press.

8. Klimis Symeonidis (2000). Hand Gesture Recognition using Neural Networks. University of Surrey.

9. D. J. Jobson, Z. Rahman, and G. A. Woodell, "A multi-scale retinex for bridging the gap between color images and the human observation of scenes," IEEE Trans. Image Process., vol. 6, no. 7, pp. 965-976, Jul. 1997.

10. D. J. Jobson, Z. Rahman, and G. A. Woodell, "Properties and performance of a center/surround retinex," IEEE Trans. Image Process., vol. 6, no. 3, pp. 451-462, Mar. 1997.


11. J. S. Lim, Two-Dimensional Signal and Image Processing. Englewood Cliffs, NJ: Prentice-Hall, 1990.

12. R. C. Gonzalez and R. E. Woods, Digital Image Processing. Reading, MA: Addison-Wesley, 1992.

13. The Colour Image Processing Handbook, S. J. Sangwine and R. E. N. Horne, Eds. London, U.K.: Chapman & Hall, 1998.

14. S. Wolf, R. Ginosar, and Y. Zeevi, "Spatio-chromatic image enhancement based on a model of human visual information system," J. Vis. Commun. Image Represent., vol. 9, no. 1, pp. 25-37, Mar. 1998.

15. A. F. Lehar and R. J. Stevens, "High-speed manipulation of the color chromaticity of digital images," IEEE Trans. Comput. Graph. Animation, pp. 34-39, 1984.

16. S.-S. Kuo and M. V. Ranganath, "Real time image enhancement for both text and color photo images," in Proc. Int. Conf. Image Processing, Washington, DC, Oct. 23-26, 1995, vol. I, pp. 159-162.

17. Bockstein, "Color equalization method and its application to color image processing," J. Opt. Soc. Amer. A, vol. 3, no. 5, pp. 735-737, May 1986.


5.2.2 Websites

    1. http://www.mathworks.com/access/helpdesk/help/toolbox/nnet/getting2.html#30526 (1/5/2009)