Chapter II - Shodhganga (shodhganga.inflibnet.ac.in/bitstream/10603/88396/11/11...) · 2018. 7. 4.




Computer Recognition of Indian Sign Language 12

Chapter II: Literature Review

The research works conducted by various authors to date at the international and national levels are discussed in this chapter. A central question arises: are the signs of ISL the same as the signs of international sign languages? To answer this question, the variations among international sign languages, and between them and ISL, are examined. These variations indicate that new scope for research is available in ISL recognition. Because adequate research exists on international sign languages, a detailed summary of the work conducted by various researchers at the international level is presented in this chapter. At the national level, only a limited number of studies have been conducted. In the later part of this chapter, a brief summary of ISL computerisation is also given. This summary is helpful for carrying out research on the automation of ISL.

2.1 International Sign Languages

The spoken and written language of one country differs from that of another. Even when the same language is used by a number of countries, its syntax and semantics depend on the country or region. For example, English is the official language of the UK, the USA and many other nations, yet its usage differs at the country level. Similarly, the sign language of one country is not the same as that of another. The focus of this study is on the development of sign languages at the international level.

As discussed in chapter I, the development of sign language has varied over time in each country. Table 2.1 lists some of the important international sign languages. Since the focus of this chapter is a literature survey in the field of sign language recognition, only brief discussions of the linguistic characteristics of BSL (British SL), ASL


(American SL), Auslan (Australian SL), JSL (Japanese SL), CSL (Chinese SL)

and ArSL (Arabic SL) are given in this chapter.

Table 2.1: Major International Sign Languages

S. No. | Country or Sub-continent | Sign Language | Abbn.
1 | United Kingdom | British Sign Language | BSL
2 | United States of America | American Sign Language | ASL
3 | Commonwealth of Australia | Australian Sign Language | Auslan
4 | Japan | Japanese Sign Language | JSL
5 | People's Republic of China | Chinese Sign Language | CSL
  | Taiwan | Taiwanese Sign Language | TSL
6 | Middle-East | Arabic Sign Language | ArSL
  | Islamic Republic of Iran and other Gulf countries | Persian Sign Language | PSL
7 | Republic of India | Indian Sign Language | ISL
8 | Socialist Republic of Vietnam | Vietnamese Sign Language | VSL
9 | Ukraine | Ukrainian Sign Language | UKL
10 | Democratic Socialist Republic of Sri Lanka | Sri Lankan Sign Language | SLTSL
11 | Federative Republic of Brazil | Brazilian Sign Language (Lingua Brasileira de Sinais) | Libras
12 | Republic of Poland (Rzeczpospolita Polska) | Polish Sign Language (Polski Jezyk Migowy) | PJM
13 | The Netherlands (Nederland) | Nederlandse Gebarentaal or Sign Language of the Netherlands | NGT/SLN

2.1.1 British Sign Language

BSL [1] has a long history. The first unofficial recorded community program was organized in the sixteenth century, according to British history. The


BSL emerged as a standard sign language during the eighteenth and nineteenth centuries, and several sign languages, notably Auslan, are derived from it. The year-wise progress of BSL is given in table 2.2.

Table 2.2: Development of BSL

S. No. | Year | Activity
1 | 1720 | Formal documentation of the BSL alphabet by Daniel Defoe. The method of expressing the alphabet is still in use.
2 | 1755 | The first public deaf school was established by Charles-Michel de l'Épée, who came to be known as the "Father of the Deaf" after his death.
3 | 1760 | In Edinburgh, Thomas Braidwood founded a deaf school.
4 | 1783 | Thomas Braidwood founded the famous Braidwood Academy in London for deaf and dumb people. Joseph Watson, after graduating from Braidwood's school, established a deaf school. The first recorded deaf barrister was John Lowe, one of Joseph Watson's famous graduate students.
5 | 1860 onwards | Oral schools were established, and many of them adopted the oral approach as the only method of teaching.
6 | 1974 | BSL was announced as the official sign language of the UK.
7 | 2003 | The British Government acknowledged BSL as an official sign language.

2.1.2 American Sign Language

ASL was legalized as a sign language in the year 1980 [2], and bodies such as the United Nations (on human-rights grounds) and the World Federation of the Deaf support its use. A substantial population in the USA is deaf, so a


large number of universities and institutes have worked on the automation of ASL interpretation.

2.1.3 Australian Sign Language

The sign language of Australia is known as Auslan [3-4]. It was brought to Australia during the 19th century from Ireland and Britain; Auslan is derived from BSL, and deaf immigrants such as John Carmichael carried its signs to Australia. In the 19th century deaf schools were established across the country, and Frederick Rose founded the Melbourne Deaf School. At that time most deaf schools were residential.

Modern Auslan differs from old Auslan in its finger spellings. A number of sign language services are now available in Australia: secondary and tertiary education, sign language interpreting and administration-related services, and medical and legal services, which benefit both deaf people and interpreters. These demands have produced the following responses.

(i) Attempts to regulate usage

(ii) The growth of new signs to cater to new needs

(iii) The borrowing of signs from various sign languages.

2.1.4 Japanese Sign Language

The sign language used by deaf communities in Japan is Japanese Sign Language (JSL) [5], which belongs to a family of multifaceted three-dimensional visual languages. No standard JSL exists in Japan to date; community programs in Tokyo are conducted in the Tokyo version of JSL. Some JSL signs were adopted by the Korean and Taiwanese sign languages before World War II.

JSL is a "younger" sign language compared to many others. The first deaf school was established at Kyoto in 1878, and no valid proof has been found of the use of sign language before 1878. The recent form of


finger spelling was introduced in the 20th century; the finger spelling used in JSL is based on Spanish finger spelling. After World War II, mandatory education for the deaf was enforced by the government.

2.1.5 Chinese Sign Language

The first oral-based deaf school in China was established by an American missionary; however, ASL has had no major impact on CSL [6-10]. CSL was standardized in the year 1950. It has several variants, including Shanghai, TSL, Hong Kong SL and Tibetan SL. The Shanghai form of CSL is used in Malaysia and Taiwan.

According to some reports, China has 21 million deaf individuals [6]. Despite this, CSL was banned in most parts of the country, and an oral-only education policy was adopted instead. In the 1980s some organizations ran a number of hearing rehabilitation centres to aid deaf people. According to a survey [9], only about 10% of deaf children are able to obtain admission to formal deaf schools.

2.1.6 Arabic Sign Language

A large number of individuals with hearing defects live in the Arab world [11]. Education and training are most challenging for this deaf population, which has little scope for education because of the lack of specialized deaf schools. At an early age, both hearing and deaf children are capable of language learning, but deaf children lag behind owing to the absence of role models from whom they can learn sign language. For many Arab deaf people, the Arabic spoken/written language is secondary, and they also do not participate in the interactive deaf educational programs that would help them learn ArSL.

2.2 Indian Sign Language

The Indian Sign Language (ISL) is a recognized sign language in India. Its finger spelling system is mostly influenced by BSL; otherwise it is not influenced by European sign languages. Barely 5% of deaf people are reported to have attended deaf schools [12-15]. No formal ISL education was available prior to


1926, as stated by Banerjee [16], who concluded that different sign systems were followed in different schools. A number of schools were then established to educate deaf people, and a few of them use ISL as the medium of instruction. In these schools, effective and proper audio-visual supports are not available, and the use of ISL is limited to short-term and vocational programs. Madan M Vasistha surveyed more than a hundred deaf schools in 1975 and concluded that no standard ISL was used in them, only some kind of gestures. Twenty years later it was reported that the sign systems used in these schools were based on spoken English or on regional spoken languages, or that the schools were simply unable to provide a sign for every word; teaching in these schools relies on manual communication. However, it was later agreed that ISL is a language with its own semantics and syntax.

2.3 Variations in Indian and International SL

The research proposed in this thesis is limited to digits, single- and double-handed alphabets, and a limited number of words. The differences between ISL and the major sign languages used worldwide (BSL, ASL, LSF and Auslan) are discussed in this section. They give a clear picture that ISL is different from other sign languages, which indicates that research in computer recognition of ISL is viable.

From table 2.3 (a summary of figures 2.1-2.17) it is clear that ISL sign gestures differ from those of other sign languages, except for the single-handed alphabet signs. At the digit level, the ISL signs for 6-9 differ from those of BSL, ASL, LSF and Auslan. At the single-handed alphabet level, the 26 character signs of ISL are not identical to those of other languages but are very similar in shape. At the double-handed alphabet level, eight characters, as indicated in table 2.3, are completely different and the other 18 have similar shapes. The word signs of ISL depend on object characteristics, as in LSF, but owing to differences in geographical region and periods of civilization, they are clearly different from those of other sign languages. The distinction of ISL from the other leading sign languages of the world therefore demands a separate automatic recognition system for ISL. In view of the above facts, and in order to help the deaf people of India, this


research has been initiated.

Table 2.3: Differences between Sign Languages around the World

Domain: Digit
  BSL: Same as ASL.
  ASL: Same as BSL.
  LSF: Same as ASL.
  Auslan: 0-5 are same as BSL, but 6-9 are different from BSL.
  ISL: 0-5 are same as BSL, but 6-9 are different.

Domain: Single Handed Alphabet
  BSL: Same as ASL.
  ASL: Same as BSL.
  LSF: J, P, Y and Z are continuous signs; others are same as ASL.
  Auslan: Same as ASL.
  ISL: Same as BSL and ASL.

Domain: Double Handed Alphabet
  BSL: Same as ASL.
  ASL: Same as BSL.
  LSF: ?
  Auslan: Same as BSL.
  ISL: A, B, E, J, T, U, V, W are different; others are same as BSL.

Domain: Word
  BSL: Dominated by the first one-handed character followed by features of the object; different from other sign languages.
  ASL: Dominated by object features; different from other sign languages.
  LSF: Largely dominated by object features; different from other SL.
  Auslan: Dominated by the first single-handed character and features of the object; different from other SL.
  ISL: Largely dominated by object features; different from other SL.

Digit Signs of Sign Languages around the World

Figure 2.1: ASL/BSL Digit Signs


Figure 2.2: Auslan Digit Signs

Figure 2.3: LSF Digit Signs

Figure 2.4: ISL Digit Signs

Single Handed Alphabet Signs around the World

Figure 2.5: BSL Single Handed Alphabet Signs


Figure 2.6: ASL Single Handed Alphabet Signs

Figure 2.7: LSF Single Handed Alphabet Signs

Figure 2.8: Auslan Single Handed Alphabet Signs


Figure 2.9: ISL Single Handed Alphabet Signs

Double Handed Alphabet Signs around the World

Figure 2.10: ASL/BSL Double Handed Alphabet Signs

Figure 2.11: Auslan Double Handed Alphabet Signs


Figure 2.12: ISL Double Handed Alphabet Signs

The Word 'Computer' in Different Sign Languages

Figure 2.13: Frames of ASL 'Computer' Sign

Figure 2.14: Frames of BSL 'Computer' Sign

Figure 2.15: Frames of LSF 'Computer' Sign


Figure 2.16: Frames of Auslan 'Computer' Sign

Figure 2.17: Frames of ISL 'Computer' Sign

2.4 Outline of the Literature Review

As discussed in chapter I, sign language gestures differ for each country and also depend on the verbal language used in that country, and automatic recognition of a sign language depends on its gestures. The current status of research in the automation of sign languages varies between countries. The developed countries have automatic recognition systems to aid deaf and hard-of-hearing people, and such systems have been developed and installed at various public places. In developing countries, however, research is still at an early stage. The research facilities and data sets of the developed countries are available in the public domain, whereas the corresponding facilities in developing and under-developed countries are still being built. In order to develop a sign language recognition system, it is necessary to learn the current status of research at the international and national levels. The study of automatic recognition of ISL is not at par with that of international sign languages because only limited research has been conducted to date. Later in this chapter, a brief description of ISL computerization is presented. The literature survey on international sign languages is organized as follows.

(i) The domain of the sign languages used

(ii) Data acquisition methods employed

(iii) Data transformation techniques applied


(iv) Feature extraction methods utilized

(v) Selection of classification techniques

(vi) Results and discussions on existing research

2.4.1 Research Domain of Sign Languages

Sign language is completely different from spoken and written languages. Like spoken languages, it has its own set of digits, alphabets, words, phrases and sentences; the fundamental difference is that it has a limited vocabulary compared to other kinds of languages. In most developing and under-developed countries sign language is in its initial phase, and its development into an independent language will take many years. Nevertheless, automatic recognition of the sign languages of these countries has been started, and substantial works are reported.

A sign language has a character set similar to that of the written/spoken language; in the case of BSL or ASL the character set is A to Z. Likewise, the digits 0 to 9 are used in every sign language [17-20]. Secondly, the words and phrases of a sign language are defined within a particular domain: when building a recognition system, a set of words or phrases from a domain such as railways, banking, public telephones, or general conversation at public places is taken into account. Thirdly, groups of sign gestures for simple sentences or phrases are used in sign language recognition systems. Some of the identified domains are shown in table 2.4. Restricting the domain matters because the computational complexity of a recognition system increases with the size of the input vocabulary of the language.
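As a toy illustration of this scaling (a sketch, not taken from any of the surveyed systems; all names and feature vectors here are hypothetical), a nearest-template classifier must compare the input against every vocabulary entry, so the cost of one prediction grows linearly with the vocabulary size:

```python
import numpy as np

def nearest_template(feature, templates):
    """Classify a feature vector by the closest stored template.

    `templates` maps each sign label to a prototype feature vector.
    Every query scans the whole vocabulary, so the cost per
    prediction is O(V) for V vocabulary entries.
    """
    best_label, best_dist = None, float("inf")
    for label, proto in templates.items():        # one comparison per sign
        dist = np.linalg.norm(feature - proto)
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label

# Toy vocabulary of three signs, each a 4-D feature prototype.
templates = {
    "ONE":   np.array([1.0, 0.0, 0.0, 0.0]),
    "TWO":   np.array([0.0, 1.0, 0.0, 0.0]),
    "THREE": np.array([0.0, 0.0, 1.0, 0.0]),
}

query = np.array([0.9, 0.1, 0.0, 0.0])
print(nearest_template(query, templates))  # closest prototype is "ONE"
```

Doubling the vocabulary doubles the comparisons per query, which is why the works surveyed here restrict themselves to a fixed domain.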

In the proposed ISL recognition system, the alphabet set A-Z, the digit set 0-9 and a limited number of ISL computer-terminology words are chosen.


Table 2.4: Examples of Sign Language Domains

Sl. No. | SL | Domain | References
1 | CSL | Why?, What for?, How much?, Award. | [21]
2 | JSL | Coin, Cigarette, Flower, Reluctantly, Row, Take, Immediately, Understand, Hate, Left, Seven, Moon, Eight, Walk, Conscience. | [22]
3 | NGT | Friend, To Eat, Neighbour, To sleep, Guest, To Drink, Gift, To wake up, Enemy, To listen, Peace upon you, To stop talking, Welcome, To smell, Thank you, To help, Come in, Yesterday, Shame, To go, House, To come and I/me. | [23]

Availability of Standard Data Sets

The standard data sets used by different researchers are available to the public through online repositories. The library data sets used by various researchers for the automatic recognition of international sign languages are shown in table 2.5. Five of the six data sets are on ASL; one is on PJM and is available for public use. These data sets possess the strengths necessary for carrying out research, namely content, criteria, construct, reliability, sensitivity, appropriateness, objectivity, practicability, economy and interest. These data sets are described in detail below. Some of them are also helpful to general users for learning to interact with their children or parents who are deaf or mute.

Table 2.5: Standard International Sign Language Data Sets

Sl. No. | Library Data Set | Sign Language
1 | Lifeprint Fingerspell Library | ASL
2 | American Sign Language Linguistic Research Project, with transcription using SignStream | ASL
3 | ASL Lexicon Video Data Set | ASL
4 | eNTERFACE | ASL
5 | PETS 2002 | PJM
6 | RWTH-BOSTON-104 Database | ASL


LifePrint Fingerspell Library

The American Sign Language University has provided online sign language instruction since 1997 [24, 25]. The program is intended for parents and relatives of deaf children living in rural areas, where access to sign language programs is limited. Technical details regarding acquisition of the data set are not available; however, it is a rich library with all types of data, ranging from the static alphabet to complex phrases, including medical terms and advanced phrases.

SignStream

SignStream [26-29] is a multimedia database tool available on a non-profit

basis. It offers an environment in which one can view, annotate and analyse digital video. It also offers on-screen access to video and audio files and provides very accurate and detailed annotation, which can ease the process of linguistic research on sign languages. The database includes data from different domains, involving annotation and analysis of digital video.

The following points were considered in the acquisition process of the SignStream database:

(i) A set of synchronized digital cameras was used to acquire simultaneous digital video streams at 85 fps.
(ii) Four PCs were used, each configured with a 500-MHz Pentium III processor, 256 MB RAM and 64 GB of HDD storage capacity.
(iii) Four Kodak ES-310 digital video cameras were connected to the four PCs.
(iv) A set of BitFlow Road Runner video capture cards was used for capturing the data set.
(v) An Ethernet switch connected the four PCs so that they could communicate with each other and synchronize the video captured across the four cameras.
(vi) Various illumination sources, dark backgrounds and chairs for the subjects were used in capturing the videos.
(vii) One PC was designated as the server and the other PCs acted as clients; the appropriate program was executed on the server PC and the corresponding client programs ran on the client PCs.
(viii) On all four PCs, video with an image resolution of 648×484 was captured simultaneously.

All cameras were focused towards the signer.

eNTERFACE

This data set was created for ASL with the help of a single web camera with a resolution of 640×480. Eight signers contributed, with five repetitions of each sign [30-31]. The data set is divided into a training set of 532 samples (28 per sign) and a testing set of 228 samples (12 per sign). Seven-fold cross-validation techniques are applied to the training as well as the testing sets. Manual and non-manual features

are extracted from these sets. Hand motion analysis is carried out with the help of the centre of mass and a Kalman filter. Feature vectors are extracted using appearance-based shape features, including the parameters of an ellipse fitted to the binary hand data within a rectangular mask located over the hand.
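The centre-of-mass tracking step described above can be sketched as follows. This is a minimal illustration rather than the authors' implementation: it assumes a binary hand mask is available per frame and smooths the measured centroid with a constant-velocity Kalman filter.

```python
import numpy as np

def centroid(mask):
    """Centre of mass (row, col) of a binary hand mask."""
    ys, xs = np.nonzero(mask)
    return np.array([ys.mean(), xs.mean()])

class ConstantVelocityKF:
    """Minimal constant-velocity Kalman filter for a 2-D point.

    State x = [row, col, v_row, v_col]; only the position is measured.
    """
    def __init__(self, q=1e-2, r=1.0):
        self.x = np.zeros(4)
        self.P = np.eye(4) * 10.0
        self.F = np.eye(4)
        self.F[0, 2] = self.F[1, 3] = 1.0    # position += velocity
        self.H = np.eye(2, 4)                # only position is observed
        self.Q = np.eye(4) * q
        self.R = np.eye(2) * r

    def step(self, z):
        # Predict the next state from the motion model.
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        # Update with the measured centroid z.
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (z - self.H @ self.x)
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x[:2]                    # smoothed position estimate

# Track a small square "hand" drifting one pixel right per frame.
kf = ConstantVelocityKF()
for t in range(10):
    mask = np.zeros((32, 32), dtype=bool)
    mask[10:14, 5 + t:9 + t] = True          # 4x4 blob moving right
    est = kf.step(centroid(mask))

print(np.round(est, 1))  # close to the true centroid [11.5, 15.5]
```

The filter both smooths the noisy per-frame centroid and supplies the velocity estimate that hand-motion features can be built from.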

The system was able to detect head motions (rotations and nods) for head motion analysis [32].

ASL Lexicon Video Data Set

The ASL Lexicon Video Data Set [33-34] is a large-scale video data set containing a large number of different sign classes. The data set is

useful for human activity analysis and sign language recognition systems. The authors claim that it is a public benchmark data set suitable for evaluating various techniques. The data set has also been used in a computer vision system capable of extracting the meaning of an ASL sign automatically, and it can serve a large variety of machine learning, computer vision and database-indexing algorithms.

Four cameras are used to capture video data from four different views


namely two frontal views, a side view and a zoomed view of the face of the subject signer. For the frontal views, videos are captured at 30 fps with frames of 640×480 pixels; for the side view, videos are captured at 60 fps with frames of 640×480 pixels.

In their experiments the authors applied a motion energy technique with a training set of 999 video sequences and a testing set of 206 video sequences. The test and training samples are performed by different signers, making the experiments user independent.

PETS 2002 Data Set

The features of this database on PJM [35-37] are as under.

(i) One thousand colour images were captured.
(ii) Twelve hand postures were captured from nineteen signers.
(iii) Simple and complex backgrounds were designed for capturing the database.
(iv) The sign images are interpreted as graphs.
(v) A lot of manual work is involved in placing the vertices of the graphs on the sign images.
(vi) The experimental results are very accurate owing to this manual work.

The RWTH-BOSTON-104 Database

This database was created from material recorded by the National Center for Sign Language and Gesture Resources at Boston University on ASL sentences [38-41]. Four

standard cameras were used for capture, three of which record grayscale video sequences; the colour camera captures the facial expressions of the signer. The ASL sentence database consists of 201 annotated video streams. The published video sequences run at 30 fps with a frame size of 366×312 pixels. The RWTH-BOSTON-104 video database is pre-divided into a training set of 161 ASL sentences and a testing set of 40 sentences.


Creation of Unstructured Data Sets

Unstructured data sets have been developed by a large number of researchers for their own work. These data sets can be classified into digit sets, alphabet sets and simple or complex phrases; the choice of sign language is the researcher's own decision. Characteristics of such data sets are described in table 2.6.

Table-2.6: Unstructured Data Sets on Sign Language Recognition

SL | Description | Example set | Ref.
ASL | ASL alphabet, single digits and simple words. | 3, 5, 7, love, meet, more. | [24]
CSL | The Chinese SL. | A-Z, ZH, CH, SH, NG | [20-21]
VSL | The Latin-based Vietnamese alphabet. | A, B, C, D, Ð, E, G, H, I, K, L, M, N, O, P, Q, R, S, T, U, V, X, Y. | [42]
ASL | 26 alphabet signs. | | [43]
UKL | 85 gestures. | Why?, Award, What for?, How much?, etc. | [44]
ASL | Static and dynamic alphabet sequences. | A name JOHN (J-O-H-N) | [45]
ASL | Alphabet. | | [46, 49]
TSL | 15 different gestures from TSL. | reluctantly, row, take, immediately, understand, hate, left, seven, moon, eight, … | [50]


Table-2.6 (continued)

SL | Description | Example set | Ref.
NGT | 23 gestured words/phrases. | To sleep, Guest, To Drink, Gift, To wake up, Enemy, To listen, Peace upon you, To stop talking, Welcome, To smell, Thank you, To help, Come in, Yesterday, Shame, To go, House, To come and I/me. | [23]
ArSL | 900 images of 30 hand gestures. | tha, gayn, jim, fa, ha, qaf, kha, kaf, dal, lam, thal, mim, ra, nun, za, he, sin, waw, … | [48]
SLTSL | Sri Lankan Tamil SL. | … consonants, four Grantha consonants and one special … | [51]
PSL | Eight signs from PSL. | bowl, permission, child, date, stop, sentence, … | [52]
ASL | ASL dynamic gestures (SemiosiS). | Afternoon, Good Morning, Wife, Daughter, Sister, Increase, God, Jesus, Birthday, Elementary, … | [53]
Libras | Brazilian word signs. | | [54]
PSL | 32 PSL alphabets. | | [17]
ASL | Digits and alphabets. | A-Z, 0-9 | [56]


Table-2.6 (continued)

SL | Description | Example set | Ref.
PJM | 48 PJM signs. | 5 cardinal numbers: 1, 2, 4, 5 and 100; 7 International Sign Language postures: Bm, Cm, …, Xm; 10 PSL signs: Aw, Bk, Bz, …, Yk; 4 number postures: 1z, 4z, 5s, 5z. | [35]
CSL | 30 CSL letters. | ZH, CH, SH, NG. | [55]
ASL | Words, tenses, suffixes and prefixes. | A, C, D, E, G, I, J, L, M, N, O, Q, S, T, X, Y, Z, SMALL-C, I-L-HAND, BENT-V, L-1 HAND; H, K, P, R, U, V; B, F, W, BENT, FLAT. | [57]

2.4.2 Data Acquisition Methods Employed

A video or still camera is used in the development of standard sign language data sets. Researchers are cautious about lighting and illumination, the choice of background, and the clothing and spectacles of the subject signers when capturing data. In capturing unconstrained data sets, researchers follow the same conditions, but not all of them. Specially designed input devices such as the CyberGlove [57], although expensive, are also used in some of the experiments. By and large, digital still cameras are used in most of the experiments.

Digital Still Camera

For data capturing, two digital cameras [18, 58] are used, with their optical axes directed at the

subject signer. The world coordinate system and space between camera

coordinate systems are kept parallel.

A single camera is used to capture the sign images, and a resolution of 80×64 pixels is adopted. A digital camera was used to capture 30 Chinese manual alphabets [20-21]. 195 samples of each Chinese manual alphabet


captured by the authors are used in various experiments. The hand gestures are captured through five different views for 26 letters in the experiments of [43]. The captured images are resized to 80×80 pixels, and a total of 130 signs are

stored in the data set for each letter.

In addition to a digital camera, a coloured glove is used to capture sign images [47]. The coloured glove helps in processing sign images in the HSI colour system. From 30 distinct hand gestures, an aggregate of 900 colour images is captured. An image segmentation process divides each image into six layers, including the wrist and the five fingertips.

A digital still camera with a uniform background is used in capturing the signs [52]. Five subject signers, with a mean age of 25 years, contributed to the data capture. 30 images per sign are captured for the experiment.

To capture a sign vocabulary containing 10 gestures with 3 connected digits of CSL, a digital camera is used [59]. The feature extraction process extracts features including the circumference, the area, and the lengths of the X and Y axes of an ellipse fitted to the gestural region, together with their derivatives. A total of 1200 samples are collected for the ten gestures and five subjects.

32 different one-handed signs from PSL are captured using a digital camera in the experiments of [17]. A uniform black background with varying hand positions and distances is maintained while the data sets are captured. 192 sign images are used for training and 224 sign images for testing in the proposed experiments.

Video Camera

A web camera is used to capture videos [44], and a number of reference points

are identified from face and hand areas of video frames. A handshape

comparison algorithm is implemented on a set of 240 sign image frames

captured from 12 signers. A skin colour detection algorithm is used to capture the facial expressions. The hand gestures are captured manually by using a video camera [46]. Image frames are extracted by using a camera sensor.
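A rule-based skin colour test of the kind used for such detection can be sketched as follows; the thresholds are illustrative values from the classical RGB rule, not the ones used in [44] or [46]:

```python
import numpy as np

def skin_mask(rgb):
    """Classic rule-based skin-colour detection in RGB (illustrative thresholds)."""
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    # A pixel is "skin" when it is sufficiently red and bright, with
    # enough separation between the red channel and the other channels.
    return ((r > 95) & (g > 40) & (b > 20)
            & (r - np.minimum(g, b) > 15)
            & (abs(r - g) > 15) & (r > g) & (r > b))

# One skin-like pixel and one blue pixel
pixels = np.array([[[220, 170, 140], [20, 20, 200]]], dtype=np.uint8)
mask = skin_mask(pixels)
```

The mask can then be used to isolate face and hand regions before tracking the reference points.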


The signers are asked to wear long-sleeved dark clothes with white gloves and to stand before dark background curtains under normal lighting conditions [50]. By using a video camera, 15 gestures of TSL are captured. 30 frames are extracted from these signs, which consist of the movement of a single hand or a time-varying hand shape. Four different kinds of hand movements

are available in the stored data set. In experiments, Fourier descriptors [60] and

Geometric Cosine Descriptor (GCD) are used as feature extraction techniques.

The sign frames are classified using a Euclidean distance classifier.

A dataset [47] containing ArSL words and phrases is captured by a digital camcorder from 3 signers. The words and phrases collected are frequently used among the deaf and hard-of-hearing community. 23 gestural signs with 150 repetitions are stored in the dataset from those 3 signers, without clothing restrictions. A web camera capable of capturing 15 fps is used in the experiments on SLTSL [51], as the average speed of finger spelling in Sri Lanka is 45 letters per minute.

A video camera is used in capturing signs in a real time recognition system

in an intelligent building with AdaBoost algorithm [61]. The system is capable

of extracting lip movements and facial expressions separately from the sign

gestures. The system developed is capable of translating signs into speech.

A charge-coupled device (CCD) camera is used to capture continuous signs [56]. A variant of the Hidden Markov Model (HMM), the two-dimensional HMM, is used for recognizing the captured gestures. No special gloves are required in the video capturing process.

The database [33] contains 921 unique ASL sign classes belonging to 20 different handshapes, and the experiments are performed in a user-independent mode. The testing and training samples collected are independent of each other. The test images are normalized and resized to 256×256 pixels, and the selected hand region is centred at 128×128 with a radius of 120 units. The experimental data contains a total of 80,640 sign images and is generated using the Poser5 software module.


Specially Designed Input Devices

A number of data acquisition devices are available to capture hand movements. The data captured through these devices are precise and accurate, but several factors such as cost, training and portability limit their usage. The following are some popular devices used in the data acquisition process.

(i) CyberGlove
(ii) Polhemus FASTRAK
(iii) Sensor Glove

CyberGlove

It consists of four abduction sensors, two bend sensors for the fingers and some

other kinds of sensors to determine palm arch, crossover of the thumb, wrist abduction and wrist flexion. The sensors present in the device are based on linear and resistive bend sensing technologies, which transform finger and hand configurations into joint-angle data digitized to 8 bits in real time. The data rate of this device is 112 samples per second, and the samples can be used as feature vectors for the description of handshapes.

Polhemus FASTRAK

This device [22, 62] provides 6 Degree-of-Freedom (6DOF) tracking in

real time with no latency. It can be used for head and hand tracking, and is useful in graphics and cursor control, instrument tracking for biomedical analysis, digitizing and pointing, tele-robotics, stereotaxic localization and other kinds of applications. Data accuracy and maximum reliability are features of this device, and it can be very useful in motion tracking systems.

It is designed to track the position (3D coordinates) and orientation (azimuth, elevation, roll) of a sensor as it moves through space. The near-zero latency of the device makes it ideal for virtual reality interfacing, simulators and other real-time applications. The data acquired from this device can be useful in computer graphics programs. It has USB and RS-232 ports, and can therefore be connected to any computer system easily.


Sensor Glove

The ADXL202 sensor glove [33] is a low-cost, low-power device consisting of complete two-axis accelerometers with a good measurement range, which are useful in acquiring sensor data. It is equipped to measure both static and dynamic acceleration. These are the salient features of the sensor glove device.

Surface micromachining technology is used to fabricate the accelerometer. The outputs produced by the sensor glove are digital signals whose duty cycles (pulse width/period) are proportional to the acceleration in each of the two axes. The adjustable bandwidth of the ADXL202 is in the range 0.01 Hz - 5 kHz. Data are collected as a series of pulses. The device is programmed in the BASIC language, and the data are then sent to a PC through a serial port.

2.4.3 Data Transformation Techniques Applied

An image can have several reference points, and these points are useful in image analysis. The position and movement of the hand are important keys to any classification algorithm. In the research proposed in [18, 58], the contour of the hand is chosen as a transformation method in sign language recognition. The centre of gravity of the hand is used to derive a feature vector for the classification algorithm: the distances from the centre of gravity are computed across images and used as a feature vector. A moving average filter is applied to smooth the distance vector and filter out the noise introduced.
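The smoothing of the distance vector can be sketched as follows; the window size of 3 is a hypothetical choice, as [18, 58] do not state the one they use:

```python
import numpy as np

def moving_average(signal, window=3):
    """Smooth a 1-D distance vector with a simple moving-average filter."""
    kernel = np.ones(window) / window
    # mode="same" keeps the output the same length as the input
    return np.convolve(signal, kernel, mode="same")

# A distance vector between hand centres of gravity with one noisy spike
distances = np.array([2.0, 2.1, 9.0, 2.2, 2.0, 2.3])
smoothed = moving_average(distances)
```

The spike at the third frame is pulled towards its neighbours, which is exactly the noise suppression the filter is used for.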

In the proposed transformation technique [17, 61], images in the RGB colour space are converted to grayscale, and the grayscale images are converted to binary images. Binary images are pure black-and-white images whose pixels are either 0 or 1; they can be produced from grayscale images by thresholding. The binary image is then used as input to the feature extraction process.
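A minimal sketch of this RGB-to-grayscale-to-binary pipeline, with an illustrative threshold of 128 (the papers do not state the threshold value they use):

```python
import numpy as np

def rgb_to_binary(rgb, threshold=128):
    """Convert an RGB image to grayscale, then binarize by thresholding."""
    # Luminosity weights for the RGB -> grayscale conversion
    gray = rgb[..., 0] * 0.299 + rgb[..., 1] * 0.587 + rgb[..., 2] * 0.114
    return (gray >= threshold).astype(np.uint8)   # pixels become 0 or 1

# Top half white, bottom half black
image = np.zeros((4, 4, 3), dtype=np.uint8)
image[:2] = 255
binary = rgb_to_binary(image)
```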

The transformation of video frames in [42] follows a sequence of steps:

Scaling down of the video frames.
Skin colour detection with the help of the inverse Phong reflection model.
Pixel averaging with rejection of minor objects.
Skin area clustering and label assignment.
Hand motion trajectory refinement using the closest neighbourhood model.

Colour-based segmentation [18] is used to extract hand positions. To model the geometry of the hand, its contour is estimated. The feature vectors contain the image contour represented by elliptical Fourier coefficients.

The important step in this transformation technique is the generation of the video object plane [45]. The inter-frame change is estimated using contour mapping together with the extraction of the video object plane.

The transformation technique proposed in [46] is based on scene complexity. This feature includes various parameters such as background, lighting illumination, viewpoint and camera parameters; due to these scene conditions, the content of objects is affected dramatically. Unwanted information is eliminated from the images by the use of a median filter, and background information is eliminated by using the Gaussian average technique.

Global motion analysis [63] is used to analyse hand gesture images. The image frames are represented by the closed boundary of the segmented handshape. The features are extracted by a Fourier descriptor from the first 25 coefficients only. The space complexity of the database is reduced due to rotational, translational and dilation invariance.
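A minimal sketch of such a Fourier descriptor, keeping 25 coefficients and normalizing for the invariances mentioned (the exact normalization used in [63] is not specified, so this follows the standard construction):

```python
import numpy as np

def fourier_descriptor(contour, n_coeffs=25):
    """Fourier descriptor of a closed contour given as complex points x + iy.

    Dropping the DC term gives translation invariance; dividing by the
    magnitude of the first kept coefficient gives dilation (scale)
    invariance; taking magnitudes discards phase and hence rotation.
    """
    coeffs = np.fft.fft(contour)
    coeffs = coeffs[1:n_coeffs + 1]          # skip the DC term
    return np.abs(coeffs) / np.abs(coeffs[0])

# A circle sampled at 64 points: the descriptor is unchanged by
# scaling and translating the contour
t = np.linspace(0, 2 * np.pi, 64, endpoint=False)
circle = np.cos(t) + 1j * np.sin(t)
fd1 = fourier_descriptor(circle)
fd2 = fourier_descriptor(3.0 * circle + (5 + 2j))
```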

RGB colour space segmentation [47] of the video sequence of gestures is performed before feature extraction. The mean and covariance matrix of the colour space are used for image segmentation. If the Mahalanobis distance of a pixel falls within the locus of points of the 3-dimensional ellipsoid, it is regarded as a glove pixel. The standard deviations of all three colour components are used as thresholds to define the locus points.
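The glove-pixel test can be sketched as follows; the colour model and the threshold of 3 standard deviations are hypothetical values for illustration:

```python
import numpy as np

def is_glove_pixel(pixel, mean, cov, threshold=3.0):
    """Classify a pixel as glove colour if its Mahalanobis distance from
    the glove colour model (mean, covariance) lies inside the ellipsoid."""
    diff = pixel - mean
    d2 = diff @ np.linalg.inv(cov) @ diff     # squared Mahalanobis distance
    return bool(np.sqrt(d2) <= threshold)

# A hypothetical glove colour model estimated from training pixels
mean = np.array([200.0, 40.0, 40.0])          # reddish glove
cov = np.diag([100.0, 50.0, 50.0])
inside = is_glove_pixel(np.array([205.0, 45.0, 38.0]), mean, cov)
outside = is_glove_pixel(np.array([30.0, 30.0, 200.0]), mean, cov)
```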


RGB colour images [52] are converted to grayscale images; no visual markings or gloves are used while capturing the sign images. The system is capable of handling uncovered hand images taken by a digital camera.

The colour object tracking method [53] is used as a transformation technique

to convert the video frames into HSV colour space. The tracked colour pixels are

identified and subsequently converted to binary images. All image vectors are

normalized and cropped in the pre-processing stage.

The skin regions present in the sign images [56] are identified by the system, and the images are binarized with the help of a proper threshold value. Small regions are then removed from the images by applying morphological operators.

2.4.4 Feature Extraction Methods Utilized

Feature extraction [54] approaches in image processing extract valuable information present in an image. This involves converting a high-dimensional data space into a lower-dimensional one. The lower-dimensional data extracted from images should contain precise information representative of the actual image, such that the image can be reconstructed from it. The lower-dimensional data is required as input to any classification technique, as it is not feasible to process higher-dimensional data with speed and accuracy. The inputs to an automatic sign language recognition system are either static signs (images) or dynamic signs (video frames). In order to classify input signs in such a system, the extraction of valuable features from the signs is required. The feature extraction methods used by various researchers in the field of sign language recognition are listed in tables 2.7-2.9.
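The high- to low-dimensional conversion can be illustrated with PCA, one common member of this family of methods; the image and feature dimensions here are illustrative:

```python
import numpy as np

def pca_reduce(images, n_components=2):
    """Project flattened sign images onto their top principal components."""
    X = images.reshape(len(images), -1).astype(float)
    X -= X.mean(axis=0)                            # centre the data
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    return X @ vt[:n_components].T                 # lower-dimensional features

# 10 hypothetical 8x8 "sign images" reduced to 2 features each
rng = np.random.default_rng(0)
features = pca_reduce(rng.random((10, 8, 8)))
```

Each 64-dimensional image becomes a 2-dimensional feature vector that a classifier can process quickly.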


Table 2.7: Feature Extraction Techniques using Digital Still Camera

Ref.     | Method                              | Description
[18]     | Contour                             | The distance vector is used to extract control points to calculate the motion parameters.
[24, 65] | Hidden Markov Model                 | Hough transformation [52] with image processing and neural networks.
[66]     | Kinematic features                  | A kinematic time series measurement unit is used as features from ASL static signs.
[20, 21] | Hu moments, Gabor wavelets and SIFT | Colour histogram, Hu moments, Gabor wavelets and some interest points with SIFT features.
[58]     | Elliptical Fourier                  | The elliptical Fourier representation of images.
[48]     | A set of 30 features                | 15 entries express the angles between the fingertips and the other 15 entries the distances between fingertips.
[52]     | Haar wavelet transform              | Dynamic Time Warping and wavelet coefficients.
[59]     | A set of 8 features                 | Area, circumference and the lengths of the two axes of an ellipse.
[17]     | 2D DWT (Haar wavelet)               | Two-dimensional DWT.

Table 2.8: Feature Extraction Techniques using Specially Designed Devices

Ref. | Method           | Description
[57] | Fourier analysis | A Fourier analysis approach is used for periodicity detection; the Vector Quantization Principal Component Analysis clustering method is used for trajectory recognition.


Table 2.9: Feature Extraction Techniques using Video Camera

Ref. | Method                 | Description
[67] | Contour mapping        | Centroids, Finite State Machine and Canny edge.
[45] | Contour mapping (VOP)  | Video Object Plane to extract features from video frames.
[46] | Fourier descriptor     | The hand gesture attributes called Points of Interest of the hands are used.
[63] | PCA hand features      | A combination of Visual Speaker Alignment (VSA) and Virtual Training Samples (VTS).
[50] | GFD                    | GFD extraction by variously scaled Fourier coefficients from the two-dimensional Fourier transform.
[47] | Image Difference (ID)  | Elimination of the temporal dimension of the video-based gestures.
[49] | Orientation histogram  | Translational invariance features.
[61] | NMI                    | The Normalized Moment of Inertia value of images.
[35] | Graph parsing          | A set of Indexed Edge-unambiguous graphs representing hand postures.
[33] | Dynamic Time Warping   | The DTW distance measure.
[28] | A set of 4 features    | Projection information, number of visible fingers with multi-frame features, embedded edge orientation histograms.
[55] | Local Linear Embedding | Locally linear embedding (LLE) with PCA and supervised LLE.
[68] | Kalman filter          | The width, height and orientation parameters of an ellipse and seven Hu moments.


2.4.5 Selection of Classification Techniques

The classification architecture [69-78] is the heart of any solution in the field of pattern recognition. It plays a major role in the design of the overall architecture, alongside the other components of the decision-making process. The most popular classifier is the HMM; however, many other renowned classifiers are available. Some approaches, such as majority voting, which combines multiple classifiers, are also useful in decision making. Many classifiers can be useful in the field of sign language recognition. Pattern recognition [79-83] is the process of classifying an unknown input into a target class. Two important approaches are used in classification: supervised classification techniques [84-87] and unsupervised classification techniques. Pattern recognition has a wide class of applications, including sign language recognition.

Supervised Classification

Supervised classification uses a supervised learning algorithm for the classification of objects. The learning algorithm takes as input a set of training data and the label associated with each training sample. Using these training samples and labels, the classifier trains its internal model, which can then predict the label of any test sample.

Supervised pattern recognition methods are useful in a variety of applications such as OCR, face recognition, face detection, object detection and sign language recognition.
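A minimal supervised classifier in this spirit is the 1-nearest-neighbour rule; the toy feature vectors and labels below are illustrative:

```python
import numpy as np

def nearest_neighbour_classify(train_X, train_y, x):
    """1-NN: label a test vector with the label of its closest training vector."""
    distances = np.linalg.norm(train_X - x, axis=1)
    return train_y[int(np.argmin(distances))]

# Two toy sign classes described by 2-D feature vectors
train_X = np.array([[0.0, 0.0], [0.1, 0.2], [5.0, 5.0], [5.2, 4.9]])
train_y = np.array(["A", "A", "B", "B"])
label = nearest_neighbour_classify(train_X, train_y, np.array([4.8, 5.1]))
```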

Unsupervised Classification

Unsupervised learning algorithms classify an input vector by clustering or segmentation methods. To classify an input feature vector, its distances from the centres of all clusters are calculated using some distance metric. The input vector is then assigned to the cluster whose centre is closest to it. Some of the important unsupervised learning algorithms are:

K-means clustering [95-96]


Gaussian mixture models [97-99]

Hidden Markov models
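The nearest-centre assignment step described above can be sketched as follows, with illustrative cluster centres:

```python
import numpy as np

def assign_cluster(x, centres):
    """Assign an input feature vector to the nearest cluster centre."""
    distances = np.linalg.norm(centres - x, axis=1)
    return int(np.argmin(distances))

# Two hypothetical cluster centres in a 2-D feature space
centres = np.array([[0.0, 0.0], [10.0, 10.0]])
cluster = assign_cluster(np.array([9.0, 9.5]), centres)
```

K-means alternates this assignment step with recomputing each centre as the mean of the vectors assigned to it.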

The classification techniques used by various researchers in sign language recognition are listed in table 2.10.

Table 2.10: Classification Techniques used by Researchers

Techniques                    | Description                                                                  | References
Neural Network                | A variety of neural network classifiers.                                     | [46, 48, 51, 52]
SVM                           | Support Vector Machine classification.                                       | [20]
HMM                           | HMM classifier with its variants.                                            | [22, 23, 44, 56, 58, 59, 67, 68, 92, 93]
Fuzzy sets                    | Fuzzy sets with other classifiers.                                           | [42]
Tensor analysis               | Tensor-based classification.                                                 | [43, 39]
FSM and DTW                   | Finite State Machine and Dynamic Time Warping algorithms.                    | [45]
ROVER                         | Recogniser output voting error reduction.                                    | [38]
Euclidean distance classifier | Based on the Euclidean distance metric.                                      | [50]
CAMSHIFT algorithm            | Continuous Adaptive Mean SHIFT algorithm.                                    | [49]
HSBN                          | Handshapes Bayesian Network.                                                 | [94]
Boost Map                     | A binary classifier and the boosting method AdaBoost for embedding optimization. | [33, 47]
SVR                           | Support Vector Regression technique.                                         | [28]
VQPCA                         | Vector Quantization Principal Component Analysis.                            | [57]


2.4.6 Results and Discussions on Existing Researches

The results of the recognition systems are discussed in this section. This is helpful in comparing the results of the system proposed in this work with the existing works conducted by various researchers. The results obtained from various research papers on standard data sets are summarized in table 2.11; the maximum recognition accuracy obtained is 94.31%. Table 2.12 shows the results obtained on unstructured data sets; the best result, about 100% recognition accuracy, is obtained with an HMM classifier. The results include parameters such as the input sign language, data set size, training set, testing set, classification method and recognition rate.

The tables signify that neural networks and variants of the HMM are widely used by researchers due to their recognition accuracy.

Table 2.11: Results from Standard Data Sets

[67] ASL, 26 signs (130 samples), training/testing split not stated. Lifeprint Fingerspell Library. DTW. Static gestures: 85.77 (without) / 92.82 (with additional features); dynamic gestures: 82.97 / 87.64.
[94] ASL, 419 signs, training/testing split not stated. ASL Lexicon Video Dataset using linguistic annotations from SignStream. HSBN, ranked handshapes at ranks 1/5/10/15/20/25: 32.1/26.0, 61.3/55.1, 75.1/71.4, 81.0/80.2, 85.9/84.5, 89.6/88.7.
[35] PSL, 48 signs. ETPL(k) graph parsing model. Own database (240 samples): 94.31; PETS (144 samples): 85.40.
[38] ASL, 201 signs, 161 training / 40 testing. RWTH-BOSTON-104. ROVER: 12.9 word error rate.


Table 2.12: Results from Unstructured Data Sets

[24] ASL, 20 signs, 200 training / 100 testing. ANN (feed-forward BPN). Without Canny threshold: 77.72; with Canny threshold (0.15): 91.53; with Canny threshold (0.25): 92.33.
[20, 21] CSL, training/testing split not stated. SVM classifier: 95.0256.
[22] 183 signs, 75% training / 25% testing. HMM (HP: hand position, MV: movement). HP (0.0) and no MV: 49.3; HP (1.0) and no MV: 70.2; HP (0.5) and MV (0.5): 70.6; HP (0.2) and MV (0.8): 75.6.
[25] SLN, 262 signs. HMM.
    43 training / 43 testing: Training 1: 98.8 / 91.1; Training 2: 86.6 / 95.8; Training 3: 98.3 / 100.
    150 training / 150 testing: Training 1: 93.7 / 64.4; Training 2: 58.5 / 90.7; Training 3: 93.2 / 92.5.
    262 training / 262 testing: Training 1: 91.1 / 56.2; Training 2: 47.6 / 93.0; Training 3: 89.8 / 94.3.
[42] VSL, 23 signs, training/testing split not stated. Fuzzy rule based system: 100; with two-axis MEMS accelerometer (ambiguous): 90, 79, 93; after applying Vietnamese spelling rules: 94, 90, 96.
[43] 26 letters, 80% training / 20% testing. Tensor subspace analysis (Gray / Binary). View 1: 76.9 / 69.2; View 2: 73.1 / 80.8; View 3: 100 / 92.3; View 4: 92.3 / 92.3; View 5: 92.3 / 88.5; Mean: 86.9 / 84.6.
[44] UKL, 12 signers, 85 signs, 240 frames. HMM. Static signs: P2DIDM: 94; image distortion, cross-shaped surrounding area: 84; image distortion, square surrounding area: 74; pixel-by-pixel: 88. Dynamic gestures: 91.7.
[46] ASL, 26 signs, 26 training / 26 testing. Combinational NN. Without noise immunity: 100; with noise immunity: 48.
[50] TSL, 45 signs, 450 samples. Generic Cosine Descriptor (GCD) with 3D Hopfield NN. 3D Hopfield NN: 96 (train) / 91 (test); GCD: 100 (test).
[48] ArSL, 30 signs, 900 training / 300 testing. ANN. Elman network: 89.66; fully recurrent network: 95.11.
[51] SLTSL, 300 samples, training/testing split not stated. Artificial Neural Networks. Test results: 73.76; consonants: 74.72; vowels: 71.50.
[52] PSL, 8 signs, 160 training / 80 testing. Multi-Layer Perceptron NN. 10 hidden neurons: 98.75; 11 hidden neurons: 97.08; 12 hidden neurons: 97.50.
[59] CSL, 10 signs, 960 training / 240 testing. Hypothesis Comparison Guided Cross Validation (HC-CV): 88.5.
[17] PSL, 20 signs, 416 training / 224 testing. Multi-Layer Perceptron NN: 94.06.
[28] CSL, 30 signs, 2475 training / 1650 testing. Local linear embedding: 92.2.
[57] ASL, 27 signs, 9072 training / 3888 testing. A linear decision tree with FLD: 96.1. Vector Quantization PCA: non-periodic signs: 97.30; periodic signs: 97.00; total: 86.80.

2.5 Literature Review on ISL

A speech to sign language translation system is developed by Suryapriya A. K. et al. [100]. Due to the computational complexity involved in the translation system, their work is restricted to the banking and railways domains. The authors utilize artificial intelligence techniques for the sign language translation system. The input to the system is Malayalam speech and the output is an animated 3D virtual character signing the speech. The research is helpful as a means of communication for deaf people.

Balakrishnan, G. et al. [101] proposed a technique for recognizing a set of 32 Tamil letters. With five fingers on a palm, the up position of a finger represents a binary digit 1 and the down position a binary digit 0, so this grouping of the fingers yields a total of 2^5 = 32 different combinations. The finger positions of a subject sign are given as input to the system. The system then calculates the decimal equivalent of the finger positions and predicts the corresponding Tamil letter. A static data set in the form of images of size 640×480 pixels is captured. A palm image extraction method is used to convert RGB to grayscale images. The experimental recognition rates are 96.87% for the static method and 98.75% for the dynamic method.

The following four modules are discussed by Ghotkar, A. S. et al. [102] in the field of ISL recognition.

Hand tracking, using the Continuously Adaptive Mean Shift (CAMSHIFT) tracking technique.
Hand segmentation, using the HSV colour model and a neural network.
Feature extraction, with the Generic Fourier Descriptor.
Gesture recognition, with a Genetic Algorithm.

Deora, D. and N. Bajaj [103] created a software module for an ISL recognition system. The system is capable of interpreting 25 double-handed English alphabet signs and nine ISL digit signs. The subject signers are required to wear blue and red gloves in the data acquisition process. Segmentation and a fingertip algorithm are used for feature extraction, and the PCA algorithm is employed for the classification of ISL signs. The aggregate recognition accuracy reported is 94%.

Rekha, J. et al. [104] proposed an approach to recognize ISL double-handed static and dynamic alphabet signs. 23 static ISL alphabet signs from 40 signers were collected as training samples and 22 videos were used as testing samples. The shape features were extracted with the Principal Curvature Based Region Detector, the texture features of the hand were extracted by Wavelet Packet Decomposition, and features from the fingers were extracted by complexity defects algorithms. Multi-class non-linear SVM, kNN and Dynamic Time Warping (DTW) were used for sign classification. The recognition rate achieved was 91.3% for static signs and 86.3% for dynamic signs.

A system is proposed by Singha, J. and Karen Das [105] for 24 ISL alphabets. 10 samples of each character are collected, so a total of 240 sign images are available for the experiments. They divide the recognition process into four modules, as follows.

Skin filtering: conversion of RGB images to the HSV colour space.
Hand cropping: wrist detection and elimination of unwanted information.
Feature extraction: eigenvalues extracted from the cropped images.
Classification: eigenvalues with a weighted Euclidean distance classifier.

With the proposed technique, the recognition rate achieved is 97.00%. When sign images are tested against similar data set images, the success rate improves from 87.00% to 97.00% with the use of the new Euclidean distance classifier.
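The weighted Euclidean distance classification in the fourth module can be sketched as follows; the per-class templates and the weights are hypothetical values for illustration:

```python
import numpy as np

def weighted_euclidean_classify(templates, weights, x):
    """Pick the class whose eigenvalue template minimizes the weighted
    Euclidean distance to the test feature vector."""
    d = np.sqrt(((templates - x) ** 2 * weights).sum(axis=1))
    return int(np.argmin(d))

# Hypothetical per-class eigenvalue templates and feature weights
templates = np.array([[1.0, 0.5], [4.0, 4.0]])
weights = np.array([1.0, 2.0])
cls = weighted_euclidean_classify(templates, weights, np.array([1.1, 0.4]))
```

The weights let more discriminative eigenvalue components contribute more strongly to the distance than the plain Euclidean metric would.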

Bhuyan, M. K. and D. Ghosh [106] consider a sign language recognition system able to recognize a number of classes of sign languages in a computer-vision-based setup. Given the high accuracy rates obtained in the experiments, the system developed by the authors can be useful as an ISL recognition system.

An approach for Humanoid Robot Interaction is proposed by A. Nandi et al. [107] for ISL gesture recognition. The authors claim that the proposed architecture can be useful for interaction with humanoid robots. The Euclidean distance metric classifier is used in the experiments. The robotics simulation software WEBOTS was used for simulation.

2.6 Summary

The chapter provides a comprehensive analysis; the following conclusions are drawn for the proposed ISL recognition:

Existing systems are primarily focused on static signs, manual signs, alphabets or digits, but not a combination of these.


Standard data sets are unusable.

There is a huge demand on large vocabulary database.

Focus should be on dynamic signs and nonverbal kind of communication.

ISL recognition systems should adopt data acquiring process in real time

(not restricted to laboratory data).

Systems should be able to differentiate sign from rest of the body in

parallel.

Systems should execute the recognition task in a user convenient and

faster manner.

No stabilized ISL recognition systems are available. Considering the need, requirements and advantages, this research attempts to develop such a recognition system, one that can transform a given sign into a readable format.

References

[1] http://ezinearticles.com/?A-History-of-British-Sign-

Language&id=920959 (Accessed on 01 Jul 2014)

[2] http://www.nad.org/issues/american-sign-language/position-statement-

american-sign-language-2008 (Accessed on 02 Jul 2014).

[3] Johnston, T. and A. Schembri. Australian Sign Language: An Introduction

to Sign Language Linguistics. Cambridge University Press, 2007.

[4] Johnston, T. Signs of Australia: A New Dictionary of Auslan. North

Rocks, North Rocks Press, 1998.

[5] http://www.deaflibrary.org/jsl.html. (Accessed on 08 Jul 2014)

[6] Chen, Y. Q., W. Gao, G. Fang, C. Yang and Z. Wang. CSLDS: Chinese

Sign Language Dialog System. IEEE International Workshop on Analysis

and Modeling of Faces and Gestures, 2003.

[7] http://library.thinkquest.org/11942/csl.htm (Accessed on 10 Feb 2014)

[8] http://www.cnad.org.tw/page/english.htm (Accessed on 15 Mar 2014)


[9] Yang, J. H. and S. D. Fisher. Expressing Negation in Chinese Sign Language. Journal on Sign Language and Linguistics, vol. 5, no. 2, 2002, pp. 167-202.

[10] http://www.lifeprint.com/asl101/topics/chinesesignlanguage.htm (Accessed on 10 Jul 2014).

[11] Abdel-Fattah, M. A. Arabic Sign Language: A Perspective. Journal of Deaf Studies and Deaf Education, vol. 10, no. 2, 2005, pp. 212-221.

[12] Zeshan, U., M. M. Vasishta and M. Sethna. Implementation of Indian Sign Language in Educational Settings. Asia Pacific Disability Rehabilitation Journal, vol. 10, no. 1, 2005, pp. 16-40.

[13] Vasishta, M. M., J. Woodward and K. Wilson. Sign Language in India: Regional Variation within the Deaf Population. Indian Journal of Applied Linguistics, vol. 4, no. 2, 1978, pp. 66-74.

[14] Suryapriya, A. K., S. Sumam and M. Idicula. Design and Development of a Frame Based MT System for English-to-ISL. World Congress on Nature & Biologically Inspired Computing, 2009, pp. 1382-1387.

[15] Zeshan, U. Sign Language of Indo-Pakistan: A Description of a Signed Language. Philadelphia, Amsterdam: John Benjamins Publishing Co., 2000.

[16] Banerji, J. N. India: International Reports of Schools for the Deaf. Washington City: Volta Bureau, 1978, pp. 18-19.

[17] Karami, A., B. Zanj and A. K. Sarkaleh. Persian Sign Language (PSL) Recognition Using Wavelet Transform and Neural Networks. Journal on Expert Systems with Applications, vol. 38, no. 3, 2011, pp. 2661-2667.

[18] Rezaei, A., M. Vafadoost, S. Rezaei and Y. Shekofteh. A Feature Based Method for Tracking 3-D Trajectory and the Orientation of the Hand. International Conference on Computer and Communication Engineering, Kuala Lumpur, Malaysia, 2008, pp. 347-351.


[19] Starner, T., J. Weaver and A. Pentland. Real-Time American Sign Language Recognition Using Desk and Wearable Computer Based Video. Journal on IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 20, no. 12, 1998, pp. 1371-1375.

[20] Quan, Y. and P. Jinye. Chinese Sign Language Recognition for a Vision-Based Multi-features Classifier. International Symposium on Computer Science and Computational Technology, Shanghai, 2008, pp. 194-197.

[21] Quan, Y., P. Jinye and L. Yulong. Chinese Sign Language Recognition Based on Gray-Level Co-Occurrence Matrix and Other Multi-Features Fusion. 4th IEEE Conference on Industrial Electronics and Applications, 2009, pp. 1569-1572.

[22] Maebatake, M., I. Suzuki, M. Nishida, Y. Horiuchi and S. Kuroiwa. Sign Language Recognition Based on Position and Movement Using Multi-Stream HMM. Second International Symposium on Universal Communication, 2008, pp. 478-481.

[23] Grobel, K. and M. Assan. Isolated Sign Language Recognition Using Hidden Markov Models. IEEE International Conference on Systems, Man, and Cybernetics, Computational Cybernetics and Simulation, 1997, pp. 162-167.

[24] Munib, Q., M. Habeeb, B. Takruri and H. A. Al-Malik. American Sign Language (ASL) Recognition Based on Hough Transform and Neural Networks. Journal on Expert Systems with Applications, vol. 32, no. 1, 2007, pp. 24-37.

[25] Brentari, D., M. Coppola, A. Jung and S. Goldin-Meadow. Acquiring Word Class Distinctions in American Sign Language: Evidence from Handshape. Journal on Language Learning and Development, vol. 9, no. 2, 2013, pp. 130-150.

[26] Liu, J., B. Liu, S. Zhang, F. Yang, P. Yang, D. N. Metaxas and C. Neidle. Non-Manual Grammatical Marker Recognition Based on Multi-scale,


Spatio-Temporal Analysis of Head Pose and Facial Expressions. Journal on Image and Vision Computing, vol. 32, no. 10, 2014, pp. 671-681.

[27] http://www.bu.edu/asllrp/ncslgr.html (Accessed on 12 May 2014).

[28] Tsechpenakis, G., D. Metaxas and C. Neidle. Learning-Based Dynamic Coupling of Discrete and Continuous Trackers. Journal on Computer Vision and Image Understanding, vol. 104, no. 2-3, 2006, pp. 140-156.

[29] Neidle, C. J. The Syntax of American Sign Language: Functional Categories and Hierarchical Structure. MIT Press, 2000.

[30] Benoit, A. C. Head Nods Analysis: Interpretation of Non Verbal Communication Gestures. International Conference on Image Processing, Genova, Italy, 2005, pp. 425-428.

[31] Aran, O., I. Ari, A. Benoit, P. Campr, A. H. Carrillo, F. X. Fanard, L. Akarun, A. Caplier, M. Rombaut and B. Sankur. Sign Language Tutoring Tool. The Summer Workshop on Multimodal Interfaces, Dubrovnik, Croatia, 2006, pp. 23-33.

[32] Aran, O. Vision Based Sign Language Recognition: Modeling and Recognizing Isolated Signs with Manual and Non-Manual Components. PhD Thesis, Bogaziçi University, 2008.

[33] Athitsos, V., H. Wang and A. Stefan. A Database-Based Framework for Gesture Recognition. Journal on Personal and Ubiquitous Computing, vol. 14, no. 6, 2010, pp. 511-526.

[34] Athitsos, V., N. Neidle, S. Sclaroff, J. Nash, A. Stefan, Y. Quan and A. Thangali. The American Sign Language Lexicon Video Dataset. IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2008, pp. 1-8.

[35] On the Use of Graph Parsing for Recognition of Isolated Hand Postures of Polish Sign Language. Journal on Pattern Recognition, vol. 43, no. 6, 2010, pp. 2249-2264.


[36] Triesch, J. and C. von der Malsburg. Classification of Hand Postures against Complex Backgrounds Using Elastic Graph Matching. Journal on Image and Vision Computing, vol. 20, no. 13-14, 2002, pp. 937-943.

[37] Triesch, J. and C. von der Malsburg. A System for Person-Independent Hand Posture Recognition against Complex Backgrounds. Journal on IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 23, no. 12, 2001, pp. 1449-1453.

[38] Philippe, D., R. David, D. Thomas, M. Zahedi and H. Ney. Speech Recognition Techniques for a Sign Language Recognition System. INTERSPEECH-2007, 2007, pp. 2513-2516.

[39] Rana, S., W. Liu, M. Lazarescu and S. Venkatesh. A Unified Tensor Framework for Face Recognition. Journal on Pattern Recognition, vol. 42, no. 11, 2009, pp. 2850-2862.

[40] Dreuw, P., J. Forster and H. Ney. Tracking Benchmark Databases for Video-Based Sign Language Recognition. Trends and Topics in Computer Vision (LNCS), Springer Berlin Heidelberg, 2012.

[41] ftp://wasserstoff.informatik.rwth-aachen.de/pub/rwth-boston-104/readme.info (Accessed on 10 March 2014).

[42] Bui, T. D. and L. T. Nguyen. Recognizing Postures in Vietnamese Sign Language with MEMS Accelerometers. IEEE Sensors Journal, vol. 7, no. 5, 2007, pp. 707-712.

[43] Wang, S., D. Zhang, C. Jia, N. Zhang, C. Zhou and L. Zhang. A Sign Language Recognition Based on Tensor. Second International Conference on Multimedia and Information Technology, 2010, pp. 192-195.

[44] Davydov, M. V., I. V. Nikolski and V. V. Pasichnyk. Real-Time Ukrainian Sign Language Recognition System. IEEE International Conference on Intelligent Computing and Intelligent Systems, 2010, pp. 875-879.

[45] Rokade, U. S., D. Doye and M. Kokare. Hand Gesture Recognition Using


Object Based Key Frame Selection. International Conference on Digital Image Processing, 2009, pp. 288-291.

[46] Mekala, P., Y. Gao, J. Fan and A. Davari. Real-Time Sign Language Recognition Based on Neural Network Architecture. IEEE 43rd Southeastern Symposium on System Theory, 2011, pp. 195-199.

[47] Shanableh, T. and K. Assaleh. Arabic Sign Language Recognition in User-Independent Mode. International Conference on Intelligent and Advanced Systems, 2007, pp. 597-600.

[48] Maraqa, M. and R. Abu-Zaiter. Recognition of Arabic Sign Language (ArSL) Using Recurrent Neural Networks. First International Conference on the Applications of Digital Information and Web Technologies, 2008, pp. 478-481.

[49] Nadgeri, S. M., S. D. Sawarkar and A. D. Gawande. Hand Gesture Recognition Using CAMSHIFT Algorithm. 3rd International Conference on Emerging Trends in Engineering and Technology, 2010, pp. 37-41.

[50] Pahlevanzadeh, M., M. Vafadoost and M. Shahnazi. Sign Language Recognition. 9th International Symposium on Signal Processing and its Applications, 2007, pp. 1-4.

[51] Vanjikumaran, S. and G. Balachandran. An Automated Vision Based Recognition System for Sri Lankan Tamil Sign Language Finger Spelling. International Conference on Advances in ICT for Emerging Regions, 2011, pp. 39-44.

[52] Kiani, S. A., F. Poorahangaryan, B. Zanj and A. Karami. A Neural Network Based System for Persian Sign Language Recognition. IEEE International Conference on Signal and Image Processing Applications, 2009, pp. 145-149.

[53] Kumarage, D., S. Fernando, P. Fernando, D. Madushanka and R. Samarasinghe. Real-Time Sign Language Gesture Recognition Using Still-Image Comparison and Motion Recognition. 6th IEEE International


Conference on Industrial and Information Systems, 2011, pp. 169-174.

[54] Antunes, D. R., C. Guimaraes, L. S. Garcia, L. Oliveira and S. Fernande. A Framework to Support Development of Sign Language Human-Computer Interaction: Building Tools for Effective Information Access and Inclusion of the Deaf. 5th International Conference on Research Challenges in Information Science, 2011, pp. 1-12.

[55] Xiaolong, T., W. Bian, Y. Weiwei and C. Liu. A Hand Gesture Recognition System Based on Local Linear Embedding. Journal of Visual Languages and Computing, vol. 16, no. 5, 2005, pp. 442-454.

[56] Nguyen, D. B., S. Enokida and E. Toshiaki. Real-Time Hand Tracking and Gesture Recognition System. IGVIP Conference, 2005, pp. 362-368.

[57] Kong, W. W. and S. Ranganath. Signing Exact English (SEE): Modeling and Recognition. Journal on Pattern Recognition, vol. 41, no. 5, 2008, pp. 1638-1652.

[58] Rezaei, A., M. Vafadoost, S. Rezaei and A. Daliri. 3D Pose Estimation via Elliptical Fourier Descriptors for Deformable Hand Representations. The 2nd International Conference on Bioinformatics and Biomedical Engineering, 2008, pp. 1871-1875.

[59] Zhou, Y., X. Yang, W. Lin, Y. Xu and L. Xu. Hypothesis Comparison Guided Cross Validation for Unsupervised Signer Adaptation. IEEE International Conference on Multimedia and Expo, 2011, pp. 1-4.

[60] Bourennane, S. and C. Fossati. Comparison of Shape Descriptors for Hand Posture Recognition in Video. Journal on Signal, Image and Video Processing, vol. 6, no. 1, 2012, pp. 147-157.

[61] Yang, Q. and P. Jinye. Application of Improved Sign Language Recognition and Synthesis Technology in IB. 3rd IEEE Conference on Industrial Electronics and Applications, 2008, pp. 1629-1634.

[62]


2002.

[63] Philippe, D. and N. Hermann. Visual Modeling and Feature Adaptation in Sign Language Recognition. ITG Conference on Voice Communication (SprachKommunikation), 2008, pp. 1-4.

[64] Mahmoudi, F. and M. Parviz. Visual Hand Tracking Algorithms. International Conference on Geometric Modeling and Imaging - New Trends, 2006, pp. 228-232.

[65] Zhou, Y. and X. Chen. Adaptive Sign Language Recognition with Exemplar Extraction and MAP/IVFS. Journal on IEEE Signal Processing Letters, vol. 17, no. 3, 2010, pp. 297-300.

[66] Derpanis, K. G., R. P. Wildes and J. K. Tsotsos. Definition and Recovery of Kinematic Features for Recognition of American Sign Language Movements. Journal on Image and Vision Computing, vol. 26, no. 12, 2008, pp. 1650-1662.

[67] Kshirsagar, K. P. and D. Doye. Object Based Key Frame Selection for Hand Gesture Recognition. International Conference on Advances in Recent Technologies in Communication and Computing, 2010, pp. 181-185.

[68] Aran, O. and L. Akarun. A Multi-Class Classification Strategy for Fisher Scores: Application to Signer Independent Sign Language Recognition. Journal on Pattern Recognition, vol. 43, no. 5, 2010, pp. 1776-1788.

[69] Kotsiantis, S. B., I. D. Zaharakis and P. E. Pintelas. Supervised Machine Learning: A Review of Classification Techniques. IOS Press, Amsterdam, 2007.

[70] Caridakis, G., S. Asteriadis and K. Karpouzis. Non-Manual Cues in Automatic Sign Language Recognition. Journal on Personal and Ubiquitous Computing, vol. 18, no. 1, 2014, pp. 37-46.

[71] Lichtenauer, J. F., E. A. Hendriks and M. J. T. Reinders. Sign Language Recognition by Combining Statistical DTW and Independent


Classification. Journal on IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 30, no. 11, 2008, pp. 2040-2046.

[72] Vamplew, P. and A. Adams. Recognition of Sign Language Gestures Using Neural Networks. Australian Journal of Intelligent Information Processing Systems, vol. 5, no. 2, 1998, pp. 94-102.

[73] Dreuw, P., D. Rybach, T. Deselaers, M. Zahedi and H. Ney. Speech Recognition Techniques for a Sign Language Recognition System. HAND Journal (Springer), vol. 60, 2007, pp. 1-80.

[74] Chen, C., L. Pau and P. S. Wang. Handbook of Pattern Recognition and Computer Vision. Imperial College Press, 2010.

[75] Zieren, J. and K. F. Kraiss. Robust Person-Independent Visual Sign Language Recognition. Journal on Pattern Recognition and Image Analysis, 2005, pp. 520-528.

[76] Manning, C. D., P. Raghavan and H. Schütze. Introduction to Information Retrieval. Cambridge University Press, 2008.

[77] Jatana, N. and K. Sharma. Bayesian Spam Classification: Time Efficient Radix Encoded Fragmented Database Approach. International Conference on Computing for Sustainable Global Development, 2014, pp. 939-942.

[78] Sayeed, F., M. Hanmandlu and A. Q. Ansari. Face Recognition Using Segmental Euclidean Distance. Defence Science Journal, vol. 61, no. 5, 2011, pp. 431-442.

[79] Meyer-Baese, A. and V. J. Schmid. Pattern Recognition and Signal Analysis in Medical Imaging. Elsevier, 2014.

[80] Qi, Z., Y. Tian and Y. Shi. Robust Twin Support Vector Machine for Pattern Classification. Journal on Pattern Recognition, vol. 46, no. 1, 2013, pp. 305-316.

[81] Tanvir Parvez, M. and S. A. Mahmoud. Arabic Handwriting Recognition


Using Structural and Syntactic Pattern Attributes. Journal on Pattern Recognition, vol. 46, no. 1, 2013, pp. 141-154.

[82] Liu, C., F. Yin, D. H. Wang and Q. F. Wang. Online and Offline Handwritten Chinese Character Recognition: Benchmarking on New Databases. Journal on Pattern Recognition, vol. 46, no. 1, 2013, pp. 155-162.

[83] Saba, T. and A. Rehman. Effects of Artificially Intelligent Tools on Pattern Recognition. International Journal of Machine Learning and Cybernetics, vol. 4, no. 2, 2013, pp. 155-162.

[84] Richards, J. A. Supervised Classification Techniques. Journal on Remote Sensing Digital Image Analysis, 2013, pp. 247-318.

[85] Garcia, S., J. Luengo, J. A. Sáez, V. López and F. Herrera. A Survey of Discretization Techniques: Taxonomy and Empirical Analysis in Supervised Learning. IEEE Transactions on Knowledge and Data Engineering, vol. 25, no. 4, 2013, pp. 734-750.

[86] Dópido, I., J. Li, P. R. Marpu, A. Plaza, J. M. Bioucas-Dias and J. A. Benediktsson. Semi-Supervised Self-Learning for Hyperspectral Image Classification. Journal on IEEE Transactions Geoscience Remote Sensing Letters, vol. 51, no. 7, 2013, pp. 4032-4044.

[87] Carrizosa, E. and D. R. Morales. Supervised Classification and Mathematical Optimization. Journal on Computers & Operations Research, vol. 40, no. 1, 2013, pp. 150-165.

[88] Villa, A., J. Chanussot, J. A. Benediktsson, C. Jutten and R. Dambreville. Unsupervised Methods for the Classification of Hyperspectral Images with Low Spatial Resolution. Journal on Pattern Recognition, vol. 46, no. 6, 2013, pp. 1556-1568.

[89] Dabboor, M., J. Yackel, M. Hossain and A. Braun. Comparing Matrix Distance Measures for Unsupervised POLSAR Data Classification of Sea Ice Based on Agglomerative Clustering. International Journal of Remote


Sensing, vol. 34, no. 4, 2013, pp. 1492-1505.

[90] Feldman, R. Techniques and Applications for Sentiment Analysis. Communications of the ACM, vol. 56, no. 4, 2013, pp. 82-89.

[91] Asmala, A. and Q. Shaun. Comparative Analysis of Supervised and Unsupervised Classification on Multispectral Data. Applied Mathematical Sciences, vol. 7, no. 74, 2013, pp. 3681-3694.

[92] Rousan, M. A., K. Assaleh and A. Tala. Video-Based Signer-Independent Arabic Sign Language Recognition Using Hidden Markov Models. Journal on Applied Soft Computing, vol. 9, no. 3, 2009, pp. 990-999.

[93] Holden, E., G. Lee and R. Owens. Australian Sign Language Recognition. Journal on Machine Vision and Applications, vol. 16, no. 5, 2005, pp. 312-320.

[94] Thangali, A., J. P. Nash, S. Sclaroff and C. Neidle. Exploiting Phonological Constraints for Handshape Inference in ASL Video. IEEE Conference on Computer Vision and Pattern Recognition, 2011, pp. 521-528.

[95] Polczynski, M. and M. Polczynski. Using the k-Means Clustering Algorithm to Classify Features for Choropleth Maps. Cartographica: The International Journal for Geographic Information and Geovisualization, vol. 49, no. 1, 2014, pp. 69-75.

[96] Zhang, Y. and E. Cheng. An Optimized Method for Selection of the Initial Centers of k-Means Clustering. Integrated Uncertainty in Knowledge Modelling and Decision Making, Springer Berlin Heidelberg, 2013.

[97] Quast, K. and A. Kaup. Shape Adaptive Mean Shift Object Tracking Using Gaussian Mixture Models. Analysis, Retrieval and Delivery of Multimedia Content, Springer New York, 2013.

[98] Pathak, M. A. and B. Raj. Privacy-Preserving Speaker Verification and Identification Using Gaussian Mixture Models. Journal on IEEE Transactions on Audio, Speech and Language Processing, vol. 21, no. 2,


2013, pp. 397-406.

[99] Lee, C., S. Hsu, J. Shih and C. Chou. Continuous Birdsong Recognition Using Gaussian Mixture Modeling of Image Shape Features. Journal on IEEE Transactions on Multimedia, vol. 15, no. 2, 2013, pp. 454-464.

[100] Suryapriya, A. K., S. Sumam and M. Idicula. Design and Development of a Frame Based MT System for English-to-ISL. World Congress on Nature & Biologically Inspired Computing, 2009, pp. 1382-1387.

[101] Balakrishna, G. and P. S. Rajam. Recognition of Tamil Sign Language Alphabet Using Image Processing to Aid Deaf-Dumb People. International Conference on Communication Technology and System Design, 2011, pp. 861-868.

[102] Ghotkar, A. S., S. Khatal, S. Khupase, S. Asati and M. Hadap. Hand Gesture Recognition for Indian Sign Language. International Conference on Computer Communication and Informatics, 2012, pp. 1-4.

[103] Deora, D. and N. Bajaj. Indian Sign Language Recognition. 1st International Conference on Emerging Technology Trends in Electronics Communication and Networking, 2012, pp. 1-5.

[104] Rekha, J., J. Bhattacharya and S. Majumder. Shape, Texture and Local Movement Hand Gesture Features for Indian Sign Language Recognition. 3rd International Conference on Trendz in Information Sciences and Computing, 2011, pp. 30-35.

[105] Singha, J. and K. Das. Indian Sign Language Recognition Using Eigen Value Weighted Euclidean Distance Based Classification Technique. International Journal of Advanced Computer Science & Applications, vol. 4, no. 2, 2013, pp. 188-195.

[106] Bhuyan, M. K., D. Ghosh and P. K. Bora. A Framework for Hand Gesture Recognition with Applications to Sign Language. IEEE Annual India Conference, 2006, pp. 1-6.

[107] Nandy, A., S. Mondal, J. S. Prasad, P. Chakraborty and G. C. Nandi.


Recognizing & Interpreting Indian Sign Language Gesture for Human Robot Interaction. International Conference on Computer & Communication Technology, 2010.
