
Page 1: [IEEE 2006 10th International Workshop on Cellular Neural Networks and Their Applications - Istanbul, Turkey (2006.08.28-2006.08.30)] 2006 10th International Workshop on Cellular Neural

2006 10th International Workshop on Cellular Neural Networks and Their Applications, Istanbul, Turkey, 28-30 August 2006

Vein Feature Extraction Using DT-CNNs

Suleyman Malki, Yu Fuqiang, Lambert Spaanenburg
Department of Information Technology, Lund University, Sweden


Abstract: Biometric identification is an important security application that requires non-intrusive capture and real-time processing. Security systems based on fingerprints and retina patterns have been widely developed, but can be easily falsified. Recently, identification by vein patterns has been suggested as a promising alternative. In this paper an existing feature extraction algorithm, originally developed for fingerprint recognition, is adapted to vein recognition. The algorithm has been implemented as a Cellular Neural Network and realized on a Field-Programmable Gate-Array. The detection quality is comparable to the 99.45% reached earlier by direct image comparison, but suffers from the image resolution sensitivity of the False Feature Elimination.

Index Terms: Biometrics Identification Systems, Discrete-Time Cellular Neural Networks, Field-Programmable Gate-Arrays, Vein Feature Extraction.

I. INTRODUCTION

MODERN security systems have to provide fast, accurate and robust personal identification, which implies moving away from traditional and unreliable methods such as PIN codes and smart cards. The use of electronically stored records of human biometric features seems promising. The US Department of Defense started an experiment already in 2001 to replace existing ID-badges for 4.3 million employees with fingerprint readers from Precise Biometrics [1]. Recently, some European states have accepted a biometric signature as legally binding, and in November 2005 the UK government placed biometric identification technology on the short list of its Science and Innovation Strategy [2].

As the identification process is based on the unique patterns of the users, biometric technologies are expected to provide highly secure authentication systems. However, the existing systems are very vulnerable. One's fingerprints are accessible as soon as the person touches a surface, while a high-resolution camera easily captures the retina pattern. Thus, both patterns can easily be "stolen" and forged [2]. Besides, technical considerations decrease the usability of these methods. Due to the direct contact with the finger, the sensor gets dirty, which decreases the authentication success ratio. Aligning the eye with a camera to capture the retina pattern is uncomfortable. Vein patterns of either the palm of the hand or a single finger, on the other hand, offer stable, unique and repeatable biometric features.

Already in 2001, an experiment was reported where hand vein images were recognized with 99.45% success [3]. Images were cleaned and compared within 150 msec. The main bottleneck was the cost and performance of the sensor. Meanwhile Fujitsu has built a biometric palm vein scanner, while Hitachi presents a finger vein identification system [4]. In both cases, a thermal imager acquires the vein images. Near-infrared rays generated by means of LEDs penetrate the hand and are absorbed by the hemoglobin in the blood. Thus, the veins (where the blood flows) appear as dark areas in an image taken by a CCD camera (Fig. 1.b). Image processing then reconstructs a hand-vein pattern from the camera image. Finally, appropriate processing extracts the vein patterns from the images and performs a feature matching against reference images.

Fig. 1 Typical biometric patterns: (a) fingerprint, (b) hand vein [5] and (c) retinal angiograph [6].

For realistic image databases, the many images require more pixels to be discriminated, leading to a more than quadratically increasing search time. One may conclude that image comparison is accurate [3], given a repeatable capture mechanism [4], but the number of images to be compared is simply too large. One way to ease the problem is by providing a content-based selection mechanism. The automatic provision of such 'features' allows determining the small number of images to search through. For this purpose the reduction of the False Acceptance Rate (FAR) will be dominant.

In [6], it has been concluded that a Gaussian model for feature extraction is fairly successful; here we check the quality of a CNN-based feature extraction that has previously been demonstrated by Gao for fingerprints [7]. Cellular Neural Networks are known to be powerful image processing systems that have recently found many digital realizations [8]. Thus, CNNs are very promising for biometric identification systems, especially as they characteristically provide fast execution of complex nonlinear signal processing at low power consumption.

This paper will go in phases through the feature extraction algorithm of Gao, originally stated for fingerprints, while making modifications for handling veins. The preprocessing

1-4244-0640-4/06/$20.00 ©2006 IEEE


[9] is handled in section II, and the extraction [7] and matching [10] in section III. Subsequently we discuss the quality of the vein feature extraction and give some details on an experimental realization. Finally, section VI provides some conclusions.

II. IMAGE PREPROCESSING

Normally, the captured vein pattern is gray-scale and subject to noise. Noise Reduction and Contrast Enhancement are crucial to ensure the quality of the subsequent steps of feature extraction [11]. This is achieved by means of three operations: Binarization, which transforms the gray-scale pattern into a black-and-white image; Skeletonization, which reduces the width of lines to one pixel; and finally Isolated Pixel Removal, which eliminates unwanted isolated points. These three steps constitute the procedure of image preprocessing (Fig. 2).

The unwanted isolated pixels are removed by applying the template of Isolated Pixel Removal given in (1). The initial output y(0) equals 0 and the input is the line-thinned image.

Isolated Pixel Removal:   A = 0,   B = [ 1 1 1 ; 1 8 1 ; 1 1 1 ],   i = -1    (1)
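The action of such a feed-forward (A = 0) template can be sketched in a few lines of Python. This is an illustrative sketch, not the paper's FPGA implementation; the white (-1) border padding is an assumption, and the full 3x3 B matrix of (1) is reconstructed here from its legible middle row and bias.

```python
# A minimal behavioural sketch of one DT-CNN step with a feed-forward
# template (A = 0): output = sign(sum(B .* u) + i).  Black = +1,
# white = -1; padding the border with white pixels is an assumption.

def apply_template(u, B, bias):
    """Apply one zero-feedback DT-CNN template to a bipolar image."""
    rows, cols = len(u), len(u[0])

    def pix(r, c):
        # Outside the image everything is treated as white (-1).
        return u[r][c] if 0 <= r < rows and 0 <= c < cols else -1

    out = []
    for r in range(rows):
        row = []
        for c in range(cols):
            s = bias
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    s += B[dr + 1][dc + 1] * pix(r + dr, c + dc)
            row.append(1 if s > 0 else -1)
        out.append(row)
    return out

# Isolated Pixel Removal, template (1): an isolated black pixel gets
# 8 - 8 - 1 = -1 and turns white, while any black pixel with at least
# one black neighbour stays black.
B_ipr = [[1, 1, 1], [1, 8, 1], [1, 1, 1]]
img = [[-1, -1, -1, -1, -1],
       [-1,  1, -1, -1, -1],   # isolated black pixel: removed
       [-1, -1, -1,  1, -1],   # two-pixel segment: kept
       [-1, -1, -1,  1, -1],
       [-1, -1, -1, -1, -1]]
clean = apply_template(img, B_ipr, -1)
```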

III. FEATURE EXTRACTION

Blood vessels are characterized by means of length, thickness, shape and distribution of the veins. Only the length and distribution are taken into consideration, as this enables a feasible matching of the overall pattern. As the operation of skeletonization masks out shape and thickness, the thinned vein pattern has, similar to fingerprints, two main features: endings and bifurcations (Fig. 3). The former is the end point of a thinned line, which reflects the length of the veins, while the latter is the intersection of three lines, which reveals the distribution of the veins.

Fig. 2 Image Preprocessing

The algorithm of skeletonization performs iteratively in 8 subsequent steps, where each step peels one layer of pixels in a certain direction. As one iteration is accomplished, the pattern is one pixel thinner in all directions. The algorithm stops when no difference between the input and the output is obtained. Table 1 gives the templates used for the 8 steps. Upon start, the original image is fed as input u and the initial output y(0) equals 0, while the intermediate results constitute the input of the successive steps.

Table 1. DIFFERENT SKELETONIZATION TEMPLATES CORRESPONDING TO THE DIRECTION OF "PEELING". THE ENTRIES OF ALL FEEDBACK TEMPLATES A ARE SET TO ZERO, WHILE THE BIAS i EQUALS -3 FOR ALL STEPS.

[Table entries illegible in this copy: eight 3x3 B templates, one per peeling direction (N, NE, E, SE, S, SW, W, NW), with centre weight 7 and surround weights drawn from {1, 0, -1, -0.5}.]

Fig. 3 Vein features: endings and bifurcations.

It is important to point out the existence of false features due to noise in the original image and artifacts that may be introduced during the procedure of image preprocessing. As two false features are normally close to each other, they are handled in pairs. Actually, three different types exist: a pair with two false endings, a pair with two false bifurcations, and a pair with one false ending and one false bifurcation [7]. Fig. 4 depicts one of the cases that may arise during bifurcation detection.

Fig. 4 Bifurcation detection may give rise to false features.

The algorithm consists mainly of 4 different operations (Fig. 5). First of all, both bifurcations and endings in the preprocessed image are detected; this can be carried out in parallel. The intermediate results are added together by means of a simple Logical OR operation [12]. In order to remove all pairs of false features, the operation of False Feature Elimination is applied. Furthermore, two new bifurcation and ending images are created by subtracting the false features from the images originating from the previous steps of bifurcation and ending detection. This is simply achieved by applying the operation of Logical AND [12]. These new

images are the target of the final operation, Figure Reconstruction, where two instances of the operation are applied in parallel. The final result consists of two images containing the placement and direction of endings and bifurcations.

Fig. 7 Bifurcation detection uses three different templates in addition to a Logic OR operation.

Junction Point Extraction:   A = 0,   B = [ 1 1 1 ; 1 6 1 ; 1 1 1 ],   i = -3    (3)

T- and Corner-forms:   A = 0,   B = [ 0 1 0 ; 1 4 1 ; 0 1 0 ],   i = -3    (4)

Fig. 5 Block diagram of the vein feature extraction

The end of a thinned line has only one black pixel within its neighborhood. As all isolated pixels are already removed during the preprocessing, ending points are easily extracted by applying the template of Ending Detection (2) once. The input image u is the preprocessed picture, while the initial output values y(0) are set to zero.

Ending Detection:   A = 0,   B = [ -1 -1 -1 ; -1 2 -1 ; -1 -1 -1 ],   i = -7    (2)

Similarly, Bifurcation Detection extracts all points that have at least 3 black pixels within the neighborhood. Three different types of junctions exist: "real" points, T-forms and Corner-forms (Fig. 6). Extracting real bifurcations from the T- and Corner-forms needs further treatment. We follow the approach introduced in [7] and use the template of Junction Point Extraction (3), which extracts the real junction points but keeps the T- and Corner-forms. Once again, the initial output values y(0) are set to zero.
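Stripped of the CNN formalism, templates (2) and (3) count the black 8-neighbours of a black pixel: exactly one neighbour marks an ending, three or more mark a junction candidate. A sketch of that counting view (an illustration, not the paper's implementation; it uses 1 for black and 0 for white instead of the bipolar convention):

```python
def black_neighbours(img, r, c):
    """Count black pixels in the 8-neighbourhood of (r, c)."""
    n = 0
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if (dr, dc) == (0, 0):
                continue
            rr, cc = r + dr, c + dc
            if 0 <= rr < len(img) and 0 <= cc < len(img[0]) and img[rr][cc] == 1:
                n += 1
    return n

def classify_features(img):
    """Endings have 1 black neighbour, junction candidates at least 3."""
    endings, junctions = [], []
    for r in range(len(img)):
        for c in range(len(img[0])):
            if img[r][c] != 1:
                continue
            k = black_neighbours(img, r, c)
            if k == 1:
                endings.append((r, c))
            elif k >= 3:
                junctions.append((r, c))
    return endings, junctions

# A small Y-shaped skeleton: three endings meeting in one bifurcation.
Y = [[1, 0, 0, 0, 1],
     [0, 1, 0, 1, 0],
     [0, 0, 1, 0, 0],
     [0, 0, 1, 0, 0],
     [0, 0, 1, 0, 0]]
ends, juncs = classify_features(Y)
```

On this clean skeleton the counting rule already suffices; the T- and Corner-form templates (4)-(5) and False Feature Elimination exist precisely because real skeletons are not this clean.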

Fig. 6 Different types of junction points: regular bifurcation (a), T-form (b) and Corner-form (c).

Junction points in T- and Corner-forms are extracted by means of template (4), which removes all real bifurcations that have been detected using (3). The template of Isolated Point Extraction (5) is applied in parallel and the result is added to the outcome of (4) by means of a Logical OR operation [12]. Obviously, the initial output values equal zero here as well. The order of operation in the procedure of bifurcation detection is depicted in Fig. 7.

Isolated Point Extraction:   A = 0,   B = [ -1 -1 -1 ; -1 1 -1 ; -1 -1 -1 ],   i = -8    (5)

As mentioned before, we employ the approach discussed in [7] to eliminate false endings and bifurcations. In order to remove all false points that are separated by a distance d < n, it performs the dilation and erosion operations for n/2 iterations each. The dilation operation connects all features with distance d < n in between. As a conventional erosion operation would bring the disconnected objects in the dilated image back to the original size, the erosion has to be applied in two diagonal directions. Thus the templates "Erosion \" and "Erosion /" are employed. The former erodes all pixels inserted in the dilated image except those belonging to the center of diagonal lines with direction "\". Erosion / works similarly for all diagonal lines with direction "/". The block diagram in Fig. 8 shows the sequence of the different operations. The applied templates of Dilation, Erosion / and Erosion \ are given in (6), (7) and (8) respectively.

Dilation:   A = 0,   B = [ 1 1 1 ; 1 1 1 ; 1 1 1 ],   i = 8    (6)

Erosion /:   A = 0,   B = [ 0 0 1 ; 0 1 0 ; 1 0 0 ],   i = -2    (7)

Erosion \:   A = 0,   B = [ 1 0 0 ; 0 1 0 ; 0 0 1 ],   i = -2    (8)

The fact that two false features are usually close to each other [7] implies the use of a low value of n. Actually, experiments show that n = 2 is sufficient in our case. Thus, one



iteration is enough for each of the operations, which explains why all feedback templates equal zero. Consequently, the initial output values are all set to zero as well.
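With n = 2, False Feature Elimination amounts to one dilation followed by one erosion in each diagonal direction. A sketch of the set-theoretic effect of templates (6)-(8), again under the assumed 1/0 pixel convention with out-of-range pixels treated as white:

```python
def dilate(img):
    """Template (6): a pixel turns black if any pixel in its 3x3 neighbourhood is black."""
    rows, cols = len(img), len(img[0])
    out = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            out[r][c] = int(any(
                img[r + dr][c + dc] == 1
                for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                if 0 <= r + dr < rows and 0 <= c + dc < cols))
    return out

def erode_diag(img, slope):
    """Templates (7)/(8): keep a black pixel only if both diagonal
    neighbours in the given direction are black.
    slope = +1 follows the "\\" diagonal, slope = -1 the "/" diagonal."""
    rows, cols = len(img), len(img[0])

    def pix(r, c):
        return img[r][c] if 0 <= r < rows and 0 <= c < cols else 0

    return [[int(img[r][c] == 1 and pix(r - 1, c - slope) == 1
                 and pix(r + 1, c + slope) == 1)
             for c in range(cols)] for r in range(rows)]

# Two feature points at distance 2 are bridged by a single dilation.
pts = [[0] * 5 for _ in range(5)]
pts[2][1] = pts[2][3] = 1
dil = dilate(pts)

# Erosion "\" keeps only the centre of a "\"-directed diagonal line.
line = [[0] * 5 for _ in range(5)]
line[1][1] = line[2][2] = line[3][3] = 1
kept = erode_diag(line, +1)
```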

Fig. 8 Operations involved in False Feature Elimination. The number of iterations, n/2, depends on the distance, n, between two false features.

So far, the extracted bifurcations and endings are represented as single points; only the location of every ending and bifurcation has been obtained. In order to perform the procedure of Feature Matching, the direction of each feature needs to be known as well. The template of Figure Reconstruction (9) takes the original image as input and the intermediate image (with the extracted features) as initial output, in order to reconstruct each feature to the extent that makes it comparable. The number of iterations determines the number of pixels that are restored of the three lines leaving a bifurcation and the single line leaving an ending.

Figure Reconstruction:   A = [ 0 0 0 ; 0 8 0 ; 0 0 0 ],   B = [ 1 1 1 ; 1 1 1 ; 1 1 1 ]    (9)
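Functionally, Figure Reconstruction grows each extracted feature point back along the skeleton, one pixel per iteration. A behavioural sketch of that effect (1/0 pixels; this mimics what template (9) achieves, not its fixed-point evaluation):

```python
def reconstruct(skeleton, seed, iterations):
    """Grow the seed image along the skeleton: per iteration, a skeleton
    pixel turns black if it touches an already-black output pixel."""
    rows, cols = len(skeleton), len(skeleton[0])
    out = [row[:] for row in seed]
    for _ in range(iterations):
        nxt = [row[:] for row in out]
        for r in range(rows):
            for c in range(cols):
                if skeleton[r][c] == 1 and out[r][c] == 0:
                    if any(out[r + dr][c + dc] == 1
                           for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                           if 0 <= r + dr < rows and 0 <= c + dc < cols):
                        nxt[r][c] = 1
        out = nxt
    return out

# A horizontal skeleton line with an ending at (2, 0): two iterations
# restore two further pixels of the line leaving that ending.
skel = [[0] * 5 for _ in range(5)]
for c in range(5):
    skel[2][c] = 1
seed = [[0] * 5 for _ in range(5)]
seed[2][0] = 1
restored = reconstruct(skel, seed, 2)
```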

IV. ANALYSIS AND VERIFICATION

MATLAB has been widely adopted in academic and industrial communities as an interactive software package that uses matrices to perform heavy calculations, with results presented using advanced graphics. Thus, MATLAB provides an easy-to-use and feasible environment, in which verification of the aforementioned algorithm is carried out.

Fig. 9 Original image containing a vein pattern (a) and the black-and-white image after binarization (b).

We start with the image in Fig. 9.a, where a pattern of veins is captured. Applying the first operation of preprocessing, i.e. Binarization, yields a black-and-white image (Fig. 9.b). The binary image serves as input to the sequence of Skeletonization templates (Table 1), which is applied iteratively 7 times to get the line-thinned image shown in Fig. 10.a. As this image undergoes the operation of Isolated Pixel Removal (1), all unwanted isolated points are removed, as depicted in Fig. 10.b.

Fig. 10 Result of skeletonization (a) and Isolated Pixel Removal (b).

As the stage of preprocessing is accomplished, we move on to the first stage of feature extraction. Ending Detection produces the image shown in Fig. 11.a, while Fig. 11.b is obtained by means of Bifurcation Detection. The subsequent stages, from eliminating false features in the ORed image to the reconstruction of bifurcations and endings, result in the images shown in Fig. 12.

Fig. 11 Endings (a) and bifurcations (b).

Fig. 12 Adding the images with ending and bifurcation points by applying the operation of Logical OR (a) before eliminating the false features (b). Reconstruction of endings (c) and bifurcations (d).


V. EXPERIMENTAL SET-UP

The feature of local connectivity gives DT-CNNs [13] a first-hand advantage for VLSI implementation with very high speed and complexity. A fully digital implementation relies on the field-programmable gate-array (FPGA) for reasons like explicit parallelism and reconfigurability. Grey-level images are supported, where each pixel has a value in the interval [-1, 1]. Both pixels and template entries are represented as 8-bit fixed-point signed values. However, different accuracy requirements impose different placements of the binary point: a 1-bit integer part and a 7-bit fractional part are used for pixel values, while template entries are represented with a 4-bit integer part and a 4-bit fractional part.
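The two number formats can be sketched as follows: both are 8-bit signed, pixels with 7 fractional bits (Q1.7), template entries with 4 (Q4.4). Round-to-nearest with saturation is an assumption; the paper does not state its rounding mode.

```python
def quantize(x, frac_bits, total_bits=8):
    """Quantize x to a signed fixed-point value with the given fractional width."""
    scale = 1 << frac_bits
    q = round(x * scale)
    lo, hi = -(1 << (total_bits - 1)), (1 << (total_bits - 1)) - 1
    q = max(lo, min(hi, q))          # saturate to the 8-bit range
    return q / scale

# Pixels, Q1.7: steps of 1/128 in [-1, 127/128]; +1.0 saturates to 127/128.
# Template entries, Q4.4: coarser steps of 1/16, range [-8, 127/16].
pixel = quantize(0.5, 7)
entry = quantize(-0.5, 4)
```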

The major problem for the implementation of a CNN on aspatial systolic architecture using an FPGA is to define asuitable geometry.

x_c(k) = Σ_{d∈N_r(c)} a_{cd} y_d(k) + Σ_{d∈N_r(c)} b_{cd} u_d + i_c    (10)

The need for multiplication by 8 in (9) is resolved by a simple 3-bit left shift. Thus, the computational stage of the nodal operation is brought down to only 10 clock cycles instead of the original 19 cycles.

The design is hosted on a Virtex II Pro P30 FPGA from Xilinx, which is installed on a development board from Memec that provides 4 external SDRAM memories with a size of 32 MB. Additionally, the board is equipped with both serial and parallel communication ports, allowing for different communication schemes with a PC. Due to the presence of the PowerPCs, only 78 nodes are realized. Thus, 78 pairs of Multiplier/RAM out of 136 are used. The logic utilization shows to be 64% of the available slices, which opens for accommodating additional functionality. The design runs at a clock frequency of 100 MHz.
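The shift trick works because the feedback weight 8 in template (9) is a power of two, so on two's-complement fixed-point words the multiplier can be bypassed entirely. A small illustrative check:

```python
# Multiplying a raw fixed-point integer word by 8 equals a 3-bit left
# shift of its representation (in hardware the shifted value must of
# course still fit the accumulator width).
for raw in (-13, 0, 7, 42):
    assert raw << 3 == raw * 8
```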


The CNN nodal equation (10) assumes all data to be present simultaneously, which would be next to impossible to provide on the limited wiring of the FPGA. For the application described in this paper we use a Network-on-Chip based CNN implementation, dubbed Caballero [8]. It employs the approach of switched broadcasting, where packets are transmitted among cells within a certain neighborhood using a predefined communication pattern. The scheme employed in Caballero groups the cells of the entire CNN grid into subgroups of 5 cells each (Fig. 13.a). As the algorithm of feature extraction is based on 1-neighborhood templates, the packets of a current cell c are completely distributed in the neighborhood after two steps (Fig. 13.b). Thus, the internal router in each cell has two phases. During the first one, the cell's own packet is transmitted to the orthogonal neighbors, while in the second phase eventual forwarding of received packets to one of the orthogonal neighbors takes place. Next to the router, a node contains a multiplier, a BlockSelect RAM and additional logic for accumulation and control.
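That two orthogonal-only phases suffice for a 1-neighborhood template can be checked mechanically: every cell of the 3x3 neighborhood lies within two orthogonal hops of the centre. A sketch of this coverage argument (not of Caballero's actual router logic):

```python
# Phase 1 broadcasts a cell's packet to its 4 orthogonal neighbours;
# phase 2 forwards it one more orthogonal hop.  Collect everything
# reachable in at most two hops and compare with the 3x3 neighbourhood.
ORTHOGONAL = [(-1, 0), (1, 0), (0, -1), (0, 1)]

reached = {(0, 0)}
frontier = {(0, 0)}
for _phase in range(2):
    frontier = {(r + dr, c + dc)
                for (r, c) in frontier for dr, dc in ORTHOGONAL}
    reached |= frontier

neighbourhood = {(dr, dc) for dr in (-1, 0, 1) for dc in (-1, 0, 1)}
covered = neighbourhood <= reached   # diagonals arrive in the second phase
```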

Fig. 13 (a) Labelling of nodes belonging to the same group; (b) scheme of transmission.

All templates introduced previously are preloaded into the BlockSelect RAM internal to each node, which also serves as temporary storage for the intermediate outputs. As the feedback coefficients in all templates except (9) equal zero, the contribution of y-values in the calculation of the state equation (10) is removed. The accumulator is initialized with the value of the bias i_c, after which the subsequent contributions computed on the multiplier are accumulated.


Fig. 14 FPGA test set-up

The experiment is kept simple by removing the need for interaction with MATLAB. The original image and the MATLAB result are included in the programming file as BS-RAM content. The CNN then works on the image, and the result is compared with the stored MATLAB result. If these results are in agreement, a LED on the Memec board is lighted (Fig. 14).

VI. DISCUSSION

This paper illustrates that the algorithms described in [9]-[10] can also be used for vein identification. In comparison with the algorithm presented in [7], the operation of False Feature Elimination is applied only once instead of 3 times. Furthermore, we simplify the design by restricting the number of iterations to 1 for all used templates. This allows for comparison with solutions based on layers of feed-forward networks [14]. Where [14] performs detection based on pre-learned physical features, here such features are pre-defined through the template application.

Preprocessing is achieved by applying the templates of skeletonization, which mask out the features of shape and thickness of the veins. The order in which the templates of skeletonization are applied influences the type, the number and the direction of the extracted features. Fig. 15 shows the output of bifurcation detection as the templates in Table 1 are applied in the order NW, N, NE, E, SE, S, SW and W.

The algorithm is restricted to 2-dimensional black-and-white images. Unfortunately, this limitation increases the rate


of false detection, as vessels passing over each other in reality will be treated as a cross-section in the 2-dimensional image (Fig. 16). The operation of False Feature Elimination is crucial for the accuracy of the overall algorithm, as the number of false features as well as the total number of extracted features is affected. Unfortunately, the current algorithm proves to be sensitive to image resolution.

Fig. 15 A certain order of skeletonization templates applied on (a) results in a false feature (b) instead of the real one (c).

The paper is illustrated by the example shown in [6], as this provides data on alternative methods for feature extraction. That paper reports that 65 junctions can be found manually in the original images. As the algorithm does not distinguish between true junctions and corners (points of high curvature), many of them were found not to be bifurcations or end-points. Contrary to what has been found in [14], it claims that a scale-space model also gives a large number of false features. A careful look at Fig. 12.d shows that 28 of the 30 extracted bifurcations are real, while 2 are false features caused by the 2-dimensional mapping. This is still much less than the 43 bifurcations that were manually counted.

Fig. 16 The non-crossing veins (marked with a circle) give rise to a false bifurcation in the 2-dimensional image.

By removing the False Feature Elimination, the detection rate is raised to 100%, but the false detection rate is also increased. It appears therefore that this algorithm is not fully adequate. The reason seems to be a lack of image resolution. As claimed in [14], this can be solved by re-introducing feedback to adapt the resolution before skeletonization is performed, similar to the bio-inspiration claimed in [15]. This confirms the setting of different block sizes in [6] and the variety of 2nd-layer networks in [14].

VII. CONCLUSIONS

The implementation stresses the exploitation of the FPGA as realization target. We have aimed at the best detection using few resources, as a realistic product will be based on bi-spectral imaging. The merging of features from two sources will definitely raise the performance figures but poses additional computational demands. Hence, the computational need of the feedback contribution is removed. This provides a good starting position to extend the hardware with variable resolution and 3-dimensional modeling.

Feature matching of two vein patterns depends on the type, the location and the direction of endings and bifurcations. It is debatable whether, with all the existing variety, it is really required to find all the existing bifurcations and endings to limit the number of images that need to be inspected on a per-pixel basis. A larger experiment seems required to decide how much is enough.

VIII. ACKNOWLEDGMENT

The authors would like to thank Lin Xue and Ren Huan for their collaboration during the project. For the description of the algorithmic base of our procedure, we have re-used the vein pattern images that appear in [6].

IX. REFERENCES

[1] Press Release, Precise Biometrics (2006, March).

[2] K. Munro, "Biometrics: attack of the clones," Infosecurity Today, vol. 3, issue 1, p. 45, January/February 2006.

[3] S.-K. Im et al., "A Biometric Identification System by Extracting Hand Vein Patterns," Journal of the Korean Physical Society, vol. 38, no. 3, March 2001, pp. 268-272.

[4] Hitachi Engineering Co. Ltd. (2006, March), "About Finger Vein."

[5] Jean-Francois Mainguet (2006, June).

[6] L. Wang and A. Bhalerao, "Detecting branching structures using local Gaussian models," Proceedings IEEE Symposium on Biomedical Imaging, 2002, pp. 161-164.

[7] Q. Gao and G. S. Moschytz, "Fingerprint Feature Extraction Using CNNs," European Conference on Circuit Theory and Design, Espoo, Finland, 2001, pp. 97-100.

[8] S. Malki and L. Spaanenburg, "On Packet-Switched Implementation of Discrete-Time CNN," Euromicro Symposium on Digital System Design, Rennes, France, 2004, pp. 234-241.

[9] Q. Gao, P. Forster, K. R. Mobus and G. S. Moschytz, "Fingerprint Recognition Using CNNs: Fingerprint Preprocessing," IEEE International Symposium on Circuits and Systems, vol. 2, 2001, pp. 433-436.

[10] Q. Gao and G. S. Moschytz, "Fingerprint Feature Matching Using CNNs," ISCAS, 2004, pp. 73-76.

[11] A. Jain, R. Bolle and S. Pankanti, Biometrics: Personal Identification in Networked Society, Kluwer Academic Publishers, 1999.

[12] Tamas Roska et al., "CNN Software Library (templates and algorithms), vers. 7.3," Tech. Rep. DNS-CADET-15, Analogical and Neural Computing Laboratory, Computer and Automation Research Institute, Hungarian Academy of Sciences, 1999.

[13] H. Harrer and J. A. Nossek, "Discrete-Time Cellular Neural Networks," International Journal of Circuit Theory and Applications, vol. 20, 1992, pp. 453-467.

[14] C. Grunditz, M. Walder and L. Spaanenburg, "Constructing a neural system for surface inspection," Proceedings IJCNN, vol. III, Budapest, July 2004, pp. 1881-1886.

[15] E. Alpaydin and P. Marchal, "Why an 'A' is an 'A'," Proceedings Journees d'Electronique, Lausanne, Switzerland, 1989, pp. 88-103.