


Author(s)

Zhen LI, College of Engineering, South China Agricultural University, WuShan, GuangZhou, GuangDong, 510642, P.R. China. Email: [email protected]

Tian-sheng HONG (ASABE Member ID: M0330870), College of Engineering, South China Agricultural University, WuShan, GuangZhou, GuangDong, 510642, P.R. China. Email: [email protected]

Publication Information

Pub ID: 073121. Pub Date: 2007 ASABE Annual Meeting Paper.

The authors are solely responsible for the content of this technical presentation. The technical presentation does not necessarily reflect the official position of the American Society of Agricultural and Biological Engineers (ASABE), and its printing and distribution does not constitute an endorsement of views which may be expressed. Technical presentations are not subject to the formal peer review process by ASABE editorial committees; therefore, they are not to be presented as refereed publications. Citation of this work should state that it is from an ASABE meeting paper. EXAMPLE: Author's Last Name, Initials. 2007. Title of Presentation. ASABE Paper No. 07xxxx. St. Joseph, Mich.: ASABE. For information about securing permission to reprint or reproduce a technical presentation, please contact ASABE at [email protected] or 269-429-0300 (2950 Niles Road, St. Joseph, MI 49085-9659 USA).


An ASABE Meeting Presentation

Paper Number: 073121

Automatic Detection of Growing Orange Fruits by Machine Vision

LI Zhen, Ph.D. Candidate, South China Agricultural University, GuangZhou 510642, P.R. China.

[email protected]

HONG Tian-sheng, Ph.D., Professor, Corresponding Author, South China Agricultural University, GuangZhou 510642, P.R. China.

[email protected]

WANG Wei-xing, Ph.D., Professor, South China Agricultural University, GuangZhou 510642, P.R. China.

SONG Shu-ran, M.Eng., Assistant Professor, South China Agricultural University, GuangZhou 510642, P.R. China.

Written for presentation at the 2007 ASABE Annual International Meeting

Sponsored by ASABE

Minneapolis Convention Center, Minneapolis, Minnesota

17-20 June 2007


Abstract. Color pictures of oranges with complicated backgrounds were segmented, and the objects in them were detected, by using the Mahalanobis distance algorithm, a multivariate statistical method; the results were then analyzed. The ratio of orange fruits in pictures from different growing periods was calculated. The results indicate that, as long as the picture elements and their indices are reasonably selected and verified, the Mahalanobis distance algorithm can overcome the influence of lighting and of the complicated background. The results also show that the implemented program segments the images and detects the objects effectively.



Keywords. Orange pictures, Mahalanobis distance, image clustering, object detection



Introduction

Oranges are planted in 20 provinces (regions and municipalities) in China, with a planting area of 1,330,000 hm2, but the average yield is one quarter of that of the United States and one half of the world average. At harvest time, more than 90% of the operations are performed manually, and harvesting efficiency is very low (Shen, 2002). Currently, orange yield is estimated at a macroscopic level from weather conditions (such as temperature and rainfall), which gives only vague results such as "increased production" or "reduced production". In addition, there is no reasonable means of evaluating the fruit-setting rate and the rate of maturation. Therefore, improving the accuracy of yield and maturity estimation, predicting the orange yield more reasonably and effectively, and providing guidance to the market are very necessary.

With the development of digital image technology, oranges can be identified by segmenting and detecting objects in photographs taken of a certain area of the tree crown, and the growing situation can be investigated by calculating the proportion of fruit in the image. Object detection and positioning in close-range pictures can also be used to guide pesticide spraying and the operation of harvesting mechanisms. Currently, the most common approach to image segmentation and object detection is threshold segmentation (Sahoo, 1988; Wellner, 1993). However, color pictures taken in the field are strongly affected by lighting and have complicated content, and threshold segmentation cannot achieve satisfactory results when the gray levels of the target and the background are close or overlapping. In this study, orange photographs taken under different conditions were segmented with the Mahalanobis distance algorithm in order to extract the oranges and calculate the fruit ratio; this ratio was then used as basic data for analyzing the growing situation of a single tree and for predicting the orange yield. The segmentation method can also be used in machine vision and targeted pesticide spraying.

Image division based on Mahalanobis distance

Figure 1(a) shows a single-view panoramic color photograph of Shatang oranges at about 80% maturity. Because the picture was taken from a long distance, its content includes not only the elements of the orange tree (branches, leaves, and fruits) but also many other elements, such as soil, weeds, and the sky background. From this figure, the fruit-bearing situation of the tree in one direction can be analyzed and quantified, which provides a basis for estimating the yield of this tree. Figure 1(b) shows oranges with orange skin against a natural (sky) background. Figure 1(c) is a close-range picture, which can serve as the basis for guiding automatic spraying and picking mechanisms. The color of the target (the oranges) differs among the three pictures. According to their color, the elements in each picture are divided into six classes, as shown in Table 1.

Pictures taken in real scenes are usually strongly affected by lighting. To eliminate this influence, Ying (2004) corrected the color of each pixel in an orange image with a global image-brightness correction model based on the position of the pixel on the orange surface. However, those tests were performed in a specially made lighting box, and the target was a single orange model.



Table 1. Elements in the different orange color pictures

Image | Class 1 | Class 2 | Class 3 | Class 4 | Class 5 | Class 6
Fig. 1(a) | Natural background | Peak green leaves | Dark green leaves | Fruit (yellow) | Branches | Grass ground
Fig. 1(b) | Natural background | Peak green leaves | Dark green leaves | Non-lighted fruit (orange color) | Lighted fruit (golden yellow) | Trunk and branches
Fig. 1(c) | Non-lighted zone | Peak green leaves | Dark green leaves | Non-lighted fruit (yellow color) | Lighted fruit (light yellow) | Trunk and branches

An image clustering approach based on the Mahalanobis distance has been used to detect objects after translation, rotation, and scale changes in gray-level images in which the gray level of the background differs greatly from that of the object (Li, 2001). In this experiment, the color information of the color pictures is used directly to cluster the image and detect the objects with the Mahalanobis distance approach.

The Mahalanobis distance, a distance measure from multivariate statistics (He, 2004), takes into account the distance between each sample and the sample mean; it is a weighted distance between two samples from a k-dimensional population. The distance used here not only solves the class-membership problem of a point in the feature space, but also distinguishes near from far by calculating the distance of each sample point from a known point (Li, 1998), as shown in Equation (1):

M(F, k) = (F - E_k)^T S_k^(-1) (F - E_k)        (1)

where F is the index vector of the pixel to be classified, E_k is the mean vector of sample space k, and S_k is the covariance matrix of the element to be classified and the sample space. M(F, k) describes the surface of a k-dimensional ellipsoid centered at E_k for a k-dimensional normal random variable. The values of M(F, k) are compared, and if M(F, n) ≤ min(k ≠ n) M(F, k), the element to be classified is assigned to class n.
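As an illustration of this classification rule (not part of the original paper), the following Python/NumPy sketch assigns each pixel index vector to the class with the smallest Mahalanobis distance; the array names and the use of a pseudo-inverse are assumptions made for illustration only:

```python
import numpy as np

def classify_pixels(features, means, covs):
    """Assign each index vector F to the class with the smallest
    Mahalanobis distance M(F, k) = (F - E_k)^T S_k^(-1) (F - E_k).

    features : (N, d) array of per-pixel index vectors F
    means    : (K, d) array of class mean vectors E_k
    covs     : (K, d, d) array of class covariance matrices S_k
    Returns an (N,) array of class labels 1..K.
    """
    n_pixels, n_classes = features.shape[0], means.shape[0]
    dists = np.empty((n_pixels, n_classes))
    for k in range(n_classes):
        # A pseudo-inverse stands in for S_k^(-1) in case S_k is singular
        # (a possibility noted later in the text).
        inv_cov = np.linalg.pinv(covs[k])
        diff = features - means[k]                        # (N, d)
        dists[:, k] = np.einsum('ni,ij,nj->n', diff, inv_cov, diff)
    # np.argmin returns the first minimum, so ties go to the lower class
    # number, matching the tie-breaking rule given later in the text.
    return np.argmin(dists, axis=1) + 1
```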

To cluster the image and detect objects with the Mahalanobis distance approach, the sample spaces must first be selected in order to test the algorithm. The samples were chosen carefully, and the influence of lighting was taken into account, so that this influence could be eliminated to a certain extent.


Figure 1. Pictures of different kinds of orange: (a) Shatang orange; (b) oranges with orange skin; (c) oranges with yellow skin.


Six classes of element samples were selected, numbered k = 1, 2, …, 6. The estimated mean vectors and covariance matrices are E1, E2, …, E6 and S1, S2, …, S6, respectively; the latter are singular matrices.

To cluster the image and detect objects with the Mahalanobis distance approach, sample spaces of all classes must be selected to test the algorithm. In this experiment, 20 pixels were selected for each sample space. A larger number does not necessarily give a better result (Yuan, 2000), because pixels that actually belong to other classes may then be brought into the sample space. According to the examination-accuracy curves obtained for different numbers of characteristic vectors (Figure 2, the relationship between the number of characteristic vectors and the examination accuracy), the desired results were achieved in all three curves when the number of characteristic vectors was 20.
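The empirical check just described (classification accuracy as a function of the number of sample pixels per class) could be sketched as follows. This is illustrative code, not the authors' program; it assumes arrays of manually labeled training and test pixels and reuses the classify_pixels helper sketched earlier:

```python
import numpy as np

def accuracy_vs_sample_count(train_feats, train_labels,
                             test_feats, test_labels,
                             counts=(5, 10, 15, 20, 25, 30)):
    """Estimate class means and covariances from n sample pixels per class
    and measure classification accuracy on labeled test pixels.
    Assumes class labels are numbered 1..K."""
    results = {}
    classes = np.unique(train_labels)           # expected to be 1..K in order
    for n in counts:
        means, covs = [], []
        for c in classes:
            samples = train_feats[train_labels == c][:n]  # first n samples of class c
            means.append(samples.mean(axis=0))
            covs.append(np.cov(samples, rowvar=False))
        predicted = classify_pixels(test_feats, np.array(means), np.array(covs))
        results[n] = float(np.mean(predicted == test_labels))
    return results
```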

Therefore, each sample space contains 20 samples. Each sample is described by the following six indices:

R (red component of the color image pixel)

G (green component of the color image pixel)

B (blue component of the color image pixel)

P1 = R/(R+G+B) (proportion of the red component in the gross color)

P2 = G/(R+G+B) (proportion of the green component in the gross color)

P3 = B/(R+G+B) (proportion of the blue component in the gross color)

The values of E1, E2, …, E6 and S1, S2, …, S6 can be worked out from these indices according to Equation (2) and the equations that follow.

E_k = (1/20) ( Σ(l=1..20) R_k,l , Σ(l=1..20) G_k,l , Σ(l=1..20) B_k,l , Σ(l=1..20) P1_k,l , Σ(l=1..20) P2_k,l , Σ(l=1..20) P3_k,l )        (2)

where X_k,l = (X_k,l1, X_k,l2, X_k,l3, X_k,l4, X_k,l5, X_k,l6) denotes the six-index vector of sample l in class k.

Figure 2. Relationship between the number of eigenvectors (x-axis) and the measuring accuracy ratio (y-axis).


where E_k, a 1×6 vector, is the mean vector in this experiment; it is the multivariate extension of the sample mean in the univariate case.
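A minimal sketch (assumed code, not the authors' program) of how the six indices of one class's 20 sample pixels and the mean vector E_k of Equation (2) might be computed:

```python
import numpy as np

def pixel_indices(rgb):
    """Build the six-index vector (R, G, B, P1, P2, P3) for one or more pixels."""
    rgb = np.asarray(rgb, dtype=float)
    total = rgb.sum(axis=-1, keepdims=True)
    total[total == 0] = 1.0            # guard against division by zero
    shares = rgb / total               # P1, P2, P3: shares of the gross color
    return np.concatenate([rgb, shares], axis=-1)

# 20 sample pixels of one class (made-up RGB values, for illustration only)
samples_rgb = np.random.randint(0, 256, size=(20, 3))
X_k = pixel_indices(samples_rgb)       # 20 x 6 matrix of index vectors X_k,l
E_k = X_k.mean(axis=0)                 # mean vector of Equation (2)
```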

The covariance matrix is calculated by using the sample covariance matrix to estimate the population covariance matrix S_k, as shown in Equation (3):

S_k = ( S_k,ij ),  i, j = 1, 2, …, 6        (3)

where S_k,ij is defined in Equation (4):

S_k,ij = (1/(n-1)) Σ(l=1..n) (X_k,li - E_k,i)(X_k,lj - E_k,j)        (4)

In Equation (4), X_k,li denotes the value of index i (i = 1, …, 6) of sample l (l = 1, …, 20) in class k, and n = 20 is the number of sample points. If M(F, k1) = M(F, k2) = min M(F, k) occurs, min M(F, k) is taken to be M(F, k1), i.e., the element is assigned to class k1.
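Continuing the sketch above, the covariance matrix of Equations (3) and (4) could be estimated as follows. Note that the divisor (n or n-1) used by the authors is not stated here, and the pseudo-inverse is only one possible way of handling the singular covariance matrices mentioned earlier:

```python
import numpy as np

# X_k: 20 x 6 matrix of index vectors for one class (see the earlier sketch)
X_k = np.random.rand(20, 6)           # placeholder data for illustration

S_k = np.cov(X_k, rowvar=False)       # 6 x 6 sample covariance (divisor n - 1)

# Because P1 + P2 + P3 = 1 for every pixel, the six indices are linearly
# dependent and S_k is singular, as the text notes; the Moore-Penrose
# pseudo-inverse can stand in for S_k^(-1) in the Mahalanobis distance.
S_k_pinv = np.linalg.pinv(S_k)
```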

The result of classification is the matrix G shown in Equation (5), in which g_i,j is the class number assigned to the pixel at coordinate (i, j) of the original picture. Reconstructing the picture from matrix G with Equation (6) completes the extraction of the target (oranges):

G = ( g_i,j ),  g_i,j ∈ {1, 2, …, 6}        (5)

F_i,j[R, G, B] = [255, 255, 255] if g_i,j = 4 or 5;  [0, 0, 0] if g_i,j = 1, 2, 3, or 6        (6)
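Putting the pieces together, the class matrix G of Equation (5) and the binary reconstruction of Equation (6) might be produced as in this sketch; it reuses the pixel_indices and classify_pixels helpers assumed earlier and takes classes 4 and 5 as the fruit classes:

```python
import numpy as np

def segment_image(image, means, covs, fruit_classes=(4, 5)):
    """Return the class matrix G (Equation 5) and a binary image
    (Equation 6) in which fruit pixels are white and all others black."""
    h, w, _ = image.shape
    feats = pixel_indices(image.reshape(-1, 3))             # N x 6 index vectors
    G = classify_pixels(feats, means, covs).reshape(h, w)   # class labels 1..6
    binary = np.where(np.isin(G, fruit_classes), 255, 0).astype(np.uint8)
    return G, binary
```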

For example, the elements of matrix G with subscripts (120:124, 140:149) are shown in Table 2.

Table 2. Part of the elements in matrix G

Row \ Column | 140 | 141 | 142 | 143 | 144 | 145 | 146 | 147 | 148 | 149
120 | 2 | 2 | 2 | 2 | 2 | 2 | 2 | 2 | 2 | 2
121 | 2 | 4 | 1 | 4 | 2 | 2 | 2 | 2 | 2 | 2
122 | 1 | 1 | 1 | 4 | 2 | 2 | 2 | 6 | 6 | 6
123 | 1 | 4 | 4 | 2 | 2 | 2 | 2 | 2 | 2 | 6
124 | 4 | 4 | 2 | 2 | 2 | 2 | 2 | 2 | 2 | 2

Using the calculation method described above, image segmentation and object (orange) detection were performed on the orange color images in Figure 1; the results are shown in Figure 3. Comparing Figure 1 and Figure 3, it can be seen that the white regions of the black-and-white pictures essentially reflect the growing status and positions of the oranges in the original color pictures.

Image division error analysis and reduction

When images are segmented with the Mahalanobis distance algorithm, the sample space is selected manually, and an erroneous sample space can cause errors in the segmentation (Wang, 2004). To reduce the error caused by sample-space selection, a step of sample analysis and verification was added in the program between sample selection and sample use; only data that passed this examination were accepted as the sample space. Experiments showed that this reduced the segmentation error to a certain extent.

The sample analysis includes the locations of the samples in the image and the mean values and standard errors of the 20 samples of each of the six classes. The mean value is calculated as in Equation (2); the standard error is calculated with Equation (7) and Equation (8):

(7)

(8)

Equation (7) calculates the standard error of each index, where X_i denotes the index values of the samples and n = 20; in Equation (8), P_k represents the standard error.
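Since the exact expressions of Equations (7) and (8) are not reproduced here, the following sketch only illustrates one plausible reading, in which the sample standard deviation of each index is expressed relative to its mean; this is an assumption, not the authors' formula:

```python
import numpy as np

def sample_statistics(X_k):
    """Mean and a relative standard error of each index for one class.

    X_k : (20, 6) array of index vectors of one class.
    The relative form (std / mean * 100) is assumed for illustration;
    it is not necessarily the exact form of Equations (7) and (8).
    """
    mean = X_k.mean(axis=0)                  # as in Equation (2)
    std = X_k.std(axis=0, ddof=1)            # sample standard deviation
    return mean, 100.0 * std / mean
```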

As an example, for the sample space selected from Figure 1(c), the six indices are summarized in Table 3 and Table 4: Table 3 contains the mean values, and Table 4 contains the standard errors.

Table 3. Average values of the six indices for each class in the sample space

Classification | R | G | B | P1 | P2 | P3
Non-lighted zone | 236.25 | 226.45 | 235.15 | 0.2945 | 0.3459 | 0.3596
Peak green leaves | 98.95 | 128.9 | 101.9 | 0.2972 | 0.3911 | 0.3117
Dark green leaves | 230.25 | 170.35 | 30.2 | 0.5368 | 0.3908 | 0.0698
Unlighted fruit (yellow) | 76.1 | 89.95 | 88.7 | 0.2730 | 0.3174 | 0.4096
Lighted fruit (golden) | 247.3 | 245.9 | 201.45 | 0.3583 | 0.3558 | 0.2859
Trunk and branches | 132.55 | 139.95 | 142.5 | 0.3277 | 0.3369 | 0.3356

Figure 3. Results of image division and object detection: (a) result for the Shatang orange picture; (b) result for the orange-skin picture; (c) result for the yellow-skin picture.

It can be seen from Table 4 that the standard errors of the sixth class (trunk and branches) are relatively large, because the trunk and branches contain both green and black colors and therefore span a wider color range.

Table 4. Standard errors of the six indices for each class in the sample space

Classification | R | G | B | P1 | P2 | P3
Non-lighted zone | 11.815 | 7.8859 | 8.3223 | 6.1681 | 3.2497 | 6.5947
Peak green leaves | 3.0036 | 2.5882 | 2.6737 | 9.5903 | 4.9069 | 13.5194
Dark green leaves | 9.4492 | 2.2556 | 5.5497 | 4.8882 | 1.2468 | 5.5881
Unlighted fruit (yellow) | 7.6565 | 7.9664 | 4.474 | 17.886 | 19.814 | 25.079
Lighted fruit (golden) | 3.3322 | 3.3971 | 22.929 | 8.748 | 7.3097 | 19.55
Trunk and branches | 33.123 | 35.455 | 36.496 | 14.948 | 3.9967 | 16.41

Calculation of the ratio of orange fruit in the image

The fruit area is calculated as follows: using the binary images of Figure 3, the proportion of pixels in the white areas relative to the total number of pixels in the image is computed, and this value is taken as the ratio of orange fruit in the image, as in Equation (9):

R_l = ( ΣP_w / ΣP_t ) × 100%        (9)

where ΣP_w is the total number of white pixels, ΣP_t is the total number of pixels in the binary image, and R_l is the area ratio. By this calculation, the ratio of the orange fruit area is 10.7376% in Figure 3(a), 12.6023% in Figure 3(b), and 33.2194% in Figure 3(c).
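Equation (9) amounts to counting white pixels in the binary image, for example the one returned by the segment_image sketch above; a minimal illustrative check:

```python
import numpy as np

def fruit_area_ratio(binary):
    """Percentage of white (fruit) pixels in a binary image, as in Equation (9)."""
    return 100.0 * np.count_nonzero(binary == 255) / binary.size
```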

Conclusion

Orange color images were segmented and the objects in them were detected by using the Mahalanobis distance algorithm to classify the different elements of the images. This method can overcome the influence of lighting conditions on image segmentation to a certain extent. The calculated ratio of orange fruit area can be used as a basis for yield forecasting.

The samples used as the standard were determined manually, which easily introduces unstable factors into the Mahalanobis distance algorithm. It is suggested that the feature space be constructed from the statistics of a large number of samples, instead of being determined manually, so as to eliminate these unstable factors.

In yield forecasting, an evaluation based on the orange ratio obtained from only a small number of images may cause relatively large errors. Images of orange fruit taken from different, representative orientations should be analyzed according to the growth mechanism of the orange fruit.



Acknowledgements

This work was funded by the Guangdong Science and Technology Project "Research of Non-destructive Fruit Quality Measurement based on the Hyper-spectral Image Technology" (project no. 2006B50106002). We are grateful to the Precision Agriculture Lab of South China Agricultural University for providing the basic instruments needed in the experiments.

References

He, P. 2004. Mathematics and Scientific Statistics and Multi-element Statistics, 161-163. Chengdu: Press of Southwestern JiaoTong University.

Li, B. 1998. Mahalanobis distance approach to real estate appraisal. Journal of Wuhan Urban Construction Institute 15(2): 7-11.

Li, H., S. Ji, and L. Zhang. 2001. Gray image recognition using Mahalanobis distance invariable. Journal of Zhejiang University of Technology 29(4): 357-359.

Sahoo, P., S. Solteni, and A. Wong. 1988. A survey of thresholding techniques. Computer Vision, Graphics and Image Processing 41(2): 233-260.

Shen, Z. 2002. Energetically improve macrocosm diathesis of Chinese orange productivity. South China Fruit Trees 31(6): 24-26.

Wang, X., and Y. Li. 2004. Research for MTS classification methods of parts shape error. Electrical Measurement and Instrumentation 41(5): 7-10.

Wellner, P. D. 1993. Adaptive thresholding on the DigitalDesk. Technical Report EPC-1993-110. Cambridge, England: Rank Xerox Research Center, Cambridge Laboratory.

Ying, Y., and F. Fu. 2004. Color transformation model of fruit image in process of non-destructive quality inspection based on machine vision. Transactions of the Chinese Society for Agricultural Machinery 35(1): 85-89.

Yuan, C., and C. Zhang. 2000. Multiple template matching for frontal-view face detection. Acta Electronica Sinica 28(3): 95-98.
