
Intelligent Portion Identification System for Poultry Processing Plant

ADNAN KHASHMAN 1, GULSUM Y. ASIKSOY 2, HALIL FIKRETLER 3 1, 2, 3 Intelligent Systems Research Group (ISRG)

2, 3 Department of Electrical & Electronic Engineering Near East University

Near East Boulevard, Nicosia N. CYPRUS

{1 amk, 2 gyildiz, 3 hfikretler}@neu.edu.tr http://isrg.neu.edu.tr

Abstract: - The trend to replace human laborers with machines continues to grow as technology and processing devices offer ever higher speed and accuracy in different application areas. In poultry processing plants, the slaughter and portioning of birds is now more hygienic because human operators have been replaced with machines, thus reducing the risk of contamination from raw poultry to humans and vice versa. Nowadays, there is even more need to completely separate human operators from raw poultry, as diseases such as bird flu and swine flu have been declared global epidemics. In many poultry processing plants, sorting the portions of a carcass is currently handled by human laborers; therefore, there is a need for an automated sorting system. In this paper, we present a novel intelligent system that uses image processing and a neural network to identify the different poultry portions. The proposed system is rotation invariant, and its output can be used to move robotic arms that physically separate the identified carcass portions into separate containers. Experimental results suggest that the proposed intelligent system can be effectively used in a poultry processing plant.

Key-Words: - Poultry Processing, Chicken Portions, Automatic Sorting, Neural Networks, Intelligent System

1 Introduction

In the processing plants of most products, the sorting of the produce is of great importance. At this stage, the different products are sorted into separate containers, ready for packaging and then transportation to the designated sellers and distributors. In our work we focus on a particular produce: raw poultry portions.

In modern poultry processing plants [1],[2],[3], the slaughter of birds and the subsequent cutting up of the carcass are fully automated, without handling by human operators. This is important as it reduces the potential for contamination from human operators to the raw poultry carcasses and vice versa [4]. The next process after cutting up the carcass is to separate or sort the portions into different containers, followed by packaging. A few plants use an automated method based on the weight of a portion as an indicator for sorting [1],[5], and, to the best of our knowledge, no plant uses a sorting method based on processing images of portions moving on a conveyor belt. Moreover, many existing poultry processing plants continue to perform the sorting process manually, where a number of human laborers stand by a conveyor belt and separate the different portions by hand based on their visual inspection.

We have become more cautious about handling raw meat and poultry products since the spread of diseases such as bird flu and, more recently, swine flu. Therefore, finding a method to sort the different raw poultry portions without physical handling and contact by human operators is of utmost importance [6].

In this paper we address this problem and propose a solution that eliminates direct human physical contact with raw poultry in a processing plant during the sorting process. Our method uses image processing and a neural network classifier to identify poultry portions, which is the first phase of an automated sorting system.

In our experiments we use a raw chicken and its portions (breast, drumstick, fillet, leg, thigh, and wing) as the objects within our image database. These chicken portions are considered due to their popularity amongst customers in general. The rotational invariance property of our proposed intelligent system is the result of obtaining numerous images of the different portions at different orientations; here all parts are rotated in 30° increments and their images are captured.

Proceedings of the 1st International Conference on Manufacturing Engineering, Quality and Production Systems (Volume I), ISBN: 978-960-474-121-2, ISSN: 1790-2769

The neural network acts as the identifier of the captured images, and its output can be used for further processing, such as moving mechanical arms to physically separate the different portions into separate containers.

This paper is organized as follows: Section 2 presents the poultry portion image database and describes the process of obtaining the images. Section 3 describes the intelligent identification system and reviews the two phases of its implementation. Section 4 presents the system implementation results. Finally, Section 5 concludes the work presented within this paper.

2 Poultry Portion Image Database

The implementation of the work presented within this paper involves identifying the different portions of a cut-up poultry carcass. We use a medium-size chicken in our experiments. The bird is cut up manually into several portions, and the most commonly preferred portions are used, namely: breast, drumstick, fillet, leg, thigh, and wing, as shown in the examples in Figure 1.

Fig. 1 Examples of poultry portions: breast, drumstick, fillet, leg, thigh, and wing.

The chicken portions are placed against a white background, and their distance from the camera lens is fixed at 70 cm. Each portion's obverse (front) and reverse (back) sides are captured, in consideration of the possibility that in a processing plant a portion may land on the conveyor belt with either its obverse or its reverse side facing the camera.

In order to assure the rotational invariance property of our proposed system, several images of the different portions were obtained, with each portion being rotated at intervals of 30°, resulting in 24 images of each of the six portions (12 rotations of each side); this provides a total of 144 poultry portion images for implementing the proposed identification system. Figure 2 shows an example of a chicken portion at the considered orientations.

Captured images of the six chicken portions at 0°, 90°, 180°, and 270° rotations will be used for training the back-propagation neural network, thus providing 48 poultry portion images for training (4 obverse and 4 reverse sides of each of the six portions). The remaining 96 portion images in the database (8 obverse and 8 reverse sides of each of the six portions, at 30°, 60°, 120°, 150°, 210°, 240°, 300°, and 330°) will be used for testing the trained neural network within the intelligent system.
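As a quick sanity check, the training/testing split described above can be enumerated in a few lines (the portion, side, and angle labels below are illustrative names, not the authors' file naming):

```python
# Enumerate the database split: 6 portions, 2 sides (obverse/reverse),
# 12 rotations at 30-degree intervals = 144 images in total.
PORTIONS = ["breast", "drumstick", "fillet", "leg", "thigh", "wing"]
SIDES = ["obverse", "reverse"]
ANGLES = list(range(0, 360, 30))          # 0, 30, ..., 330
TRAIN_ANGLES = {0, 90, 180, 270}          # rotations reserved for training

database = [(p, s, a) for p in PORTIONS for s in SIDES for a in ANGLES]
train = [img for img in database if img[2] in TRAIN_ANGLES]
test = [img for img in database if img[2] not in TRAIN_ANGLES]

print(len(database), len(train), len(test))  # 144 48 96
```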

Fig. 2 Rotations by 30° intervals of a poultry leg (0°, 30°, 60°, 90°, 120°, 150°, 180°, 210°, 240°, 270°, 300°, 330°).


Fig. 3 Examples of poultry portion training images (breast at 90°, drumstick at 0°, fillet at 180°, leg at 270°, thigh at 90°, wing at 0°).

Fig. 4 Examples of poultry portion testing images (breast at 120°, drumstick at 150°, fillet at 240°, leg at 300°, thigh at 30°, wing at 60°).

This method of rotation using 30° intervals is considered sufficient to train the neural network for all possible rotations of a poultry portion on a conveyor belt. Figure 3 shows examples of the chicken portion training images, whereas Figure 4 shows examples of the testing images.

3 Intelligent Portion Identification System

The implementation of the proposed intelligent poultry portion identification system consists of two phases: an image processing phase, where chicken portion images undergo resizing, color conversion, and pattern averaging in preparation for the second phase, which trains a back-propagation neural network to identify the different portions.

3.1 Image Processing Phase

The first phase of the proposed identification system involves preparing the training/testing image data for the neural network. Care must be taken to provide the neural network with sufficient data representations of the rotated poultry portions if meaningful learning is to be achieved, while keeping the computational costs to a minimum.

Image processing is carried out in this phase. Images of the rotated chicken portions are captured in RGB color at dimensions of 2592x1944 pixels. The images are then resized to 100x100 pixels and converted to grayscale (pixel values between 0 and 255) using the Adobe Photoshop Elements 7.0 software tool.
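The resizing and grayscale conversion were done in the paper with an off-the-shelf tool; a minimal stand-in sketch of the same two steps, assuming nearest-neighbour resampling and the common Rec. 601 luminance weights (neither method is specified in the paper), could look like this:

```python
# Illustrative stand-ins for the Photoshop preprocessing:
# nearest-neighbour resize to 100x100, then luminance conversion
# to 0-255 grayscale.
def resize_nearest(img, w=100, h=100):
    """img: 2-D list of (R, G, B) tuples; returns an h x w image."""
    src_h, src_w = len(img), len(img[0])
    return [[img[y * src_h // h][x * src_w // w] for x in range(w)]
            for y in range(h)]

def to_grayscale(img):
    """Rec. 601 luma: 0.299 R + 0.587 G + 0.114 B, truncated to int."""
    return [[int(0.299 * r + 0.587 * g + 0.114 * b) for (r, g, b) in row]
            for row in img]

# Example: a uniform 2592x1944 RGB image shrinks to 100x100 grayscale.
rgb = [[(120, 60, 200)] * 2592 for _ in range(1944)]
gray = to_grayscale(resize_nearest(rgb))
print(len(gray), len(gray[0]))  # 100 100
```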

Pattern averaging is then applied to the gray 100x100 pixel images. Here, the image is segmented using 5x5 kernels, and the pixel values within each kernel are averaged and saved as feature vectors for training the neural network. This method results in a 20x20 "fuzzy" bitmap that represents the different chicken portions at various rotations. Other segment sizes can also be used; the segment size trades off feature detail against computational cost. A 5x5 segment size results in a 20x20 feature-vector bitmap, thus requiring 400 neurons in the neural network input layer. The averaging of the segments within an image reduces the amount of data required for neural network implementation, thus providing a faster identification system. Pattern averaging can be defined as follows:

PatAv_i = (1 / (s_k s_l)) Σ_{k=1}^{s_k} Σ_{l=1}^{s_l} P_i(k, l)    (1)

where k and l are the segment coordinates in the x and y directions respectively, i is the segment number, s_k and s_l are the segment width and height respectively, P_i(k, l) is the pixel value at coordinates k and l in segment i, and PatAv_i is the average value of the pattern in segment i, which is presented to neuron i of the neural network input layer. The number of segments in each image of size X×Y pixels (X = Y = 100) containing a poultry portion, and hence the number of neurons in the input layer, is n, with segment index i = 1, 2, 3, …, n, where:

n = (X / s_k) (Y / s_l)    (2)
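A direct implementation of Eqs. (1) and (2) for the 5x5 segment size used here might look as follows (the function name and test image are illustrative):

```python
# Implementation of Eqs. (1) and (2): split a grayscale image into
# non-overlapping s_k x s_l segments and replace each by its mean
# pixel value, yielding the feature vector fed to the input layer.
def pattern_average(img, sk=5, sl=5):
    Y, X = len(img), len(img[0])                 # image height and width
    feats = []
    for y0 in range(0, Y, sl):                   # walk the segment grid
        for x0 in range(0, X, sk):
            seg = [img[y][x]
                   for y in range(y0, y0 + sl)
                   for x in range(x0, x0 + sk)]
            feats.append(sum(seg) / (sk * sl))   # PatAv_i, Eq. (1)
    return feats                                 # length n = (X/sk)(Y/sl), Eq. (2)

# An illustrative 100x100 "image" yields 400 values for the input layer.
img = [[(x + y) % 256 for x in range(100)] for y in range(100)]
feats = pattern_average(img)
print(len(feats))  # 400
```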


Previous works [7],[8],[9] using this pre-processing method showed that the objects within the images were sufficiently represented, and meaningful data within the averaged patterns were obtained to aid the neural network's learning and classification. Pattern averaging provides meaningful learning and reduces the processing time. For the work presented within this paper, pattern averaging also overcomes the problem of varying pixel values within the segments as a result of rotation, thus providing a rotation-invariant system. Using a segment size of 5x5 pixels results in a 20x20 bitmap of averaged pixel values that is used as the input to the second phase, namely neural network training and testing.

3.2 Neural Network Training Phase

The second phase of the proposed identification system is the implementation of a back-propagation neural network classifier. This phase consists of training the neural network using the averaged patterns (feature vectors) obtained from the first phase. Once the network converges (learns), this phase comprises only generalizing the trained neural network using one forward pass.

A 3-layer feed-forward neural network with 400 input neurons, 30 hidden neurons, and 6 output neurons is used to identify the chicken portions and classify them as breast, drumstick, fillet, leg, thigh, or wing. The number of neurons in the input layer is dictated by the number of averaged segments in the 20x20 bitmap.

The choice of 30 neurons in the hidden layer was a result of various training experiments using lower and higher hidden neuron values. The chosen number assured meaningful training while keeping the time cost to a minimum.

The six neurons in the output layer represent the six chicken portions. The activation function used for the neurons in the hidden and output layers is the sigmoid function. During the learning phase, initial random weights with values between -0.3 and 0.3 were used. The learning rate and the momentum rate were adjusted during various experiments in order to achieve the required minimum error value of 0.005, which was considered sufficient for this application. Figure 5 shows the topology of the neural network.
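A minimal sketch of the 400-30-6 back-propagation network described above, with the stated sigmoid activations and initial weights in [-0.3, 0.3]; the momentum term is omitted for brevity, and the learning rate and training data below are illustrative, not the paper's final values:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# 400 inputs (the 20x20 averaged pattern), 30 hidden, 6 output neurons;
# initial weights drawn uniformly from [-0.3, 0.3] as in the paper.
W1 = rng.uniform(-0.3, 0.3, (400, 30))
W2 = rng.uniform(-0.3, 0.3, (30, 6))

def forward(x):
    h = sigmoid(x @ W1)
    return h, sigmoid(h @ W2)

def train_step(x, target, lr):
    """One plain back-propagation update (momentum omitted)."""
    global W1, W2
    h, y = forward(x)
    d_out = (y - target) * y * (1 - y)        # output-layer delta
    d_hid = (d_out @ W2.T) * h * (1 - h)      # hidden-layer delta
    W2 -= lr * np.outer(h, d_out)
    W1 -= lr * np.outer(x, d_hid)
    return 0.5 * np.sum((y - target) ** 2)    # squared error

# Toy run: one random "averaged pattern" labelled as the first class.
x = rng.random(400)
t = np.eye(6)[0]                              # one-hot target, e.g. breast
errs = [train_step(x, t, lr=0.5) for _ in range(200)]
print(errs[-1] < errs[0])                     # error decreases on the toy example
```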

The neural network is trained using only 48 chicken portion images out of the available 144 portion images. The 48 training images are of portions rotated at 0°, 90°, 180°, and 270°. The remaining 96 portion images are the testing images, which are not exposed to the network during training and are used to test the robustness of the trained neural network in identifying the portions despite the rotations.

4 Experimental Results

The simulation of the proposed system and the experimental results were obtained using a 2.8 GHz PC with 2 GB of RAM, the Windows XP OS, and the Borland C++ compiler.

The neural network learnt and converged after 2052 iterations and within 68.42 seconds, whereas the running time for the trained neural network using one forward pass was 6.46 × 10^-4 seconds. Table 1 lists the final parameters of the successfully trained neural network, and the correct identification rates. The robustness, flexibility, and speed of this identification system have been demonstrated through this application.

Fig. 5 The poultry portion identification neural network topology: the 20x20 averaged pattern of a poultry portion (derived from the gray original 2592x1944-pixel image resized to 100x100 pixels) feeds 400 input-layer neurons, followed by 30 hidden-layer neurons and 6 output-layer neurons representing breast, drumstick, fillet, leg, thigh, and wing.


Table 1. Neural network final parameters and correct identification rates (CIR).

Input Neurons                     400
Hidden Neurons                    30
Output Neurons                    6
Learning Coefficient              0.0076
Momentum Rate                     0.33
Minimum Error                     0.005
Iterations                        2052
Training Time (seconds)           68.42
Run Time (seconds)                6.46 × 10^-4
CIR – Training images (48/48)     100%
CIR – Testing images (84/96)      87.5%
CIR – Overall (132/144)           91.67%

The implementation results of the trained intelligent system were as follows: using the training image set (48 images) yielded 100% recognition, as would be expected. The intelligent system implementation using the poultry portion testing images (96 images that were not previously exposed to the neural network) yielded correct poultry portion identification of 84 images, thus achieving an 87.5% correct identification rate. Combining the results using training and testing images yields an overall correct identification rate of 91.67%.
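The reported rates follow directly from the image counts, which is easy to verify:

```python
# Correct identification rates (CIR) from the counts in Table 1.
train_cir = 48 / 48 * 100            # training set: 100.0%
test_cir = 84 / 96 * 100             # testing set: 87.5%
overall_cir = (48 + 84) / (48 + 96) * 100
print(round(overall_cir, 2))         # 91.67
```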

5 Conclusion

This paper presented a rotation-invariant intelligent system for identifying poultry carcass portions using image processing and a trained neural network. Images of six chicken portions (breast, drumstick, fillet, leg, thigh, and wing), rotated at intervals of 30°, were used as the database during the development of the intelligent identification system.

The system uses image processing in its first phase, where averaged segments of a portion's image are obtained and used as the input to the neural network in the second phase. The averaged patterns form fuzzy feature vectors that enable the neural network to learn the various rotations of the objects, thus reducing the computational costs and providing a rotation-invariant identification system.

A real-life application has been successfully implemented, as shown in this paper, to identify commonly preferred chicken portions prior to sorting in a poultry processing plant. The proposed identification system can be used as the first phase of an automated portion sorting system in order to prevent human intervention and contact with raw poultry carcass portions, thus reducing the risk of contamination for both the human operator and the consumer product (chicken portions in this work).

An overall 91.67% correct identification of the six portions, at different orientations and with obverse and reverse views, has been achieved. Rotation at intervals of 30° provides a sufficient poultry portion image database for robust learning by the neural network within the developed rotation-invariant identification system. These results are very encouraging when considering the time costs: the neural network training time was 68.42 seconds, whereas the system run time for both phases (image processing and neural network generalization) was 6.46 × 10^-4 seconds. Therefore, the proposed system can potentially be used in a poultry processing plant as part of a sorting system.

References:

[1] A.R. Sams, Poultry Meat Processing, CRC Press, 2001.

[2] G.J. Van Hoogen, Poultry Processing: Developing New Tools to be Competitive, XVIIth European Symposium on the Quality of Poultry Meat, 2005, pp. 282-284.

[3] Gainco Inc., http://www.gainco.com. Accessed online: June 2009.

[4] K.M. Keener, M.P. Bashor, P.A. Curtis, B.W. Sheldon, and S. Kathariou, Comprehensive Review of Campylobacter and Poultry Processing, Comprehensive Reviews in Food Science and Food Safety, Vol. 3, 2004, pp. 105-116.

[5] Gainco Inc., In-Motion Portion Sizing and Distribution Equipment, GS-2500, http://www.gainco.com/inmotionportion.htm. Accessed online: June 2009.

[6] A. Bardic, Poultry in Motion: Demand for Labor-Saving Automated Poultry Processing Equipment Escalates, National Provisioner, 2004, pp. 1-8.

[7] A. Khashman, Intelligent Local Face Recognition, in K. Delac, M. Grgic, M. Stewart Bartlett (Eds.), Recent Advances in Face Recognition, Ch. 5, IN-TECH, Vienna, Austria, December 2008.

[8] A. Khashman, Blood Cell Identification Using a Simple Neural Network, International Journal of Neural Systems, Vol. 18, No. 5, 2008, pp. 453-458.

[9] A. Khashman, B. Sekeroglu, and K. Dimililer, ICIS: A Novel Coin Identification System, Lecture Notes in Control and Information Sciences, Vol. 345, 2006, pp. 913-918.
