
Page 1: Diplomarbeit im Fachbereich Elektrotechnik & Informatik an der … › haw › volltexte › 2019 › 5322 › ... · 2019-12-13 · Rinaldy Juan Sutyono Title of the Bachelorthesis

Bachelor Thesis
Rinaldy Juan Sutyono

Development of an OpenCV Solution for Recognition and Analysis of Plant Growth Processes in a Digital Learning Module

Fakultät Technik und Informatik
Department Fahrzeugtechnik und Flugzeugbau

Faculty of Engineering and Computer Science
Department of Automotive and Aeronautical Engineering


Rinaldy Juan Sutyono

Development of an OpenCV Solution for Recognition and Analysis of Plant Growth Processes in a Digital Learning Module

Bachelor Thesis based on the study regulations for the Bachelor of Science degree programme Mechatronics at the Department of Automotive and Aeronautical Engineering of the Faculty of Engineering and Computer Science of the Hamburg University of Applied Sciences

Supervising examiner: Prof. Dr.-Ing. Andreas Meisel
Second examiner: Prof. Dr.-Ing. Hans Peter Kölzer

Day of delivery: 1 November 2019


Rinaldy Juan Sutyono

Title of the Bachelor Thesis
Development of an OpenCV Solution for Recognition and Analysis of Plant Growth Processes in a Digital Learning Module

Keywords
Image Processing, OpenCV, Plant Growth, Raspberry Pi

Abstract
In the use case of the plant growth analysis experiment, which is part of the Smart Classroom project, the growth process of a plant is analysed by tracking its height regularly. The purpose of this thesis is to develop a prototype implementation that serves to analyse the growth process of a plant using image processing. The thesis includes an analysis of methods for object localization and the development of an OpenCV-based prototype for analysing plant growth.

Rinaldy Juan Sutyono

Titel der Arbeit
Entwicklung einer OpenCV-Lösung zur Erkennung und Analyse von Pflanzenwachstumsprozessen in einem digitalen Lernmodul

Stichworte
Bildverarbeitung, OpenCV, Pflanzenwachstum, Raspberry Pi

Kurzzusammenfassung
Im Anwendungsfall des Experiments der Pflanzenwachstumsanalyse, das Teil des Smart Classroom Projekts ist, wird der Wachstumsprozess einer Pflanze analysiert, indem ihre Höhe regelmäßig verfolgt wird. Der Zweck dieser Arbeit ist es, einen Prototyp zu entwickeln, der dazu dient, den Wachstumsprozess einer Pflanze mittels Bildverarbeitung zu analysieren. Diese Arbeit umfasst die Analyse der Methode zur Objektlokalisierung und die Entwicklung eines OpenCV-basierten Prototyps zur Analyse des Pflanzenwachstums.


Table of Contents

List of Tables
List of Figures

1. Introduction
   1.1. Smart Classroom
        1.1.1. Components
        1.1.2. Camera as an Intelligent Sensor
   1.2. Objective and Delimitation
        1.2.1. Research Question
        1.2.2. Delimitation

2. Theory
   2.1. Template Matching
   2.2. Image Segmentation
        2.2.1. Thresholding
        2.2.2. Adaptive Thresholding
   2.3. Feature-Based Method
        2.3.1. Edge Detection
        2.3.2. Contour
   2.4. Appearance-Based Method
   2.5. Machine Learning

3. Analysis
   3.1. Functional Requirements
   3.2. Non-Functional Requirements
   3.3. Catalogue of Requirements
   3.4. Camera Calibration
        3.4.1. Pinhole Camera Model
   3.5. Prerequisite
   3.6. Discussion on the Method of Object Detection
        3.6.1. Template Matching
        3.6.2. Image Segmentation
        3.6.3. Feature-Based Method
        3.6.4. Histogram
        3.6.5. Machine Learning
   3.7. Decision

4. Concept

5. Implementation
   5.1. Plant Growth Analysis
        5.1.1. Calculate Plant Height
        5.1.2. Plotting the Scale
   5.2. Web Interface

6. Result and Evaluation
   6.1. Result: Software Algorithm - Illuminance Level
        6.1.1. Evaluation
        6.1.2. Problem
   6.2. Result: Experiment Plant Growth Analysis
        6.2.1. Evaluation

7. Summary

Bibliography

A. Result of Plant Growth
B. Python Code: Calculate Plant Height
C. Python Code: Generate Scale
D. Python Code: Scheduled Measurement
E. Frontend


List of Tables

1.1. List of Hardware Components
3.1. Requirements Catalogue: Functional
3.2. Requirements Catalogue: Non-Functional
6.1. Result of the test under different illuminance levels
6.2. Result of the plant growth test
7.1. Review of functional requirements


List of Figures

1.1. Hardware Kits Smart Classroom
1.2. Experiment of Plant Growth Analysis: Plant with Paper Scale
1.3. Illustration of Plant Growth Analysis
2.1. Template matching: Source image [Swaroop und Sharma (2016a)]
2.2. Template matching: Target image [Swaroop und Sharma (2016a)]
2.3. Result of template matching [Swaroop und Sharma (2016a)]
2.4. Example of thresholding: left input image, right output image [opencv.org]
3.1. Plant growth analysis in bright environment
3.2. Plant growth analysis in dark environment
3.3. Pinhole Camera Model [Hata und Savarese]
3.4. Pinhole camera mathematical model [Bradski und Kaehler (2008)]
4.1. Flow chart of algorithm for plant growth analysis
4.2. Concept of experimental setup
4.3. Example of contour of reference object
4.4. Example of contour of plant
4.5. Reference object
5.1. Flow chart of implemented algorithm for plant growth analysis
5.2. Experimental setup
5.3. Filtered input image with shadow
5.4. Result of absolute difference between input image and filtered image
5.5. Result of image segmentation using Otsu's Binarization
5.6. Result of edge detection using Canny method
5.7. Result of dilation
5.8. All found contours in image
5.9. Bounding rectangle around all found contours
5.10. Rectangle around the outermost contour of the plant
5.11. Plant with reference object
5.12. Result of Plant Growth Analysis Software
5.13. Illustration of scaling unit for the scale
5.14. Plant with scale
5.15. Frontend: Reference object section
5.16. Frontend: Software section
6.1. Result-1: Output
6.2. Result-1: Segmentation result
6.3. Result-2: Output
6.4. Result-2: Segmentation result
6.5. Result-3: Output
6.6. Result-3: Segmentation result
6.7. Result-3: Result of dilation
6.8. Result-4: Output
6.9. Result-4: Segmentation result
6.10. Result-4: Contour
6.11. Error case: High illuminance with shadowing
6.12. Error case: High illuminance - segmentation result
6.13. Error case: High illuminance - contour
6.14. Growth process of the plant in ten days
6.15. Simulated growth process of the plant
A.1. Plant growth: Day 0
A.2. Plant growth: Day 1
A.3. Plant growth: Day 2
A.4. Plant growth: Day 3
A.5. Plant growth: Day 4
A.6. Plant growth: Day 5
A.7. Plant growth: Day 6
A.8. Plant growth: Day 7
A.9. Plant growth: Day 8
A.10. Plant growth: Day 9
A.11. Plant growth: Day 10
A.12. Plant growth: Day 10 - extra


1. Introduction

Nowadays, computer vision, which to a certain extent can also be called image processing, is used in more and more areas. These include everyday applications such as face recognition, which is now integrated into almost every mobile phone, or lane recognition in automobiles, where it is used for autonomous driving. These applications are only a small part of the constantly and rapidly developing world of computer vision.

Object recognition in image processing is a complex process that has many uncertainties, since it also touches the optical process (Peterwitz, 2006, p. 15). Besides object recognition, another common industrial application of image processing is the localization of an object.

This work deals mainly with localizing objects in an image. In this process, a tool such as a camera takes a picture, and the captured image is then processed by software using a certain method. Which method is needed depends on the goal to be reached; therefore, an analysis of suitable methods is necessary to achieve this goal. The objective of this paper is derived from a project called "Smart Classroom" and is described in more detail in section 1.2, Objective and Delimitation.

1.1. Smart Classroom

Smart Classroom is a project developed by Capgemini in cooperation with Dataport whose aim is to impart digital competences to students from secondary level I onwards through an IoT learning platform (Capgemini-Service-SAS, 2017). Smart Classroom is a system that helps teachers actively promote these skills in a playful way by using sensors and actuators. There are already three concrete practical use cases (standard experiments) for teaching, which can be used in addition to experiments created by the students themselves:

• Plant Growth Analysis

• Intelligent Device Control

• Heat-loss Analysis


For this thesis, the experiment of Plant Growth Analysis is relevant. It is described in more detail in section 1.1.2, Camera as an Intelligent Sensor.

1.1.1. Components

The Smart Classroom project consists of various components. The hardware is stored in a suitcase, as shown in figure 1.1. The case contains a Raspberry Pi as the main control unit. In addition, the case contains further accessories and sensors that are required for the experiments.

Figure 1.1.: Hardware Kits Smart Classroom

The main hardware components used in this thesis are listed in table 1.1 below:

No.  Hardware      Model             Role
1    Raspberry Pi  3B                Main Control Unit
2    Camera        D-Link DCS-2132L  Take Picture
3    Multisensor   Fibaro FGSM0001   Measure Illuminance

Table 1.1.: List of Hardware Components


1.1.2. Camera as an Intelligent Sensor

In the standard experiment "Plant Growth Analysis", a camera is used as a sensor. The camera model is a D-Link DCS-2132L. The camera takes a picture of an object, which in this case is a plant. As shown in figure 1.2, until now a piece of paper with a printed scale has been located behind the plant and used as a tool for evaluating the height of the plant. During the evaluation, the pupils enter the value for the height of the plant, which they themselves have read off from the scale, in the front end.

Figure 1.2.: Experiment of Plant Growth Analysis: Plant with Paper Scale

So far, there are three measurement quantities that can be recorded with the camera: length, number, and surface. For this paper, the measurement quantity length is relevant. In the experiment of plant growth analysis, the height (measurement quantity "length") of a plant is tracked by taking a photo of the plant regularly. The other two measurement quantities are used in other experiments and are not relevant for this work.

Figure 1.3.: Illustration of Plant Growth Analysis


As shown in figure 1.3, a multisensor is also used to measure the intensity of the light in the environment. The sensor data, as well as the picture taken by the camera, are sent to the platform by the Raspberry Pi.

1.2. Objective and Delimitation

The aim of this bachelor thesis is to develop a prototype implementation in the form of OpenCV-based software that serves to localize an object. Within the scope of the Smart Classroom project, the object to detect is a plant. The software should be able to recognize the plant's growth process by regularly calculating the height of the plant and plotting a scale that represents this height.

1.2.1. Research Question

Since there are several methods that can be used to localize an object, the main question is: which method is suitable for the use case of "Plant Growth Analysis"? This paper is intended to answer the following questions:

• Which method of image processing is suited for the recognition of plant growth processes?

• How reliable is such a method in recognizing plant growth?

• Which criteria are decisive and must be considered for the recognition of plant growth processes?

1.2.2. Delimitation

In this thesis, only the software for localizing an object and calculating its height is to be developed. An integration into the existing Smart Classroom system was not yet planned at the time this thesis was written.


2. Theory

To find or localize an object in an image, different image processing methods are available; for example, template matching, which compares a current image with an older image. Another approach is image segmentation, which involves separating the foreground from the background. In this section, several methods for object localization are described.

The simplest method to localize an object would be to photograph the object that is to be found and then compare this photo pixel by pixel with another image which contains the object (Lambers und Lordemann, 2003, p. 8). This type of method is also known as "template matching".

Besides this, there are other methods of object recognition. These are divided into two main types, namely model-based detection and appearance-based detection (Lambers und Lordemann, 2003, p. 9).

2.1. Template Matching

Template matching is a technique that allows identifying the parts of an image that match a given image (Swaroop und Sharma, 2016a, p. 988). Two major components are needed for template matching: the source image and the template image. The source image (SI) is the image in which a match to the template image is expected to be found. The template image (TI) is the image that is compared to the SI. Figures 2.1 and 2.2 show an example of a source image and a target image, respectively. In this example, the source image contains various fruits and vegetables. The object to be found is a bell pepper, which means that the template image should contain only an image of a bell pepper.


Figure 2.1.: Template matching: Sourceimage [Swaroop und Sharma(2016a)]

Figure 2.2.: Template matching: Targetimage [Swaroop und Sharma(2016a)]

The aim of this method is to specify, with a certain probability, whether the SI contains an object corresponding to the TI (Jähne u. a., 1996, p. 95-96). The TI describes as precisely as possible the object to be searched for and its embedding in the environment. The template should have the same size and orientation as the searched object. The result of the template matching approach is shown in figure 2.3.

Figure 2.3.: Result of template matching [Swaroop und Sharma (2016a)]


2.2. Image Segmentation

Segmentation is one of the most important steps in image processing. Image segmentation consists of separating the foreground from the background (Bradski und Kaehler, 2008, p. 265): it is decided whether a pixel belongs to the object or to the background. Until now, pixel-precise segmentation methods have dominated image processing (Jähne u. a., 1996, p. 176).

2.2.1. Thresholding

Threshold segmentation is the simplest method of image segmentation and also one of the most common parallel segmentation methods (Yuheng und Hao, 2017, p. 1). Thresholding is a method that filters a pixel based on a reference pixel value, the threshold value. It simply converts a pixel value (pixel intensity) to a new pixel value, e.g., 200. If a pixel value is less than the threshold value, this pixel value is changed to 0, i.e., black; otherwise it is changed to the new pixel value, which is 200 in this example (Peterwitz, 2006, p. 3).

Figure 2.4 shows an example of thresholding. The left image is the image that is to be segmented using the thresholding method; the right image is the result of the thresholding.

Figure 2.4.: Example of thresholding: left input image, right output image [opencv.org]

Thresholding can be used in various ways: with one or more thresholds, and with static or dynamic thresholds. If more than one threshold is used, each threshold forms the upper or lower limit of a color range in the histogram (Peterwitz, 2006, p. 4).


2.2.2. Adaptive Thresholding

While simple thresholding algorithms use a global threshold for all pixels, adaptive thresholding changes the threshold value during the process.

Otsu Binarization

This method was proposed by Nobuyuki Otsu in 1979. It is the most commonly used threshold segmentation algorithm and is also known as the maximum inter-class variance method (Peterwitz, 2006, p. 4). The method looks for the threshold value that minimizes the within-class variance, which is equivalent to maximizing the inter-class variance; the inter-class variance can be defined as the weighted sum of the variances of the two classes. The algorithm assumes that the image contains two classes of pixels, foreground and background, and separates these two classes so as to obtain the optimum threshold value (Roy u. a., 2014, p. 1321). This can be useful if there are strong differences in illumination or reflection gradients in the image (Bradski und Kaehler, 2008, p. 139): depending on the lighting situation, the threshold value adapts to the conditions.

2.3. Feature-Based Method

Segmentation based on intensity and color differences is a pre-processing step but is often not sufficient when searching for specific image content. Therefore, it may be necessary to search images for specific features, e.g., edges, that correlate with objects (Peterwitz, 2006, p. 7). Besides template matching, which is part of the model-based methods (Peterwitz, 2006, p. 15), a feature-based approach is available. The feature-based approach uses geometric features such as edges, contours, or surface area as a basis. The feature-based methods are described below.

2.3.1. Edge Detection

Besides color, shape is the most important characteristic of an object (Reul, 2015, p. 12; Lambers und Lordemann, 2003, p. 9). An edge usually forms the boundary of an object but can also occur in other contexts. An edge in an image describes an area over which the pixel intensity changes significantly (Lambers und Lordemann, 2003, p. 9). However, not every intensity difference is an edge. The change of intensity is also called the "gradient".


"There is always a gray edge between two adjacent regions with different gray values in the image, and there is a case where the gray value is not continuous. This discontinuity can often be detected using derivative operations, and derivatives can be calculated using differential operators." (Yuheng und Hao, 2017, p. 2)

In order to emphasize this change between adjacent pixels, special filter matrices are used, which are also called edge operators (Reul, 2013, p. 17). Examples of such operators for edge detection are the Laplace operator and the Sobel operator.

According to Reul (2013), the Laplace operator is rarely used in practice for edge detection due to its extreme susceptibility to noise (Reul, 2013, p. 19). The most frequently used filters are the Sobel operators (Reul, 2015, p. 12).

Sobel Operator

The Sobel operator is a typical edge detection operator based on the first derivative. It consists of two 3x3 matrices (Yuheng und Hao, 2017, p. 3), which are shown as follows:

G_x = \begin{pmatrix} -1 & 0 & +1 \\ -2 & 0 & +2 \\ -1 & 0 & +1 \end{pmatrix} \quad (2.1)

G_y = \begin{pmatrix} -1 & -2 & -1 \\ 0 & 0 & 0 \\ +1 & +2 & +1 \end{pmatrix} \quad (2.2)

These matrices are the Sobel kernels, which are later convolved with the image pixels. They separately calculate the gradient values for the vertical (Y) and horizontal (X) direction. The two gradient values are then combined to calculate the size of the gradient (Yuheng und Hao, 2017, p. 3), which is also called the magnitude of the gradient G:

G = \sqrt{G_x^2 + G_y^2} \quad (2.3)

Furthermore, the direction of the gradient \varphi, and thus the direction of the detected edge, can also be calculated. The calculation is as follows:

\varphi = \arctan\left(\frac{G_y}{G_x}\right) \quad (2.4)


If the angle \varphi is equal to zero, the image has a vertical edge and the left side is darker than the right (Yuheng und Hao, 2017, p. 3); positive values correspond to a counter-clockwise rotation (Reul, 2013, p. 20).

Canny Edge Detector

One of the most commonly used methods to detect edges is the Canny edge detector. This method was developed by John Canny in 1986 and is now commonly referred to as "the Canny Edge Detector" (Canny, 1986). It is a robust method for detecting edges in images and combines several image processing methods into a complex algorithm (Schwenzer, 2015, p. 22).

The Canny Edge Detector runs in the following order (Moeslund, 2012, p. 88):

1. Gaussian Smoothing
2. Sobel Filter
3. Non-Maximal Suppression
4. Threshold Operation

The image is first smoothed with a Gaussian filter of dimension 5x5 (Reul, 2015, p. 13). This makes the algorithm insensitive to image noise (Canny, 1986, p. 691). Then the image pixels are convolved with the two Sobel kernels, as shown in equations 2.1 and 2.2.

For each pixel, the magnitude of the image gradient is compared with that of its two nearest neighbors in the direction of the gradient. Only the pixel with the largest image gradient is further investigated (Moeslund, 2012, p. 88). This process is called non-maximal suppression. The last step consists of a threshold operation in the form of a hysteresis. For this, two limit values are needed: an upper and a lower one, where the lower value is usually half the upper value. If the gradient of a pixel is above the upper limit, it is marked as an edge pixel. If the gradient is smaller than the lower limit, the pixel is rejected. If the gradient lies between both limits and one of its direct neighbors has a gradient above the upper limit, it is treated as an edge pixel (Bradski und Kaehler, 2008, p. 152).

2.3.2. Contour

Object contours are a strong representation of shape (Schlecht und Ommer, 2011, p. 1). There is a fundamental difference between edges and contours. Edges only describe local maxima of intensity gradients in an image. Consequently, result images are often very noisy, and the edge points often have no connection to each other (Reul, 2015, p. 14). An exception are the edges detected by the Canny algorithm, since the Canny method not only


tries to find all edge points within an image, but also unites them into a contour (Reul, 2013, p. 21). Contours, on the other hand, always describe a coherent set of points. Basically, the goal of contour detection is to detect connected edge points (Reul, 2015, p. 14).

2.4. Appearance-Based Method

The development of the appearance-based method emerged from the question of whether the shape, i.e., the geometric properties of an object, is sufficient. Previous (feature-based) approaches only considered geometric features such as edges or surface area. Compared to the model-based method, the appearance-based method uses non-geometric features such as color or reflectance (Lambers und Lordemann, 2003, p. 15).

Histogram

One example of the appearance-based approach is recognition with histograms. Using histograms as a basis, the color distribution of an object, the edge gradient template of an object, and even the distribution of probabilities representing the current hypothesis of an object's location can be retrieved (Bradski und Kaehler, 2008, p. 193). Color-based methods are ideal for objects with a complex surface, i.e., objects where geometric features such as corners or edges are difficult to extract. The colors are managed in a histogram (Lambers und Lordemann, 2003, p. 17).

2.5. Machine Learning

The goal of the previously mentioned approaches is to extract useful information from an image. With a machine learning approach, the goal is to turn data into information, for example by extracting rules or patterns from that data using clustering or classification algorithms (Bradski und Kaehler, 2008, p. 459). The difference between classification and clustering is that classification is used in supervised learning, where data vectors are assigned predefined labels. Clustering, on the other hand, is used in unsupervised learning, where data vectors do not have labels; similar instances are grouped based on their features or properties (Bradski und Kaehler, 2008, p. 461). Clustering is the task of learning, e.g., assigning a label, from an unlabeled amount of data (Burkov, 2019, p. 154).

One possible way of using a clustering algorithm in the scope of image processing is segmentation using the K-Means algorithm, as shown in the work of Reul (2015).


K-Means Clustering

K-Means clustering is the most frequently used clustering method among the unsupervised machine learning algorithms. The idea of K-Means clustering is to assign a given set of N instances, each with a property vector V(xi), to K clusters. The number K of clusters is specified by the user. The aim is that, after clustering is applied, the instances within a cluster are as similar as possible and instances from different clusters are as different as possible (Reul, 2015, p. 47).

When applied to segmentation, each pixel corresponds to one instance. Its property vector contains the color values of the pixel: red, green, and blue. The number of clusters (K) is equal to 2: the first cluster contains the foreground pixels, the second one contains the background pixels.

Besides, machine learning is also often used for classification, or recognition of an object. Object recognition systems have to "learn" the object they are supposed to recognize. This is often done by extracting features from an object image (e.g., edge direction or edge magnitude) and saving them (Lambers und Lordemann, 2003, p. 21).

But there are also other ways. Other object recognition systems do not "learn" concrete objects in advance, but extract features from the searched images and then decide, on the basis of a knowledge base, whether the object is contained or not. This knowledge base can be, for example, a neural network, which must first be trained (Lambers und Lordemann, 2003, p. 21). Training pictures are presented to the system for this purpose. The system calculates one output for each image. Based on this output and the actual result, the system is able to adapt and improve its neural net. The "learning success" can be checked by means of test images (Lambers und Lordemann, 2003, p. 21).


3. Analysis

The main purpose of the software is to help the user recognize plant growth. In order to carry out this task, this chapter first describes the requirements placed on the software to be developed. They are divided into functional and non-functional requirements.

3.1. Functional Requirements

The functional requirements are the functions that the prototype should offer. They are derived from the use case of the smart classroom.

Localizing the plant in an environment with changing illuminance value

In the experiment "Plant Growth Analysis", the camera takes a picture of the plant every six hours. This means the plant is exposed to different levels of illuminance: darker during the morning and evening, brighter during the day. This causes the image taken by the camera to darken or brighten as well, as shown in figures 3.1 and 3.2. Regardless of the light intensity, the software should be able to localize the plant in the image.

Figure 3.1.: Plant growth analysis in bright environment

Figure 3.2.: Plant growth analysis in dark environment


Calculating plant height

After localizing the plant in the image, the software should also be able to calculate the height of the plant. The calculation depends on the position of the camera. For this, a calibration of the camera is necessary.

The expected output from the software

Since the paper scale from the experiment "Plant Growth Analysis" will not be used, a digital scale that represents the plant height should be provided to the user by the software. The calculated plant height should also be provided.

3.2. Non-Functional Requirements

The non-functional requirements describe additional properties beyond the pure functional possibilities of the software. In accordance with Kleuker (2010, p. 77), the non-functional requirements are divided into different categories and explained.

The height of the object changes over time

In the use case of the experiment "Plant Growth Analysis", the object is a plant. This requirement is intended to define that the object is not static. As the plant grows over time, its appearance also changes over time.

Accuracy

The algorithms for analyzing plant growth should precisely determine the object position; otherwise the calculation of the object's height will be incorrect. Therefore, the permitted deviation of the calculated object height from the real one should be minimal.


3.3. Catalogue of Requirements

The requirements described in this chapter are summarized in tables 3.1 and 3.2.

ID   Description                                                               Priority
A1   Localizing the plant in an environment with changing illuminance value.   High
A2   Calculating plant height.                                                 High
A3   The expected output from the software: calculated plant height.           High
A4   The expected output from the software: a scale that represents the
     plant height.                                                             High

Table 3.1.: Requirements Catalogue: Functional

ID   Description                                                               Priority
B1   The height of the object changes over time.                               High
B2   The deviation of the calculated plant height from the real plant height
     should be minimal.                                                        Middle

Table 3.2.: Requirements Catalogue: Non-Functional

3.4. Camera Calibration

In order to be able to estimate the object's height, the camera needs to be calibrated first. Calibration can serve different purposes. Most calibration is intended to correct lens distortion; the distortion itself is mainly caused by the lens (Bradski und Kaehler, 2008, p. 375).

In this thesis, the calibration is intended to find the correlation factor between the real object height and the object height in the picture. Therefore, the lens distortion mentioned above will not be discussed further. The calibration is based on the pinhole camera formula.


3.4.1. Pinhole Camera Model

In this camera model, light from a distant object is considered, but only a single ray from any given point passes through a small opening, the "aperture", as shown in figure 3.3.

Figure 3.3.: Pinhole Camera Model [Hata und Savarese]

This point is then "projected" onto an image surface. As a result, the image on this projective plane is always in focus, and the size of the image relative to the distant object is given by its focal length (Bradski und Kaehler, 2008, p. 371).

Figure 3.4.: Pinhole camera mathematical model [Bradski und Kaehler (2008)]

Figure 3.4 describes the mathematical relationship between the coordinates of a point in three-dimensional space and its projection onto the image plane. As shown in this figure, there are two similar triangles:


-x / f = X / Z    (3.1)

where,

• f, the focal length of the camera

• Z, the distance from camera to object

• X, height of the object

• x, object’s height on the image plane

Equation 3.1 can also be solved for Z as below:

Z = -(X / x) · f    (3.2)

Because the object’s image plane appears upside down, therefore it has negative sign.

3.5. Prerequisite

To reduce the complexity of localizing the object in the image, some prerequisites are defined.

The image’s background is homogenous

In order to reduce the complexity of the segmentation process, which extracts the object in the image from the background, a homogeneous background will be used. It has also been proposed that image segmentation highlights important objects when the picture is fragmented into homogeneous regions (Blaschke u. a., 2005, p. 216). For the color of the background, it is important that the object contrasts sufficiently with it, so that the plant in the image can still be localized.

Illumination

In the case that only the geometry of an object is relevant, the illumination should be arranged so that the object can be quickly and precisely separated from the background (Jähne u. a., 1996, p. 175). It must also be assumed that the background is uniformly illuminated and that there is no strong shadowing.


3.6. Discussion on the Method of Object Detection

In this section, the methods from the theory (see section 2) are discussed with respect to the requirements above. This discussion results in the choice of the method to be used.

3.6.1. Template Matching

In template matching, a template image (TI) is matched against the source image (SI). Template matching is also very error-prone: it is not robust against size changes of the object in the image, rotations, changes of the viewing angle, exposure changes, etc. (Swaroop und Sharma, 2016b, p. 13). In the use case of plant growth analysis, the object is a plant. Because a plant grows over time, the template matching approach is not suited for this use case: the TI would at some point no longer correspond to the actual object.

3.6.2. Image Segmentation

The segmentation process is useful to separate the foreground from the background. Simple thresholding is faster than adaptive thresholding because it simply converts pixels based on a given, predefined threshold value without computing it. However, this method is only suited for specific cases. In the use case of plant growth analysis, the illuminance level in the environment changes. This makes the approach not robust against changing illumination levels, as the threshold values are predefined.

While a simple thresholding algorithm uses a global threshold for all pixels, an adaptive thresholding algorithm changes the threshold during the process. To do this, the mean or median value is calculated in a sufficiently large area around the considered pixel. This value is then used to adjust the threshold (Peterwitz, 2006, p. 4).

This can be useful if there are strong differences in illumination or reflection gradients in the image (Bradski und Kaehler, 2008, p. 139). Depending on the lighting situation, the threshold value adapts to the conditions. The resulting image is binary, and the illumination differences are compensated by dynamically adjusting the threshold (Peterwitz, 2006, p. 4). The adaptive thresholding method is therefore suitable for this use case, as the threshold values are not predefined but dynamically determined. For this, there are several options for the method to be used. In Otsu's binarization, the image is considered bimodal (Roy u. a., 2014, p. 1321), meaning that there are two peaks in the histogram, which can also represent the image of a plant in front of a homogeneous, contrasting background.


3.6.3. Feature-Based Method

In comparison to template matching, this approach is based on specific features in images that correlate with objects, such as edges or contours. Edges can represent the object or its boundaries; connected edges in turn are called "contours". A contour thus describes more than an edge. Object contours are a strong representation of shape. Because of this, the shape of the object in the image can be determined, and thus its height can be measured later on.

3.6.4. Histogram

As stated in requirement A1, the software should still be able to find the object in a dark environment. The key point here is the pixel value. The appearance-based method, which is, for example, histogram-based, is suitable for this requirement because it uses non-geometric features such as color or reflectance as its main factor. However, this approach is highly dependent on lighting conditions: if the illumination changes, the color also changes. By using these characteristics as a basis, the disadvantages of the model-based method can be avoided, such as segmentation errors and loss of information due to the limitation to geometric characteristics (Lambers und Lordemann, 2003, p. 15).

3.6.5. Machine Learning

In the work of Reul (2015), it is shown that the result of segmentation using K-Means clustering on a homogeneous background is almost identical to Otsu's binarization (Reul, 2015, p. 48). Also, machine learning is often used to classify or recognize an object: given an image, the goal is to recognize what kind of object it contains, or how to classify it. In the use case of plant growth recognition, the goal is to measure the height of the plant; classifying or recognizing the plant is not required. Besides, using a machine learning approach to localize the plant in the image requires a collection of data or images, since the model of the object must first be learned. This learning costs time. Also, there are various kinds of plants, meaning that various plant models would have to be learned. Considering the available time for this thesis, this approach is not really suitable.


3.7. Decision

Based on requirement B1, the template matching approach is not suited for the use case of plant growth analysis. Therefore, it will not be used in the software.

For the segmentation, an adaptive threshold method will be used to determine the threshold value, in this case Otsu's binarization. While a global threshold value is suited only for particular cases or environments (lighting situations), the dynamic threshold value adapts to the conditions depending on the lighting situation.

A contour, on the other hand, is a strong representation of an object's shape. Because of this, the shape of the object in the image can be determined and thus its height can be measured later on, which matches the use case of plant growth analysis.

The histogram-based method is fast and does not require a segmentation process. However, it is highly dependent on illumination. This approach can nevertheless be used as a redundant approach in case of a dark environment, where the image sometimes cannot be segmented properly.

In the work of Reul (2015), it is shown that segmentation can also be achieved using a clustering algorithm such as K-Means. The purpose of using K-Means is to cluster pixels belonging to the background or the foreground. The object in that work is a leaf in front of a homogeneous background. It is shown that the result of segmentation using K-Means is almost identical to the result using Otsu's binarization method. This means that for a simple segmentation of an image with a homogeneous background, a proper result can be achieved without machine learning.

Besides, machine learning is mostly used for classification or recognition of the object in an image. It could be useful if the goal were to specify what kind of plant is in the image. For the use case of plant growth analysis, however, the goal is to find and localize the plant in an image, not to classify the type of the plant.


4. Concept

In this section, the prototype of the software is described based on the decision in section 3.7. The software will use OpenCV as the image-processing library. OpenCV (Open Source Computer Vision Library) is the largest and most widely used open-source software library for computer vision and machine learning. It now has more than 2,500 optimized algorithms, ranging from the conversion of images into different color spaces or edge detection to complex algorithms for facial recognition (Reul, 2015, p. 20). OpenCV was originally programmed in C++ and has interfaces to C, Python, Java and even Matlab. The library is supported under various operating systems such as Windows, Linux, Android and Mac OS. Furthermore, the project homepage1 contains detailed documentation, which facilitates working with the library.

The software algorithm of plant growth analysis is shown below as a flowchart in figure 4.1.

Figure 4.1.: Flow chart of algorithm for plant growth analysis

1 http://www.opencv.org, last checked November 1, 2019


Input Picture

Figure 4.2 shows the setup for detecting the plant. A plant is located in front of a homogeneous background. It is important that the background does not reflect, as this could create noise in the picture. In the background, there is a rectangular reference object. This object will be used as a reference for the height calculation later. A camera then takes a picture of the plant and the background.

Figure 4.2.: Concept of experimental setup

Apply Filter

The operation of filtering, also called "smoothing", is frequently used in image processing. The main purpose of this operation is to reduce noise in the image (Bradski und Kaehler, 2008, p. 109). Later on, the edge detection step is performed on a grayscale image; blurring the image beforehand helps to reduce noise, so that unimportant edges are suppressed (Reul, 2015, p. 12). There are various kinds of filter operators, e.g., the Gaussian filter or the bilateral filter.

Filtering with the Gaussian filter is done by convolving each pixel with the Gaussian kernel and then summing the convolved pixels into the output (Bradski und Kaehler, 2008, p. 111).

Smoothing an image using the bilateral filter can cost a little more processing time than the Gaussian filter, but it provides a means of smoothing while preserving edges. The bilateral filter is therefore also known for "edge-preserving smoothing" (Bradski und Kaehler, 2008, p. 113). Thus, the input image will first be filtered using the bilateral filter to reduce noise.


Create absolute Difference

Then, the filtered image is subtracted from the unfiltered input image using the absolute difference operation cv2.absdiff(). This step is intended to remove any shadow that appears in the input image.

Image Segmentation

In this process, the image is segmented, i.e., the foreground is separated from the background. Using Otsu's method, the threshold value is determined dynamically. This is an advantage in a changing environment, e.g., a change of illuminance level, as the threshold value adapts automatically.

Detect Edge and Contour

The important part is to detect the contour of the foreground object, as the contour represents the shape of the object. OpenCV provides a variety of functions and methods that make working with contours much easier (Reul, 2015, p. 15). Certain contour properties, such as length and area, can easily be calculated in pixels.

After finding the contour in the image, its form is approximated. The purpose of this is to distinguish between the reference object and the plant.

Figure 4.3.: Example of contour of reference object
Figure 4.4.: Example of contour of plant

Figures 4.3 and 4.4 show the difference between the contour of the plant and that of the reference object. The contour is shown in the form of a red line or curve. The contour of the plant is curved, while the contour of the reference object is straight. By calculating the arc length of the contour, the form of the contour can be approximated using the OpenCV function approxPolyDP(). It approximates the shape of a contour to another shape


with fewer vertices (corner points), depending on the given precision (Bradski und Kaehler, 2008, p. 245). By approximating the shape of the contour, the number of vertices can be retrieved, and thus the reference object can be distinguished from the plant.

Camera Calibration

For the calibration, the camera takes a picture of the reference object only, as shown in figure 4.5. For this, a rectangular object should be used. The purpose of the calibration is to calculate the focal length of the camera.

Figure 4.5.: Reference object

The focal length is determined by solving equation 3.1 (see section 3.4.1) for the focal length f:

f = (x / X) · Z    (4.1)

where,

• Z, the distance from camera to object

• X, height of the object

• x, the object’s height on the image plane

By first calculating the focal length of the camera, the object height can be calculated later. The calculation is shown in equation 4.2 below:

X = (Z / f) · x    (4.2)


Calculate Plant Height

The initial idea for calculating the object height, in this case the height of a plant, was to use the focal length of the camera as a basis. This means that the calculation of the plant height always depends on the position of the camera. Thus, whenever the position of the camera changes, the new position must be provided and the camera must be calibrated again, or else the calculation of the plant height becomes inaccurate. The accuracy of the plant height is therefore highly dependent on the provided distance to the camera, which is a variable for the user, meaning that the distance between the camera and the plant has to be measured manually.

Because of this, a new method to calculate the plant height was used, which depends neither on the camera nor on the position of the camera. The new method is based on the number of pixels that represents one cm of the reference object. However, this method requires a reference object that is always present in the image in order to calculate the height of a plant in the image. Thus, it is first necessary to calculate the number of pixels that represents one cm of the reference object's scale.

In the contour detection step, the reference object's height in pixels can be retrieved. After that, the proportionality between one cm and the number of pixels can be calculated. This proportionality will be used as the basis for the calculation of the plant's height.

Generate Scale

To represent the height of the plant, a scale will be generated. The scale itself is also based on the number of pixels that represents one cm of the reference object's scale.


5. Implementation

In this chapter, the prototype implementation is described. The first part covers the software algorithm for the plant growth analysis; the second part covers the web interface through which the software is accessible. Because the implementation differs regarding the calculation of the plant's height, an updated flow chart of the algorithm for plant growth analysis is shown in figure 5.1.

Figure 5.1.: Flow chart of implemented algorithm for plant growth analysis


5.1. Plant Growth Analysis

The algorithm of the plant growth analysis is implemented as shown in figure 4.1. The software is written in Python 2.7 and uses OpenCV (version 2.4.9.1) as the image-processing library.

The software begins by taking a picture of the object whose height is to be calculated, in this case a plant. The plant is positioned in front of a homogeneous background, as shown in figure 5.2. In this case, the color of the background is white. The camera is positioned at a certain distance in front of the plant.

Figure 5.2.: Experimental setup

To reduce noise, the input image is first filtered using the bilateral filter with a kernel size of 9. However, the image still contains shadows, as shown in figure 5.3, e.g., the silhouette behind the vase.

Figure 5.3.: Filtered input image with shadow


Then, the filtered image is subtracted from the input image, resulting in an absolute difference image. Figure 5.4 shows the result of this operation. Next, the image is segmented using Otsu's binarization method. The segmentation process removes the shadows and results in a cleaner foreground. Figure 5.5 shows the result of the segmentation process. The image in this figure has been inverted so that the result can be recognized.

Figure 5.4.: Result of absolute difference between input image and filtered image

Figure 5.5.: Result of image segmentation using Otsu’s Binarization

The result of the segmentation process is processed again to detect edges using the Canny edge detector. This process needs two threshold values for the "hysteresis thresholding" step. But because Otsu's binarization is performed before the edge detection, the image is binary, i.e., every pixel is either 0 or 255. Because of this, the threshold values for the Canny operation do not really matter; they are set to 0 for the lower threshold and 255 for the upper threshold. Figure 5.6 shows the result of the edge detection.

Figure 5.6.: Result of edge detection using the Canny method

After detecting edges, the image is further processed to close gaps using dilation. Dilation is used to find connected components in the image, e.g., for contour detection.


It ensures that a large region that belongs together is re-connected if it has previously been divided by shadows or noise (Reul, 2015, p. 17). Figure 5.7 shows the result of the dilation process.

Figure 5.7.: Result of dilation

Then the contours in the image are searched using the OpenCV function cv2.findContours(). This function requires three parameters: the input image, the retrieval mode, and the approximation method. The result of the dilation process is used as the input image. The retrieval mode is set to RETR_EXTERNAL, meaning that only the external, i.e., outermost, contours are retrieved. Among the return values of cv2.findContours() is a Python list of all the contours in the image; each individual contour is an array of (x, y) coordinates of boundary points of an object (OpenCV). All found contours are then sorted from the largest to the smallest, and only the five largest contours are kept. The red line in figure 5.8 indicates the found contours in the image.

Figure 5.8.: All found contours in image

Then, the shape of each contour is approximated using the function cv2.approxPolyDP(). If the approximated contour has four points, it is the reference object.


If the contour does not have four points, it belongs to the plant. In this case, a rectangle is bounded around the found contour using the function cv2.boundingRect(). This function returns four values: the top-left x and y coordinates of the rectangle, and its width and height. Then, using the function cv2.rectangle(), the current contour can be visualized by drawing a rectangle around its points. Figure 5.9 shows the found contours of the plant.

Figure 5.9.: Bounding rectangle around all found contour

The target is to find the outermost contour of the plant, i.e., the minimum and maximum points over all its contours. This is done by sorting all x and y coordinates and taking the minimum values; these minima form the starting point (x, y) of the rectangle. For the maximum values, i.e., the end point of the rectangle, the x and y values are first summed with the corresponding widths and heights; these sums are then sorted to get the maximum values. Figure 5.10 shows the bounding rectangle around the outermost contour of the plant.

Figure 5.10.: Rectangle around the outermost contour of the plant
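The min/max bookkeeping described above can be sketched without OpenCV (the box values below are hypothetical stand-ins for cv2.boundingRect() results):

```python
# Bounding boxes (x, y, w, h) of two hypothetical plant-part contours.
boxes = [(30, 50, 20, 40), (45, 20, 10, 35)]

# Start point: minimum of the top-left corners.
x0 = min(x for x, y, w, h in boxes)
y0 = min(y for x, y, w, h in boxes)

# End point: maximum of the bottom-right corners (x + w, y + h).
x1 = max(x + w for x, y, w, h in boxes)
y1 = max(y + h for x, y, w, h in boxes)

print((x0, y0, x1, y1))  # (30, 20, 55, 90)
```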

By sorting the contours to get the desired points of the plant, the height of the plant in pixels


can then be retrieved. The calculation of the plant's height is described in the next section, 5.1.1.

5.1.1. Calculate Plant Height

In order to automatically calculate the height of the plant, which depends on the position of the camera, the reference object must also appear in the image, as shown in figure 5.11.

Figure 5.11.: Plant with reference object

Using the image from figure 5.11 as input, the image is processed as already described in section 5.1. To distinguish whether an object is the reference object or the plant, the shape of each contour in the image is approximated. The result of this operation is shown in figure 5.12. The found reference object is marked with a blue rectangle, while the plant is marked with a green rectangle. The red line indicates the approximated shape of the contour. Because the shape of the reference object is rectangular, the approximated shape of its contour should also be rectangular, i.e., have four points. This way, the reference object is defined as the contour with a rectangular shape.

Figure 5.12.: Result of Plant Growth Analysis Software
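The four-corner test on the approximated contour could look like the following sketch. The prototype presumably uses cv2.approxPolyDP for the approximation; here, a small Ramer-Douglas-Peucker implementation in plain Python stands in for it, and the epsilon value and helper names are illustrative assumptions:

```python
import math

def _perp_dist(p, a, b):
    """Perpendicular distance of point p from the line through a and b."""
    (ax, ay), (bx, by), (px, py) = a, b, p
    if (ax, ay) == (bx, by):
        return math.hypot(px - ax, py - ay)
    num = abs((by - ay) * px - (bx - ax) * py + bx * ay - by * ax)
    return num / math.hypot(bx - ax, by - ay)

def approx_poly(points, eps):
    """Ramer-Douglas-Peucker: drop points closer than eps to the chord."""
    if len(points) < 3:
        return list(points)
    a, b = points[0], points[-1]
    idx, dmax = 0, 0.0
    for i in range(1, len(points) - 1):
        d = _perp_dist(points[i], a, b)
        if d > dmax:
            idx, dmax = i, d
    if dmax <= eps:
        return [a, b]                       # everything lies close to the chord
    left = approx_poly(points[:idx + 1], eps)
    right = approx_poly(points[idx:], eps)
    return left[:-1] + right                # merge, dropping the shared point

def looks_rectangular(closed_contour, eps=2.0):
    """Treat a contour as the reference object if its approximation has
    four corners (in a closed contour the last point repeats the first)."""
    approx = approx_poly(closed_contour, eps)
    return len(approx) - 1 == 4

# A slightly noisy rectangle outline, traced point by point and closed.
rect = [(0, 0), (5, 0.3), (10, 0), (10, 4), (10, 8),
        (5, 8.2), (0, 8), (0, 4), (0, 0)]
print(looks_rectangular(rect))  # → True
```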


The reference object used in the figure has dimensions of 8.1 cm x 11.0 cm (width x height). The height of the reference object will later be used as the reference for the calculation of the plant height.

The basis for calculating the plant height is the number of pixels that represents one cm on the scale of the reference object. First, the height of the input image in cm is determined using equation 5.1.

IMcm = (ROcm / ROpx) · IMpx     (5.1)

where,

• IMcm, image height in cm

• ROcm, reference object height in cm

• ROpx, reference object height in pixels

• IMpx, image height in pixels

The image height in pixels (IMpx) can be determined by accessing the image properties via the image.shape attribute. Its first two return values are the dimensions of the image in pixels, height and width respectively. To calculate the number of pixels that represent one cm (pixel1cm), the image height in pixels is divided by its height in cm, i.e., the result of equation 5.1.

pixel1cm = IMpx / IMcm     (5.2)

In figure 5.12, the real height of the plant is approximately 22 cm including the vase; the vase itself has a height of 9 cm. The measured plant height including the vase is 294 pixels, and the number of pixels that represents one cm on the reference object's scale is 14. This means that every 14 pixels equal 1 cm on the reference object's scale, resulting in a calculated plant height of 21 cm, i.e., a deviation of 1 cm.
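Equations 5.1 and 5.2 together with the worked numbers above can be reproduced in a few lines. Only the formulas come from the thesis; the function names and the assumed pixel measurements (154 px for the reference object, 980 px image height, chosen so that 14 px equal 1 cm as reported) are mine:

```python
def image_height_cm(ro_cm, ro_px, im_px):
    """Eq. 5.1: image height in cm from the reference object's known size."""
    return ro_cm / ro_px * im_px

def pixels_per_cm(im_px, im_cm):
    """Eq. 5.2: number of pixels that represent one cm."""
    return im_px / im_cm

# Reference object: 11 cm high, assumed 154 px in the image;
# image height assumed 980 px.
im_cm = image_height_cm(ro_cm=11.0, ro_px=154, im_px=980)   # ≈ 70 cm
px1cm = pixels_per_cm(im_px=980, im_cm=im_cm)               # ≈ 14 px per cm
plant_px = 294                                              # measured plant height
print(round(plant_px / px1cm, 2))                           # → 21.0
```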

The approach described above does not depend on the focal length of the camera, so camera calibration is not necessary: the calculation is based solely on the proportionality between one cm of the reference object and the corresponding number of pixels. This means that the plant's height can be calculated without calibrating the camera.


5.1.2. Plotting the Scale

Another way to measure the height of the plant is to use a scale to approximate it. The scale itself is based on the height of the reference object. Because the dimensions of the reference object are known both in cm and in pixels, they can be used to calculate the height of the image in cm, which determines the maximum value of the scale. The image height in cm (IMcm) is calculated using equation 5.1.

The image height in cm is then equivalent to the maximum value of the scale. After calculating this maximum value, the distance between the individual scaling units must be calculated. The scaling unit is illustrated in figure 5.13. In this prototype, the distance between scaling units was defined as one cm (1 cm). Thus, one cm of scaling unit on the scale represents one cm of the actual height of the reference object (or of the plant) in reality.

Figure 5.13.: Illustration of scaling unit for the scale

Equation 5.2 was used to calculate the number of pixels per one cm (1 cm). The software then draws a vertical line for the vertical axis of the scale using OpenCV's drawing function cv2.line(). Starting from the bottom of the image, the scaling units are drawn: by subtracting the number of pixels per cm from the current position of a scaling unit, the position of the next scaling unit is obtained.


It should be noted that an image has its coordinate origin (0, 0) in the top-left corner; the end point of the coordinate system is located at the bottom-right of the image. The subtraction from the current position of the scaling unit is repeated as long as the position remains non-negative, i.e., greater than or equal to zero. As a result, the scale is shown in the user interface next to the input image, as shown in figure 5.14.

Figure 5.14.: Plant with scale
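The tick-placement loop described above (start at the image bottom, subtract the pixels-per-cm step while the position stays non-negative) can be sketched without the drawing calls; in the prototype each resulting position would be passed to cv2.line(). Function and variable names here are my own:

```python
def scale_tick_positions(image_height_px, px_per_cm):
    """Y coordinates (origin at top-left) of the 1-cm ticks, bottom to top."""
    positions = []
    y = image_height_px - 1          # start at the bottom row of the image
    while y >= 0:                    # stop once the position would leave the image
        positions.append(round(y))
        y -= px_per_cm               # next tick sits one cm (in pixels) higher
    return positions

# With a 70 px high image and 14 px per cm, five tick positions fit.
print(scale_tick_positions(70, 14))  # → [69, 55, 41, 27, 13]
```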

5.2. Web Interface

The software for plant growth analysis is accessible through a web application. The web application takes the form of a web server running on localhost on port 3000 of the Raspberry Pi, which is the main control unit. The web server is accessible through the Raspberry Pi's hotspot.

The web application consists of two parts, namely the backend (web server) and the frontend (HTML). The backend is implemented in Python using the Flask web framework1.

The frontend is an HTML page, which is the main user interface. It allows user interaction, e.g., taking a picture or starting the software for plant growth analysis.

For the first step, i.e., the initial start of the prototype, no images are stored on the Raspberry Pi yet, so the input image, output image, and scale are shown as broken files. The button "take picture" is used to take a picture by calling a defined URL for the camera. The taken picture is stored in a folder called "/static/image". After the "take picture" process is finished, the server loads the taken picture and passes it on to the HTML page.

1 https://palletsprojects.com/p/flask/, last checked November 1, 2019

Initial Use of the Prototype

Before using the software for plant growth analysis, the dimensions of the reference object must first be provided by entering them into the corresponding part of the HTML page, as shown in figure 5.15.

Figure 5.15.: Frontend: Reference object section

Then, the software for calculating the plant height can be used, starting by taking a picture of the plant with the "take picture" button. Figure 5.16 shows the software section of the user interface. After a picture is taken, the illuminance level in the image is also shown on the user interface.

Figure 5.16.: Frontend: Software section


The button "calculate plant height" calculates the height of the plant once; the result is only shown in the user interface and is not saved. To calculate the plant height regularly, the button "start scheduling" can be used. It executes the software for plant growth analysis periodically using Cron2, based on the provided time interval in hours.

The software for plant growth analysis runs in the background, so it does not block the web server. All images are stored in a folder called "results", and the calculated plant heights are saved as a comma-separated values file (.csv) inside that folder. The scheduled measurement can be stopped by pressing the button "stop scheduling", which removes the added Cron task. The images and the .csv file can be downloaded by pressing the button "download result file". Before starting a new scheduled measurement, the result files should be deleted using the button "delete result" to prevent mixing with results from the previous scheduled measurement.

2 Cron is a time-based scheduler in Unix computer operating systems [IEEE and The Open Group]
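Scheduling via Cron amounts to installing one crontab line per measurement job. A sketch of how such a line could be assembled; the script path is a hypothetical placeholder and the thesis does not show its exact Cron handling:

```python
def cron_entry(interval_hours, command="python3 /home/pi/plant_analysis.py"):
    """Build a crontab line that runs `command` every `interval_hours` hours.

    Cron's five time fields are: minute, hour, day-of-month, month,
    day-of-week. "0 */N * * *" fires at minute 0 of every N-th hour.
    """
    if not 1 <= interval_hours <= 23:
        raise ValueError("interval must be between 1 and 23 hours")
    return f"0 */{interval_hours} * * * {command}"

print(cron_entry(6))  # → 0 */6 * * * python3 /home/pi/plant_analysis.py
```

Removing the job ("stop scheduling") would then mean deleting exactly this line from the crontab again.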


6. Result and Evaluation

In this chapter, the results of the plant growth analysis software are discussed. Two kinds of tests were carried out. First, the software algorithm was tested under different illuminance levels. Second, the software was applied to the actual use case of plant growth analysis, i.e., regularly measuring the height of a plant.

For the illuminance measurement, the Fibaro FGSM0001 sensor was used. The sensor was read via openHAB1.

6.1. Result: Software Algorithm - Illuminance Level

In this section, the results of testing the software algorithm for plant growth analysis under different illuminance levels are presented.

A rectangular object located next to the plant was used as the reference object. Its dimensions were 8.1 cm x 11 cm (width x height) and its color was black. This color was chosen because it contrasts with the homogeneous white background. The result of the test is shown in table 6.1 below:

No. | Illuminance [lux] | Localized?
1   | 146               | yes
2   | 19                | yes
3   | 9                 | yes
4   | 4                 | no

Table 6.1.: Result of the test under different illuminance levels

1 openHAB is an open source home automation software platform [openHAB Foundation e.V]


6.1.1. Evaluation

Test-1

The image in test 1 was taken at an illuminance level of 146 lux, achieved by pointing a flashlight directly at the plant and reference object. Shadows appeared in the image, located behind the plant and below the reference object, as shown in figure 6.1. After the shadow-removal process and segmentation, the shadow below the reference object was successfully segmented as background. The shadow behind the plant, however, could not be removed: because of the high illuminance level and the light pointing directly at the foreground objects, a strong shadow appeared behind the plant. This shadow contrasted so strongly with the background that it was treated as foreground in the segmentation process. The result of the segmentation is shown in figure 6.2.

Figure 6.1.: Result-1: Output

Figure 6.2.: Result-1: Segmentation result

In this case, the plant and the reference object were correctly localized by the software, shown by a blue rectangle that fully covered the reference object and a green rectangle that fully covered the height of the plant. The green rectangle was wider than the plant because it also covered the contour of the table, which had not been correctly segmented as background. Nevertheless, the height of the plant was fully covered by the rectangle, so the localization of the plant was counted as correct.


Test-2

In test 2, the image was taken at a lower illuminance level than in test 1, namely 15 lux using the room light. Figure 6.3 shows the image, which again contained shadows behind the plant and below the reference object, but they appeared softer than the shadows in test 1. Both shadows were successfully removed and segmented, as shown in figure 6.4. The plant was successfully localized, indicated by a green rectangle that fully covered it.

Figure 6.3.: Result-2: Output

Figure 6.4.: Result-2: Segmentation result

Test-3

The image in test 3 was taken in a dark environment with an illuminance level of 9 lux. In contrast to the images from the previous tests, the segmentation result also contained noise that was not successfully segmented. Figure 6.6 shows the segmentation result; white dots that were not segmented as background appear to the left of the reference object.


Figure 6.5.: Result-3: Output

Figure 6.6.: Result-3: Segmentation result

Figure 6.7 shows the result of dilation after edge detection. This image was processed further to detect contours, and only the five largest contours were inspected further, which discarded the dots to the left of the reference object.

Figure 6.7.: Result-3: Result of dilation

At this illumination level, the software could still find the foreground objects, although there were cases where it could not localize them and therefore could not calculate the plant's height. In this instance, however, the plant and the reference object were localized, as shown by the rectangles in figure 6.5.


6.1.2. Problem

Problems in localizing the plant arose in environments with a low illuminance level, and at high illuminance levels with strong shadowing. The main problem was that the software could not find the foreground objects, most notably the reference object; therefore, the plant's height could not be calculated.

Test-4

Figure 6.8 shows an example of an error case when detecting the plant in a dark environment with an illuminance level of 3 lux. Shadows still appeared in this image, for example below the reference object and around the vase. After the shadow-removal process, the image was segmented. However, there was noise in the segmented image, as shown in figure 6.9.

Figure 6.8.: Result-4: Output

Figure 6.9.: Result-4: Segmentation result

The segmentation result was processed further, e.g., with dilation and contour detection. Figure 6.10 shows the contours on the image, illustrated by the red lines. Because of the high noise, the contour of the reference object was not perfectly rectangular, so the reference object could not be found. Consequently, the height of the plant could not be calculated either. Moreover, the plant itself was not successfully found because of the high noise.


Figure 6.10.: Result-4: Contour

Another case of Test-1

Test 1 showed that the foreground objects can be successfully localized at a high illuminance level, but in this case they could not be localized. The difference from the image in test 1 is that a shadow appeared behind the reference object, as shown in figure 6.11. This was caused by a different lighting orientation on the objects, which produced the aforementioned shadow.

Figure 6.11.: Error case: High illuminance with shadowing

Figure 6.12.: Error case: High illuminance - segmentation result

Because the shadow contrasted enough with the background, it was segmented as foreground. Figure 6.12 shows the result of the segmentation. The contour of this shadow was detected as an object, which made the approximated contour of the reference object no longer rectangular; thus, the reference object could not be found. Figure 6.13 shows the contour of the foreground objects, indicated by the red line; the white line indicates the approximated contour of the foreground objects.

Figure 6.13.: Error case: High illuminance - contour


6.2. Result: Experiment Plant Growth Analysis

The second test put the software to the test in the actual use case of the experiment "Plant Growth Analysis". It also used a plant with a reference object of dimensions 8.1 cm x 11 cm (width x height). This test was performed only in a bright environment with a high illuminance level. The plant was watered every 24 hours, and the camera took a picture every 24 hours for 10 days at the same time of day, so that the illuminance level stayed roughly constant.

On day 0 of the test, the total plant height including the vase was around 22 cm; the vase itself had a height of 9 cm, so the plant's height was around 13 cm.

The result of the second experiment is presented below in table 6.2:

Day        | Calculated Height [cm] | Illuminance [lux] | Plant Real Height [cm] | Deviation [cm] or [%]
0          | 21.6 | 20 | 22   | 0.4 or 1.82
1          | 21.5 | 27 | 22   | 0.5 or 2.72
2          | 21.5 | 29 | 22   | 0.5 or 2.72
3          | 21.6 | 32 | 22.3 | 0.7 or 3.14
4          | 21.8 | 34 | 22.3 | 0.5 or 2.24
5          | 21.8 | 29 | 22.3 | 0.5 or 2.24
6          | 21.7 | 31 | 22.3 | 0.8 or 3.56
7          | 21.7 | 31 | 22.5 | 0.8 or 3.56
8          | 21.8 | 28 | 22.5 | 0.7 or 3.11
9          | 21.8 | 28 | 22.5 | 0.7 or 3.11
10         | 21.7 | 27 | 22.5 | 0.8 or 3.56
10 (extra) | 22.5 | 27 | 24   | 1.5 or 6.25

Table 6.2.: Result of the plant growth test

The deviation (d) in cm was calculated as the absolute difference between the plant height calculated by the software and the real plant height, as shown in equation 6.1.

d = |hcalc − hreal|     (6.1)


The deviation (d) can also be calculated in percent as follows:

d = |hcalc − hreal| / hreal · 100%     (6.2)

where,

• d, deviation in cm or in %

• hcalc, calculated plant height in cm

• hreal, real plant height in cm
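Applied to the first row of table 6.2 (day 0: calculated 21.6 cm, real 22 cm), equations 6.1 and 6.2 reproduce the reported deviation. A small sketch with names of my own choosing:

```python
def deviation_cm(h_calc, h_real):
    """Eq. 6.1: absolute deviation between calculated and real height, in cm."""
    return abs(h_calc - h_real)

def deviation_pct(h_calc, h_real):
    """Eq. 6.2: deviation relative to the real height, in percent."""
    return abs(h_calc - h_real) / h_real * 100.0

# Day 0 of table 6.2: calculated 21.6 cm, real 22 cm.
print(round(deviation_cm(21.6, 22.0), 2))   # → 0.4
print(round(deviation_pct(21.6, 22.0), 2))  # → 1.82
```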

The above calculated plant’s height included the height of the vase. The average deviationfrom day-0 until day-10 was 0.61 cm or 2.73%. In this test, all images taken will be named as"imageInput" followed with the time the image was taken. For the output image and the scale,the file will be named "imageOutput" and "scale" respectively and also followed with the timethe "imageInput" was taken. All these images were stored in a folder and the result of thecalculated plant height was stored in a comma-separated value s file (.csv), with respectedimage’s name and including the illuminance value. This file could be downloaded on thesoftware section in the user’s interface.

The images resulting from this test can be found in the appendix, section A.

6.2.1. Evaluation

During the plant growth test, the plant grew from approximately 22 cm to 22.5 cm within ten days. Throughout the test, the height of the plant was also measured manually with a ruler. On day 3, the real height of the plant was around 22.3 cm; it had grown around 0.3 cm since day 2. Unfortunately, the software could not measure these 0.3 cm, so the growth between day 2 and day 3 was not recognized. However, the growth was recognized on day 4, when the calculated plant height was 21.8 cm. Thus, during the first four days of the test, the growth process of the plant could be recognized using the prototype.

The growth process of the plant is shown as a chart in figure 6.14. The orange line indicates the real height of the plant; the gray line indicates the height calculated by the software. All data are given in cm.


Figure 6.14.: Growth process of the plant in ten days

From day 7 of the test, the plant grew again by around 0.2 cm, bringing its height to 22.5 cm. The height then stayed the same until the end of the test on day 10. Because of the minimal difference in the calculated plant height between day 0 and day 10, the growth process was hardly recognizable by the software.

Because of this, an object was placed on the plant to raise its height to 24 cm, simulating a growth of around 1.5 cm. The software calculated the plant's height as 22.5 cm, deviating by around 1.5 cm from the real height of 24 cm. The growth process was now easier to recognize using the prototype: as the plant's real height increased, the calculated height increased as well.

Figure 6.15.: Simulated growth process of the plant


Figure 6.14 shows that the plant grew gradually over the ten days. In the ideal case, the growth curve calculated by the software would coincide with the orange line; however, this was not the case, as the calculated plant height deviated from the real height. Nevertheless, the growth process could be recognized using the prototype: the gray line also rose gradually from the initial day of the test (day 0).


7. Summary

Starting from the question of how to analyze the plant growth process, one solution has been discussed: image processing. The thesis began with research into image-processing methods that can be used to localize an object in an image. These methods were then analyzed against the given requirements and their suitability for the use case of "Plant Growth Analysis". After deciding on a method, a prototype was implemented.

The prototype is a web application running on a Raspberry Pi. It consists of two parts, the server (backend) and the user interface (frontend), which together enable interaction between the user and the plant growth analysis software. The web application is accessible within the Raspberry Pi's hotspot on a specific port (port 3000). The implemented algorithm combines various image-processing methods into the desired functionality, e.g., recognizing a plant's growth process. Using the prototype, the height of an object can be calculated with the help of another, rectangular object as a reference.

Two tests were carried out with the prototype. The first tested the reliability of the software under different illuminance levels. It showed that at a low illuminance level, noise could not be fully reduced or segmented compared to images taken at a high illuminance level; in an incorrectly segmented image, the contour of an object can appear differently. At a high illuminance level, on the other hand, the algorithm worked efficiently and delivered a reliable result. However, at a high illuminance level a shadow can appear stronger than in an image taken at a lower level. Therefore, the algorithm includes a shadow removal/suppression step: the image taken by the camera is first filtered using a bilateral filter, which not only reduces noise but also preserves edges. The absolute difference between the filtered and the unfiltered image is then computed, which removes the shadow from the image.

The second test applied the software to the actual use case of the experiment "Plant Growth Analysis". Its result showed that, using the implemented software, the growth process of a plant can be recognized with a certain deviation.


This was done by regularly calculating the height of the plant; the result of each calculation was stored in a .csv file. The implemented height calculation does not depend on the position of the camera, since it is based on the number of pixels that represents one cm on the reference object's scale. The plant used in this test grew unexpectedly slowly, so the growth process during the 10 days of the test could hardly be recognized. To simulate a growth of around 1.5 cm, a pencil was attached to the plant, making the height of the plant in the image 24 cm. As a result, the change in the plant's height could be recognized more clearly using the prototype.

With this prototype, it is not possible to calculate the height of an arbitrary rectangular object, as the software cannot distinguish between the reference object and another rectangular object whose height is to be calculated. Therefore, all tests were carried out with a plant, which has a non-rectangular shape. In addition, the implemented prototype only works effectively at a high illuminance level, which is required for a reliable result.

Another possibility for analyzing plant growth is machine learning. Its disadvantage is that it requires training a model of the object, along with sufficient computing power, and every new use case or object requires a new model to be trained. The advantage of the presented prototype is that it can also measure the height of objects other than plants. However, it requires a reference object, and the current algorithm is sensitive to illuminance level and shadowing. At a low illuminance level, the prototype works with low accuracy or not at all; at a high illuminance level, it works with better accuracy. It should be noted that in an environment with high illuminance, a shadow can occur in the image and thereby affect the calculation of the plant height.

Checking requirements

The test at different illumination levels showed that images taken at a low illuminance level tend to contain more noise after the segmentation step. This can lead to an error in the calculation of the plant's height because the reference object in the image cannot be found. The software therefore delivers a reliable result mostly in a bright environment with a high illuminance level, where the foreground objects appear clearly in the image; in a dark environment with a low illuminance level, it cannot deliver a reliable result.
Requirement A1 is therefore not always fulfilled: the prototype can reliably localize the foreground objects only in a bright environment, while in a dark environment it cannot always do so.


As output, the prototype provides the calculated plant height and a scale, both shown on the user interface. The results of scheduled measurements are stored as a .csv file on the Raspberry Pi. This means that requirements A2, A3, and A4 are fulfilled. The average deviation between the real and the calculated plant height was 0.61 cm or 2.73%. The accuracy of the height calculation depends strongly on illuminance and shadowing.

Therefore, a result with minimal deviation can be achieved in a bright environment with a high level of illuminance, assuming that the image does not contain strong shadowing.

ID | Description                                                                       | Fulfilled?
A1 | Localizing the plant in an environment with changing illuminance value.           | No
A2 | Calculating plant height.                                                         | Yes
A3 | The expected output from the software: calculated plant height.                   | Yes
A4 | The expected output from the software: a scale that represents the plant height.  | Yes

Table 7.1.: Review of functional requirements

Answering the research question

1) Which method of image processing is suitable for the recognition of plant growth processes?

The suitability of a software solution for recognizing plant growth processes lies in its ability to localize an object. The software itself consists of image-processing methods, which in the implemented prototype are mostly used to localize an object.

This question can be answered based on the result of the performed test, test 1. Some image-processing methods were not used, so their suitability for this use case remains unknown. Based on the test result, Otsu binarization is suitable because it separates the foreground from the homogeneous background. Furthermore, using the Canny edge detector, the edges in the image can be extracted; the Canny edge detector not only detects edges but also connects them into contours. As mentioned before, a contour is a strong representation of an object's shape. Because of this, the shape of the object in the image can be determined and its height measured later on, which matches the use case of plant growth analysis.
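To make the role of Otsu binarization concrete, here is a compact NumPy re-implementation of the method. The prototype itself would presumably call cv2.threshold with the cv2.THRESH_OTSU flag; this stand-alone sketch merely illustrates how the threshold is chosen by maximizing the between-class variance:

```python
import numpy as np

def otsu_threshold(gray):
    """Return the threshold maximizing the between-class variance
    of an 8-bit grayscale image (the core idea of Otsu binarization)."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
    total = hist.sum()
    w0 = np.cumsum(hist)                      # pixels at or below each level
    w1 = total - w0                           # pixels above each level
    cum_mean = np.cumsum(hist * np.arange(256))
    mu0 = np.divide(cum_mean, w0, out=np.zeros(256), where=w0 > 0)
    mu1 = np.divide(cum_mean[-1] - cum_mean, w1,
                    out=np.zeros(256), where=w1 > 0)
    between = w0 * w1 * (mu0 - mu1) ** 2      # between-class variance per level
    return int(np.argmax(between))

# Bright homogeneous background (200) with a dark foreground object (30):
img = np.full((40, 40), 200, dtype=np.uint8)
img[10:30, 10:20] = 30
t = otsu_threshold(img)
print(30 <= t < 200)  # the threshold falls between the two modes → True
```

Because the threshold is recomputed per image, it adapts automatically when the overall illuminance changes, which is exactly the property exploited in the prototype.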


2) How reliable is such a method in recognizing the growth of a plant?

The segmentation process using Otsu binarization is reliable for the use case of plant growth processes because Otsu binarization determines the threshold value dynamically; the threshold automatically adapts to a changing illuminance value.

Once again, this question is answered using the result from test 1. The result shows that at a high illuminance level, the image was correctly segmented most of the time, leaving only the foreground objects. At a low illuminance level, a lot of noise appeared in the image, so the homogeneous background could not be correctly segmented. As a result, the height of the plant could not be calculated, since the reference object could not be found.

The software algorithm provided a reliable result for the calculated plant height, with an average deviation of 0.61 cm or 2.73%. This was achieved in a bright environment with a high illuminance level. Conversely, at a low illuminance level, the software could not always deliver a reliable result.

3) Which criteria are decisive and must be considered for the recognition of plant growth processes?

Based on the result of test 1, illumination plays a major role in recognizing the plant growth process. Besides the illumination level, the orientation of the light must also be considered, as it can cause shadowing in the image. This shadowing can lead to a wrong localization of the object if the shadow contrasts strongly enough with the homogeneous background; the shadow then acts as noise in the image, as shown in the additional case of test 1 in section 6.1. The background also has to be clean and without texture, since texture can be detected as edges or contours and likewise lead to a wrong localization of the foreground object.



Prospects for future work

The current algorithm relies on segmentation and contour detection, which are highly sensitive to the illumination level and to shadowing. The algorithm could therefore be optimized with respect to localizing the object in an image, especially at low illuminance levels. A feature-based approach using histograms could also be used to determine the location of the object in the image more precisely. For example, the pixels around the contour of the object could be further investigated to check whether the neighboring pixels are similar to the current pixel.
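One possible form of such a neighbor-similarity check is simple region growing. The sketch below is illustrative only; the helper name and the `tol` parameter are assumptions, not part of the implemented software.

```python
import numpy as np
from collections import deque

def grow_region(gray, seed, tol=10):
    # Region growing: starting from a seed pixel, absorb 4-connected
    # neighbours whose intensity lies within `tol` of the seed value.
    # This is one way to decide whether pixels around a contour belong
    # to the same object as the current pixel.
    h, w = gray.shape
    seed_val = int(gray[seed])
    mask = np.zeros((h, w), bool)
    mask[seed] = True
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if (0 <= ny < h and 0 <= nx < w and not mask[ny, nx]
                    and abs(int(gray[ny, nx]) - seed_val) <= tol):
                mask[ny, nx] = True
                queue.append((ny, nx))
    return mask

# A 60x60 image: dark background with a uniform bright square object
img = np.full((60, 60), 30, np.uint8)
img[20:40, 20:40] = 200

region = grow_region(img, seed=(30, 30))
print("object pixels found:", int(region.sum()))  # 20*20 = 400
```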

The implemented prototype has the prerequisite that the color of the background must be homogeneous. Under this condition, the complexity of the segmentation process is lower than for segmentation on a non-homogeneous background. However, another method could be used to segment a non-homogeneous background, for example segmentation based on the colors in the image (color segmentation). The prerequisite could thereby be dropped.

In the implemented software, a rectangular object, which always has to be present in the image, is used as a reference to calculate the height of another object. To distinguish between the reference object and the plant, the shape of each contour is approximated. Because the reference object is rectangular, the approximated shape of its contour should also be rectangular, i.e. have four points. However, this approach is error-prone: the software recognizes every contour with four points as a reference object, even if it is not the actual reference object. In that case, the software cannot localize the actual reference object correctly.

This could be avoided by using the reference object only once, to calculate the proportionality between the number of pixels and one centimetre. This value would then be saved within the software and used later to calculate the height of any other object. The limitation of the software would thus be removed, since with this method the software could measure any type of object, regardless of its shape. This would also make the software more robust, since when calculating another object's height, the contour of the object would no longer need to be approximated.
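Such a one-time calibration could look roughly like the following sketch; all names and numbers are hypothetical and not taken from the implemented software.

```python
# One-time calibration: measure the reference object once, store the
# pixels-per-cm ratio, and reuse it for every later measurement, so the
# reference object no longer has to stay in the frame.

REFERENCE_OBJECT_HEIGHT_CM = 10.0   # known real-world height (hypothetical)
reference_object_height_px = 185    # measured once from a calibration image

# Saved once and reused for all later images from the same camera setup
PIXELS_PER_CM = reference_object_height_px / REFERENCE_OBJECT_HEIGHT_CM

def height_cm(object_height_px, vase_height_cm=0.0):
    # Convert any measured pixel height to centimetres using the stored
    # ratio, optionally subtracting the vase height as in the thesis.
    return round(object_height_px / PIXELS_PER_CM - vase_height_cm, 1)

print(height_cm(370))        # 20.0 cm
print(height_cm(370, 5.0))   # 15.0 cm
```

This only stays valid as long as the camera position, zoom, and distance to the plant remain unchanged after calibration.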


Bibliography

[Blaschke et al. 2005] BLASCHKE, Thomas; BURNETT, Charles; PEKKARINEN, Anssi: Image Segmentation Methods for Object-based Analysis and Classification. pp. 211–236, 01 2005

[Bradski and Kaehler 2008] BRADSKI, Gary; KAEHLER, Adrian: Learning OpenCV: Computer Vision with the OpenCV Library. O'Reilly Media, Inc., 2008

[Burkov 2019] BURKOV, Andriy: Machine Learning kompakt: Alles, was Sie wissen müssen. MITP Verlags GmbH, 2019 (mitp Professional). – ISBN 9783958459953

[Canny 1986] CANNY, J.: A Computational Approach to Edge Detection. In: IEEE Transactions on Pattern Analysis and Machine Intelligence PAMI-8 (1986), Nov, No. 6, pp. 679–698. – ISSN 0162-8828

[Capgemini-Service-SAS 2017] CAPGEMINI-SERVICE-SAS: Unser iGov Lab: Innovation für den öffentlichen Sektor. (2017). – URL https://www.capgemini.com/de-de/service/unser-igov-lab-innovation-fuer-den-oeffentlichen-sektor/. – Accessed: November 1, 2019

[Hata and Savarese] HATA, Kenji; SAVARESE, Silvio: CS231A Course Notes 1: Camera Models.

[IEEE and The Open Group] IEEE AND THE OPEN GROUP: Manual of Crontab. – URL https://pubs.opengroup.org/onlinepubs/9699919799/utilities/crontab.html. – Accessed: November 1, 2019

[Jähne et al. 1996] JÄHNE, Bernd; MASSEN, R.; NICKOLAY, B.; SCHARFENBERG, H.: Technische Bildverarbeitung - Maschinelles Sehen. Springer, 1996. – URL http://d-nb.info/94569895X. – Accessed: November 1, 2019

[Kleuker 2010] KLEUKER, S.: Grundkurs Software-Engineering mit UML. Vieweg+Teubner Verlag, 2010 (Grundkurs Software-Engineering mit UML / Kleuker, Stephan). – ISBN 9783834898432



[Lambers and Lordemann 2003] LAMBERS, Martin; LORDEMANN, Christian G.: Objekterkennung in Bilddaten. 2003. – URL https://www.uni-muenster.de/Informatik/u/lammers/EDU/ws03/Landminen/Abgaben/Gruppe9/Thema09-ObjekterkennungInBilddaten-ChristianGrosseLordemann-MartinLambers.pdf. – Accessed: November 1, 2019

[Moeslund 2012] MOESLUND, Thomas B.: Introduction to Video and Image Processing: Building Real Systems and Applications. Springer, 2012. – ISBN 978-1-4471-2502-0

[OpenCV] OPENCV: OpenCV Contour Tutorial. – URL https://docs.opencv.org/3.1.0/d4/d73/tutorial_py_contours_begin.html. – Accessed: November 1, 2019

[opencv.org] OPENCV.ORG: Example of Thresholding. – URL https://docs.opencv.org/2.4/_images/Threshold_Tutorial_Theory_Example.jpg. – Accessed: November 1, 2019

[openHAB Foundation e.V] OPENHAB FOUNDATION E.V: openHAB Documentation. – URL https://www.openhab.org/docs/. – Accessed: November 1, 2019

[Peterwitz 2006] PETERWITZ, Julia: Grundlagen: Bildverarbeitung / Objekterkennung. Technische Universität München, Fakultät für Informatik, Seminar Thesis. 2006. – URL http://www9.in.tum.de/seminare/hs.SS06.EAMA/material/01_ausarbeitung.pdf. – Accessed: November 1, 2019

[Reul 2013] REUL, Christian: Implementierung und Evaluierung einer Objekterkennung für einen Quadrocopter. 2013. – URL http://www.is.informatik.uni-wuerzburg.de/fileadmin/10030600/Mitarbeiter/Reul_Christian/Objekterkennung_Reul_Christian_BA.pdf. – Accessed: November 1, 2019

[Reul 2015] REUL, Christian: Evaluation von Methoden zur Bildverarbeitung für Objekterkennung am Beispiel der Klassifikation von Bäumen, Universität Würzburg, Institut für Informatik, Master Thesis, 2015. – URL http://www.is.informatik.uni-wuerzburg.de/fileadmin/10030600/Mitarbeiter/Reul_Christian/Baumklassifikation_Reul_Christian_MA.pdf. – Accessed: November 1, 2019

[Roy et al. 2014] ROY, P.; DUTTA, S.; DEY, N.; DEY, G.; CHAKRABORTY, S.; RAY, R.: Adaptive Thresholding: A Comparative Study. In: 2014 International Conference on Control, Instrumentation, Communication and Computational Technologies (ICCICCT), July 2014, pp. 1182–1186



[Schlecht and Ommer 2011] SCHLECHT, Joseph; OMMER, Björn: Contour-based Object Detection. In: Proceedings of the British Machine Vision Conference, BMVA Press, 2011, pp. 50.1–50.9. – URL https://hciweb.iwr.uni-heidelberg.de/sites/default/files/node/files/686139654/BMVC11.pdf. – Accessed: November 1, 2019. – ISBN 1-901725-43-X

[Schwenzer 2015] SCHWENZER, Max: Modellbasierte Objekterkennung für einen Industrieroboter. (2015)

[Sezgin and Sankur 2004] SEZGIN, M.; SANKUR, Bulent: Survey over Image Thresholding Techniques and Quantitative Performance Evaluation. In: Journal of Electronic Imaging 13 (2004), 01, pp. 146–168

[Swaroop and Sharma 2016a] SWAROOP, Paridhi; SHARMA, Neelam: An Overview of Various Template Matching Methodologies in Image Processing. In: International Journal of Computer Applications 153 (2016), 11, pp. 8–14

[Swaroop and Sharma 2016b] SWAROOP, Paridhi; SHARMA, Neelam: An Overview of Various Template Matching Methodologies in Image Processing. In: International Journal of Computer Applications 153 (2016), No. 10, pp. 8–14

[Yuheng and Hao 2017] YUHENG, Song; HAO, Yan: Image Segmentation Algorithms Overview. In: CoRR abs/1707.02051 (2017). – URL http://arxiv.org/abs/1707.02051. – Accessed: November 1, 2019


A. Result of Plant Growth

Figure A.1.: Plant growth: Day 0
Figure A.2.: Plant growth: Day 1

Figure A.3.: Plant growth: Day 2
Figure A.4.: Plant growth: Day 3



Figure A.5.: Plant growth: Day 4
Figure A.6.: Plant growth: Day 5

Figure A.7.: Plant growth: Day 6
Figure A.8.: Plant growth: Day 7

Figure A.9.: Plant growth: Day 8
Figure A.10.: Plant growth: Day 9



Figure A.11.: Plant growth: Day 10

Figure A.12.: Plant growth: Day 10 - extra


B. Python Code: Calculate Plant Height

import cv2
import numpy as np


def calculate_plant_height(VASE_HEIGHT_CM, REFERENCE_OBJECT_HEIGHT_CM):

    # Read in the image
    image = cv2.imread("/home/pi/app/static/images/imageInput.png")
    if image is None:
        print('Could not open or find the image')
        exit(0)

    image_height_px, image_width_px, image_channels = image.shape

    dilate1 = cv2.dilate(image, None, iterations=1)
    bg_img = cv2.bilateralFilter(dilate1, 9, 50, 50)
    diff_img = cv2.absdiff(image, bg_img)
    cv2.imwrite('/home/pi/app/static/images/diff_img.png', diff_img)

    # Convert to grayscale and threshold (Otsu)
    greyscale = cv2.cvtColor(diff_img, cv2.COLOR_BGR2GRAY)
    ret3, th3 = cv2.threshold(greyscale, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    cv2.imwrite('/home/pi/app/static/images/th3.png', th3)

    # Detect edges and close gaps
    canny_output = cv2.Canny(th3, 0, 255)
    dilate = cv2.dilate(canny_output, None, iterations=1)

    # Get the contours of the shapes
    contours, hierarchy = cv2.findContours(dilate.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

    x_coor = []
    y_coor = []
    x_withDimension = []
    y_withDimension = []

    # Sort contours by area from largest to smallest, keep only the five largest
    sorted_contours = sorted(contours, key=cv2.contourArea, reverse=True)[:5]

    reference_object_height_px = 0

    for cnt in sorted_contours:

        area = cv2.contourArea(cnt)

        # Calculate arc length of the current contour
        peri = cv2.arcLength(cnt, True)
        epsilon = 0.02 * peri

        # Approximate its polygonal curve
        approx = cv2.approxPolyDP(cnt, epsilon, True)

        # cv2.drawContours(image, [cnt], -1, (0, 0, 255), 1)
        # cv2.drawContours(image, [approx], -1, (255, 255, 255), 1)

        # If the approximated contour has four points,
        # then we can assume that we have found the reference object
        if area > 100 and len(approx) == 4:
            x_ro, y_ro, w_ro, h_ro = cv2.boundingRect(cnt)
            cv2.rectangle(image, (x_ro, y_ro), (x_ro + w_ro, y_ro + h_ro), (255, 0, 0), 1)
            reference_object_height_px = h_ro

        else:
            x, y, w, h = cv2.boundingRect(cnt)

            x_coor.append(x)
            y_coor.append(y)
            x_withDimension.append(x + w)
            y_withDimension.append(y + h)

    x_start = min(x_coor)
    y_start = min(y_coor)

    x_end = max(x_withDimension)
    y_end = max(y_withDimension)

    # Draw a rectangle around the plant
    cv2.rectangle(image, (x_start, y_start), (x_end, y_end), (0, 255, 0), 1)
    cv2.imwrite('/home/pi/app/static/images/imageOutput.png', image)

    if reference_object_height_px == 0:
        return -1.0, -1
    else:
        # Calculate the proportionality between 1 cm of the reference object
        # and the number of pixels
        image_cm = (REFERENCE_OBJECT_HEIGHT_CM * float(image_height_px)) / float(reference_object_height_px)
        point_1_cm = float(image_height_px) / image_cm
        plant_height_px = (y_end - y_start)
        print("plant height px: ", plant_height_px)
        plant_height = (float(plant_height_px) / point_1_cm) - VASE_HEIGHT_CM
        print("plant height cm: ", plant_height)
        rounded_plant_height = round(plant_height, 1)

        return rounded_plant_height, reference_object_height_px


C. Python Code: Generate Scale

import cv2
import numpy as np


def generate_scale(REFERENCE_OBJECT_HEIGHT_PX, REFERENCE_OBJECT_HEIGHT_CM):

    image = cv2.imread("/home/pi/app/static/images/imageInput.png")

    image_height_px, image_width_px, image_channels = image.shape

    # Create a white image (image height x 50 px)
    white_image = np.zeros((image_height_px, 50), np.uint8)
    white_image.fill(255)

    # Draw vertical line
    cv2.line(white_image, (10, image_height_px), (10, 0), (0, 255, 0), 2)

    if (REFERENCE_OBJECT_HEIGHT_PX == -1):
        cv2.imwrite('/home/pi/app/static/images/scale.png', white_image)

    else:
        image_cm = (float(REFERENCE_OBJECT_HEIGHT_CM) * float(image_height_px)) / float(REFERENCE_OBJECT_HEIGHT_PX)
        print("imageCM: %d" % image_cm)

        propotional = (image_cm / float(image_height_px))
        print("propotional: %f" % propotional)

        # Distance in pixels for 1 cm
        point_1_cm = int(1 / propotional)
        print("point_1_cm: %d" % point_1_cm)
        horizontal_line_position = image_height_px
        # Create one horizontal tick per centimetre
        stringToPut = 0

        while (horizontal_line_position >= 0):
            # Draw horizontal line with a spacing of one cm
            cv2.line(white_image, (0, horizontal_line_position), (21, horizontal_line_position), (0, 255, 0), 2)
            cv2.putText(white_image, "%d" % stringToPut, (23, horizontal_line_position + 3),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.4, (0, 0, 0), 1)
            horizontal_line_position = int(horizontal_line_position - point_1_cm)
            stringToPut += 1

        cv2.imwrite('/home/pi/app/static/images/scale.png', white_image)


D. Python Code: Scheduled Measurement

import cv2
import numpy as np
import urllib
import sys
import datetime


def scheduled_take_picture():

    timeTaken = str(datetime.datetime.now().replace(microsecond=0).isoformat())
    pathToCamera = "http://admin:raspberry@192.168.2.2/image/jpeg.cgi?profileid=2"
    pathToInput = "/home/pi/app/results/imageInput" + "-" + timeTaken + ".png"
    urllib.urlretrieve(pathToCamera, pathToInput)

    pathToIlluminanceState = "http://localhost:8080/rest/items/MultiSensor_A_Illuminance/state"
    response = urllib.urlopen(pathToIlluminanceState)
    illuminance_value = response.read()
    return illuminance_value, timeTaken


def scheduled_calculate_plant_height(VASE_HEIGHT_CM, REFERENCE_OBJECT_HEIGHT_CM, TIME_TAKEN):

    # Read in the image
    image = cv2.imread("/home/pi/app/results/imageInput" + "-" + TIME_TAKEN + ".png")
    if image is None:
        print('Could not open or find the image')
        exit(0)

    image_height_px, image_width_px, image_channels = image.shape

    dilate1 = cv2.dilate(image, None, iterations=1)
    bg_img = cv2.bilateralFilter(dilate1, 9, 50, 50)
    diff_img = cv2.absdiff(image, bg_img)

    # Convert to grayscale and threshold (Otsu)
    greyscale = cv2.cvtColor(diff_img, cv2.COLOR_BGR2GRAY)
    ret3, th3 = cv2.threshold(greyscale, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # Detect edges and close gaps
    canny_output = cv2.Canny(th3, 0, 255)
    dilate = cv2.dilate(canny_output, None, iterations=1)

    # Get the contours of the shapes
    contours, hierarchy = cv2.findContours(dilate.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

    x_coor = []
    y_coor = []
    x_withDimension = []
    y_withDimension = []

    # Sort contours by area from largest to smallest, keep only the five largest
    sorted_contours = sorted(contours, key=cv2.contourArea, reverse=True)[:5]

    reference_object_height_px = 0

    for cnt in sorted_contours:

        area = cv2.contourArea(cnt)

        # Calculate arc length of the current contour
        peri = cv2.arcLength(cnt, True)
        epsilon = 0.02 * peri

        # Approximate its polygonal curve
        approx = cv2.approxPolyDP(cnt, epsilon, True)

        # cv2.drawContours(image, [cnt], -1, (0, 0, 255), 1)
        # cv2.drawContours(image, [approx], -1, (0, 0, 255), 1)

        # If the approximated contour has four points,
        # then we can assume that we have found the reference object
        if area > 100 and len(approx) == 4:
            x_ro, y_ro, w_ro, h_ro = cv2.boundingRect(cnt)
            cv2.rectangle(image, (x_ro, y_ro), (x_ro + w_ro, y_ro + h_ro), (255, 0, 0), 1)
            reference_object_height_px = h_ro

        else:
            x, y, w, h = cv2.boundingRect(cnt)

            x_coor.append(x)
            y_coor.append(y)
            x_withDimension.append(x + w)
            y_withDimension.append(y + h)

    x_start = min(x_coor)
    y_start = min(y_coor)

    x_end = max(x_withDimension)
    y_end = max(y_withDimension)

    # Draw a rectangle around the plant
    cv2.rectangle(image, (x_start, y_start), (x_end, y_end), (0, 255, 0), 1)
    cv2.imwrite("/home/pi/app/results/imageOutput" + "-" + TIME_TAKEN + ".png", image)

    if reference_object_height_px == 0:
        return -1.0, -1
    else:
        # Calculate the proportionality between 1 cm of the reference object
        # and the number of pixels
        image_cm = (float(REFERENCE_OBJECT_HEIGHT_CM) * float(image_height_px)) / float(reference_object_height_px)
        point_1_cm = float(image_height_px) / image_cm
        plant_height_px = (y_end - y_start)
        print("plant height px: ", plant_height_px)
        plant_height = (float(plant_height_px) / point_1_cm) - float(VASE_HEIGHT_CM)

        rounded_plant_height = round(plant_height, 1)

        return rounded_plant_height, reference_object_height_px


def scheduled_generate_scale(REFERENCE_OBJECT_HEIGHT_PX, REFERENCE_OBJECT_HEIGHT_CM, TIME_TAKEN):

    image = cv2.imread("/home/pi/app/results/imageInput" + "-" + TIME_TAKEN + ".png")

    image_height_px, image_width_px, image_channels = image.shape

    # Create a white image (image height x 50 px)
    white_image = np.zeros((image_height_px, 50), np.uint8)
    white_image.fill(255)

    # Draw vertical line
    cv2.line(white_image, (10, image_height_px), (10, 0), (0, 255, 0), 2)

    if (REFERENCE_OBJECT_HEIGHT_PX == -1):
        cv2.imwrite("/home/pi/app/results/scale" + "-" + TIME_TAKEN + ".png", white_image)

    else:
        image_cm = (float(REFERENCE_OBJECT_HEIGHT_CM) * float(image_height_px)) / float(REFERENCE_OBJECT_HEIGHT_PX)
        print("imageCM: %d" % image_cm)

        propotional = (image_cm / float(image_height_px))
        print("propotional: %f" % propotional)

        # Distance in pixels for 1 cm
        point_1_cm = int(1 / propotional)
        print("point_1_cm: %d" % point_1_cm)
        horizontal_line_position = image_height_px
        # Create one horizontal tick per centimetre
        stringToPut = 0

        while (horizontal_line_position >= 0):
            # Draw horizontal line with a spacing of one cm
            cv2.line(white_image, (0, horizontal_line_position), (21, horizontal_line_position), (0, 255, 0), 2)
            cv2.putText(white_image, "%d" % stringToPut, (23, horizontal_line_position + 3),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.4, (0, 0, 0), 1)
            horizontal_line_position = horizontal_line_position - point_1_cm
            stringToPut += 1

        cv2.imwrite("/home/pi/app/results/scale" + "-" + TIME_TAKEN + ".png", white_image)


vase_height = float(sys.argv[1])
reference_object_height_cm = float(sys.argv[2])

# First take a picture
illuminance_value, timeTaken = scheduled_take_picture()

# Measure plant height
calculated_height, my_reference_object_height_px = scheduled_calculate_plant_height(vase_height, reference_object_height_cm, timeTaken)

# Generate scale
scheduled_generate_scale(my_reference_object_height_px, reference_object_height_cm, timeTaken)

# Write the result to a file
f = open("/home/pi/app/results/plant-growth-scheduled.csv", "a")
f.write("imageInput" + "-" + timeTaken + ".png" + "," + str(calculated_height) + "cm" + "," + str(illuminance_value) + "lux")
f.write("\n")
f.close()


E. Frontend

<!DOCTYPE html>
<link rel="stylesheet" type="text/css" href="{{ url_for('static', filename='style.css') }}">
<head>
    <title>Prototype - Plant Growth Analysis</title>
</head>
<body>
    <h2 class="label_image_input">Input Picture</h2>

    <div class="imageRow">
        <div class="imageColumn" style="display: flex">
            <img class="image_input" src="{{ url_for('static', filename='images/imageInput.png') }}" alt="input" title="Input image">
            <img class="image_scale" src="{{ url_for('static', filename='images/scale.png') }}" alt="scale" title="scale image">
        </div>
    </div>

    <h2 class="label_image_output">Output Picture</h2>
    <img class="image_output" src="{{ url_for('static', filename='images/imageOutput.png') }}" alt="image" title="Output image">

    <br>
    <hr>

    <div class="row3" style="display: flex">
        <form method="post" action="/" class="raspi_control">
            <label class="distance-label"><strong>Enable Self-Service mode:</strong></label>
            <input type="checkbox" value="ADMIN_SELFSERVICE" name="checkbox" checked>
            <label class="distance-label">Check to allow self service; Unchecked to connect to SSID: "{{ ssid }}". Then reboot or shutdown</label>
            <br>
            <br>
            <input type="submit" value="Shutdown Pi" name="button" />
            <input type="submit" value="Reboot Pi" name="button" />
        </form>
    </div>

    <br>
    <hr>

    <h3>Reference Object</h3>
    <div class="row4" style="display: flex">
        <form method="post" action="/" autocomplete="off" class="opencv_control">
            <label class="distance-label"><strong>Reference Object height [cm]:</strong></label>
            <input type="number" min="0.0" placeholder="reference object height in cm" title="reference Object height in cm" name="input_reference_object_height" step="any" value={{ reference_object_height_cm }}>
            <br>
            <br>
            <label class="distance-label"><strong>Vase height [cm]:</strong></label>
            <input type="number" min="0.0" placeholder="vase height in cm" title="vase's height in cm" name="input_vase_height" step="any" value={{ vase_height }}>
            <br>
            <br>
            <input type="submit" name="button_reference_object_and_vase" value="save values" />
        </form>
    </div>

    <br>
    <hr>

    <h3>Software</h3>
    <div class="row2" style="display: flex">
        <form method="post" action="/" class="camera_control">
            <input type="submit" value="Take Picture" name="button" />
            <label><strong>Illuminance value in input picture [lux]:</strong> {{ illuminance_value }}</label>
        </form>
    </div>
    <br>
    <div class="row5" style="display: flex">
        <form method="post" action="/" autocomplete="off" class="opencv_control">
            <input type="submit" value="measure illuminance" name="button_get_illuminance" />
            <label class="illuminance-label"><strong>Illuminance value now [lux]:</strong> {{ illuminance_value }}</label>
            <br>
            <br>
            <input type="submit" id="buttonCalculatePlant" value="calculate plant height" name="button" />
            <label class="result-label"><strong>Calculated Plant Height [cm]:</strong> {{ calculated_height }}</label>
            <br>
            <br>
            <label class="schedule-label"><strong>Schedule Measurement</strong></label>
            <br>
            <label class="schedule-label"><strong>Start Date and Time:</strong> {{ time_date_now }}</label>
            <br>
            <br>
            <label class="schedule-label"><strong>Time Interval [hour]:</strong></label>
            <input type="number" min="0" placeholder="enter time interval in hour" title="interval time of measurement in hour" name="input_scheduling_interval_time" value={{ schedule_interval }}>
            <br>
            <br>
            <input type="submit" value="start scheduling" name="button" />
            <input type="submit" value="download result file" name="button" />
            <br>
            <br>
            <input type="submit" value="stop scheduling" name="button" />
            <input type="submit" value="delete result" name="button" />

        </form>
    </div>

</body>
</html>


Declaration

I declare within the meaning of section 16(5) APSO-TI-BM of the Examination and Study Regulations of the Mechatronics Engineering degree programme that this Bachelor thesis has been completed by myself independently, without outside help, and that only the defined sources and study aids were used. Sections that reflect the thoughts or works of others are made known through the definition of sources.

Hamburg, November 1, 2019

City, Date                                    Signature