

Adversarial Large-scale Root Gap Inpainting with Policy Gradient Method

Hao Chen [email protected]
Mario Valerio Giuffrida v.giuffrida@ed.ac.uk
Peter Doerner [email protected]
Sotirios A. Tsaftaris [email protected]

Computer Vision Problems in Plant Phenotyping (CVPPP) 2019
http://chickpearoots.org • http://www.valeriogiuffrida.academy • http://tsaftaris.com

• Root study is crucial for plant phenotyping.
• Non-invasive root imaging remains challenging.
• Segmentation is a fundamental step for trait identification.
• Gaps appear in root masks due to soil opacity.
• Root trait extraction is restricted by these gaps.
• The lack of ground-truth data for chickpea makes model training difficult.

Motivation and Contributions

Proposed Architecture

Policy Gradient

Funding: BB/P023487/1, EP/N509644/1

Experimental Results

Conclusions

Our main contributions:

• Inpainting to solve the root gap recovery problem.
• Adversarial learning to train the model, including a local discriminator and a global discriminator.
• Policy Gradient from reinforcement learning to endow the model with a global view of the whole root, while the model is still trained on root patches.
• Synthetic root data (Lobet et al., 2016) to train the model instead of real chickpea data, with augmentation to bridge the domain gap.

[Architecture figure: data pipeline with (a) Augmentation (original → transformed), (b) Patch Extraction, (c) Introduce Gaps (ground truths → corrupted inputs), (d) Generator G (encoder GE + decoder GD), (e) Predictions of Probability, (f) Local Discriminator DL (real/fake on real vs. thresholded fake patches), (g) Bernoulli Sample & Put Back, and (h) Global Discriminator DG (shared encoder GE, 1x1 conv, average pooling, mapping function with sigmoid, 512-d embeddings; dot-product similarity between the complete full root and the inpainted full root used as reward).]

The model consists of an inpainting generator, a local discriminator, and a global discriminator:
A. Data Augmentation: make synthetic roots more similar to chickpea.
B. Patch Extraction: divide a whole root into small patches.
C. Introduce Gaps: add known gaps for training an inpainting model.
D. Generator: inpaint the corrupted input roots.
E. Predictions: probability maps produced by the generator.
F. Local Discriminator: a patch discriminator improving local details.
G. Bernoulli Sample & Put Back: re-compose the predicted patches from the generator into a whole inpainted root.
H. Global Discriminator: computes similarity between the complete and the inpainted roots to improve completeness.
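Steps B and C of the pipeline can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, patch size, stride, and gap geometry are assumptions, since the poster does not state them.

```python
import numpy as np

def extract_patches(mask, patch_size=64, stride=64):
    # Step B: tile a binary whole-root mask into square patches.
    # patch_size and stride are illustrative choices.
    h, w = mask.shape
    patches = [
        mask[y:y + patch_size, x:x + patch_size]
        for y in range(0, h - patch_size + 1, stride)
        for x in range(0, w - patch_size + 1, stride)
    ]
    return np.stack(patches)

def introduce_gaps(patch, n_gaps=3, gap_size=8, seed=None):
    # Step C: zero out random square regions so the original patch
    # serves as known ground truth for training the inpainting model.
    rng = np.random.default_rng(seed)
    corrupted = patch.copy()
    h, w = patch.shape
    for _ in range(n_gaps):
        y = int(rng.integers(0, h - gap_size + 1))
        x = int(rng.integers(0, w - gap_size + 1))
        corrupted[y:y + gap_size, x:x + gap_size] = 0
    return corrupted
```

Because the gaps are introduced synthetically, each (corrupted, original) pair gives the generator a fully supervised training signal.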

• Policy Gradient updates the generator with the global discriminator at the whole-root level.

• It is a way to include non-differentiable processes in training, and is broadly used in reinforcement learning.
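The Bernoulli "sample & put back" step (g) is non-differentiable, so the generator cannot receive the global discriminator's signal through ordinary backpropagation. A REINFORCE-style surrogate loss works around this: treat the sampled pixels as actions and the discriminator's similarity score as the reward. The sketch below is conceptual; the exact loss weighting and baseline used in the paper are assumptions.

```python
import numpy as np

def reinforce_loss(probs, samples, reward, baseline=0.0, eps=1e-8):
    # probs:   generator output probabilities per pixel, in (0, 1)
    # samples: binary pixels drawn from Bernoulli(probs)
    # reward:  scalar similarity score from the global discriminator
    # Minimizing this loss (w.r.t. the parameters behind `probs`) raises
    # the log-likelihood of samples rewarded above the baseline.
    log_prob = (samples * np.log(probs + eps)
                + (1.0 - samples) * np.log(1.0 - probs + eps)).sum()
    return -(reward - baseline) * log_prob
```

A baseline (e.g. a running mean of past rewards) is commonly subtracted to reduce the variance of the gradient estimate.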

1. Synthetic Root Patches Inpainting

Models             | MSE Within Gaps ↓ | # Pixel Diff. ↑ | # Connected Components Diff. ↑
Chen et al. (2018) | 0.81 (0.17)       | 0.63 (0.53)     | 0.81 (0.27)
Ours (w/o global)  | 0.78 (0.17)       | 0.73 (0.67)     | 0.91 (0.50)
Ours (w/ global)   | 0.73 (0.17)       | 0.77 (0.71)     | 0.97 (0.85)

• With only the local discriminator, the MSE error is reduced and the results are more complete.

• Global discriminator further boosts the performance.

• Our model better inpaints chickpea root segmentation mask patches without actually training on them.
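The "# Connected Components Diff." metric rewards inpainting that reconnects fragmented root segments: good gap filling should bring the number of connected components closer to that of the complete root. Counting components can be sketched as below; the exact normalization behind the reported scores is not stated on the poster, so this shows only the counting step.

```python
import numpy as np
from collections import deque

def count_components(mask):
    # Count 4-connected foreground components in a binary mask via BFS.
    h, w = mask.shape
    seen = np.zeros((h, w), dtype=bool)
    n = 0
    for sy in range(h):
        for sx in range(w):
            if mask[sy, sx] and not seen[sy, sx]:
                n += 1
                seen[sy, sx] = True
                q = deque([(sy, sx)])
                while q:
                    y, x = q.popleft()
                    for ny, nx in ((y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            q.append((ny, nx))
    return n
```

In practice a library routine such as `scipy.ndimage.label` would do the same job; the explicit BFS is shown only to keep the example self-contained.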

2. Chickpea Root Patches Inpainting

Models             | MSE Within Gaps ↓ | # Pixel Diff. ↑ | # Connected Components Diff. ↑
Chen et al. (2018) | 0.91 (0.08)       | 0.81 (0.51)     | 0.87 (0.67)
Ours (w/o global)  | 0.87 (0.08)       | 0.82 (0.52)     | 0.95 (0.83)
Ours (w/ global)   | 0.84 (0.07)       | 0.84 (0.56)     | 0.96 (0.83)

3. Chickpea Whole Root Inpainting

• We use RIA-J (Lobet, 2016) to measure root traits, showing that our model can 'repair' traits corrupted by gaps.

Models             | Complete ↑  | # Tips ↑    | Root Length ↑ | Convex Hull ↑
Chen et al. (2018) | 0.59 (0.39) | 0.20 (0.11) | 0.08 (0.08)   | 0.61 (0.08)
Ours (w/o global)  | 0.64 (0.47) | 0.25 (0.15) | 0.07 (0.07)   | 0.64 (0.10)
Ours (w/ global)   | 0.69 (0.55) | 0.29 (0.18) | 0.08 (0.08)   | 0.70 (0.11)

• Our model generates the most complete and accurate inpainting results on whole chickpea root masks.

[Figure: qualitative whole-root comparison — columns: Input, Baseline, Ours (w/o global), Ours (w/ global); rows (a) and (b).]

• An effective way to fill gaps in root segmentation masks obtained with affordable plant root phenotyping systems.

• Adversarial learning and Policy Gradient encourage locally high-quality and globally complete inpainting results.

• Experiments show state-of-the-art results.
• A general post-processing technique for thin-structure gap recovery.

Qualitative comparison at the whole-root level. (a) and (b) are examples of real chickpea roots inpainted with the method proposed by Chen et al. (2018) and with two variants of the proposed method, without and with the global discriminator.

References
Chen et al. Root gap correction with a deep learning inpainting model. CVPPP Workshop at BMVC, 2018.
Lobet et al. Library of simulated root images. Zenodo, 2016.

Qualitative results on real chickpea images at the patch level. Different colours indicate different segments caused by gaps.