
Shape from Shading through Shape Evolution

Dawei Yang, Jia Deng

Computer Science and Engineering, University of Michigan, Ann Arbor

Cube:     F(x, y, z) = max(|x|, |y|, |z|) − L/2

Cylinder: F(x, y, z) = max((x^2 + y^2)/R^2, |z|/H) − 1

Sphere:   F(x, y, z) = x^2 + y^2 + z^2 − R^2

Cone:     F(x, y, z) = max((x^2 + y^2)/R^2 − z^2/H^2, −z, z − H)
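As a sketch (function and parameter names are my own, not the poster's), the four primitives above can be written directly as signed implicit functions, negative inside the shape and positive outside:

```python
import numpy as np

# Each primitive returns F(x, y, z): negative inside the shape,
# zero on the surface, positive outside.

def cube(x, y, z, L=2.0):
    return np.maximum.reduce([np.abs(x), np.abs(y), np.abs(z)]) - L / 2

def cylinder(x, y, z, R=1.0, H=1.0):
    return np.maximum((x**2 + y**2) / R**2, np.abs(z) / H) - 1

def sphere(x, y, z, R=1.0):
    return x**2 + y**2 + z**2 - R**2

def cone(x, y, z, R=1.0, H=1.0):
    return np.maximum.reduce([(x**2 + y**2) / R**2 - z**2 / H**2, -z, z - H])

# The origin lies inside the cube and the sphere:
print(cube(0.0, 0.0, 0.0) < 0)    # True
print(sphere(0.0, 0.0, 0.0) < 0)  # True
```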

[Figure: Shape / Implicit Surface / Computation Graph — for each primitive (sphere, cylinder, cube, cone), its implicit surface and the corresponding computation graph built from the inputs x, y, z with sqr/abs nodes, weighted sums (e.g. 1/R^2, 1/H, −L/2, −1/H^2), and max nodes]

Computation Graph Representation
• Shapes represented by implicit functions
• Functions encoded as computation graphs
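One minimal way to realize this encoding (a sketch only; the node types and the nested-tuple representation are my assumptions, not the paper's exact implementation) is a graph of elementary ops evaluated recursively:

```python
import numpy as np

# A computation-graph node: an op applied to child nodes.
# Leaves are the coordinate inputs "x", "y", "z" or numeric constants.
OPS = {
    "sqr": lambda a: a**2,
    "abs": np.abs,
    "neg": lambda a: -a,
    "add": lambda a, b: a + b,
    "min": np.minimum,
    "max": np.maximum,
}

def evaluate(node, x, y, z):
    """Recursively evaluate an implicit function encoded as nested tuples."""
    if node == "x": return x
    if node == "y": return y
    if node == "z": return z
    if isinstance(node, (int, float)): return node
    op, *children = node
    return OPS[op](*(evaluate(c, x, y, z) for c in children))

# Sphere graph: x^2 + y^2 + z^2 - R^2 with R = 1
sphere = ("add", ("add", ("sqr", "x"), ("sqr", "y")),
                 ("add", ("sqr", "z"), -1.0))

print(evaluate(sphere, 0.0, 0.0, 0.0))  # -1.0 (inside)
```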

Task
• Recover a normal map from a single image
• Usually assume a uniform diffuse surface (Lambertian)

[Figure: input image and the corresponding normal map (visualization)]

Previous Work
• Analytical solutions: make substantial assumptions
• Learning-based approaches: more flexible, but data-hungry

Advantages
• No expensive manual collection of shapes
• Unlimited number of new shapes

[Figure: pipeline — a population of shapes is repeatedly evolved; at each stage the shapes are rendered into images and normals, the image-to-normal network is trained on them, and the network is evaluated on real images]

Incrementally Evolve a Dataset for Training
• A dataset of shapes is used for image synthesis
• In each epoch, the shape with the highest fitness is added to the shape dataset
• The fitness is the validation performance of the network trained on the shape dataset plus the current shape to be evaluated

Incrementally Train a Network for Evolution
• The network is trained on the synthetic images
• The network is evaluated on the real images to give feedback to the evolution of shapes
• The network trained on the incremental dataset is retained for the next evolution epoch
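The interleaved evolve-and-train procedure described above can be sketched schematically. This is not the paper's implementation: `train`, `validate`, `render`, and `mutate` are hypothetical stand-ins for the actual network training, real-image validation, renderer, and shape-evolution operators.

```python
def evolve_and_train(network, population, epochs, train, validate, render, mutate):
    """Schematic of the interleaved evolve/train loop (stand-in callables)."""
    dataset = []                                  # incrementally grown shape dataset
    for _ in range(epochs):
        candidates = mutate(population)           # generate new candidate shapes
        # Fitness of a candidate = validation performance of the network
        # trained on (current dataset + that candidate).
        scores = []
        for shape in candidates:
            net = train(network, render(dataset + [shape]))
            scores.append(validate(net))
        best = max(range(len(candidates)), key=scores.__getitem__)
        dataset.append(candidates[best])          # keep the fittest shape
        network = train(network, render(dataset)) # retained for the next epoch
        population = candidates
    return network, dataset
```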

[Figure: composing two computation graphs — the inputs (x, y, z) pass through transformation layers to give (x(1), y(1), z(1)) and (x(2), y(2), z(2)), which feed the subgraphs F1 and F2; a min node merges them into the composed graph]

Linear Transformation
• Insert a transformation layer

(x′, y′, z′)^T = (λA)^−1 ((x, y, z)^T − (b1, b2, b3)^T)

where A is a rotation matrix, λ a scaling factor, and b = (b1, b2, b3)^T a translation vector.
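Transforming a shape by rotation A, scale λ, and translation b amounts to inversely transforming the query point before evaluating the original function, i.e. F′(p) = F((λA)^−1 (p − b)). A sketch (function name is mine):

```python
import numpy as np

def transformed(F, A, lam, b):
    """Return F' with F'(p) = F((lam * A)^{-1} (p - b)).

    The shape is rotated/scaled/translated by inversely transforming the
    query point before evaluating the original implicit function F.
    """
    M_inv = np.linalg.inv(lam * np.asarray(A, dtype=float))
    b = np.asarray(b, dtype=float)
    def F_prime(x, y, z):
        p = M_inv @ (np.array([x, y, z], dtype=float) - b)
        return F(*p)
    return F_prime

# Translate a unit sphere by b = (2, 0, 0): (2, 0, 0) becomes its center.
sphere = lambda x, y, z: x**2 + y**2 + z**2 - 1.0
moved = transformed(sphere, np.eye(3), 1.0, [2.0, 0.0, 0.0])
print(moved(2.0, 0.0, 0.0))  # -1.0
```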

Shape Composition
• Union/intersection/difference
• Combine two computation graphs

F_union(x, y, z) = min(F1(x, y, z), F2(x, y, z))

F_intersection(x, y, z) = max(F1(x, y, z), F2(x, y, z))

F_difference(1,2)(x, y, z) = max(F1(x, y, z), −F2(x, y, z))
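The three composition formulas above translate directly into higher-order functions over implicit functions (negative-inside convention); a minimal sketch:

```python
# Boolean composition of implicit functions via min/max, following the
# union/intersection/difference formulas (negative inside the shape).

def union(F1, F2):
    return lambda x, y, z: min(F1(x, y, z), F2(x, y, z))

def intersection(F1, F2):
    return lambda x, y, z: max(F1(x, y, z), F2(x, y, z))

def difference(F1, F2):
    return lambda x, y, z: max(F1(x, y, z), -F2(x, y, z))

# Two unit spheres centered at x = 0 and x = 1:
s1 = lambda x, y, z: x**2 + y**2 + z**2 - 1.0
s2 = lambda x, y, z: (x - 1)**2 + y**2 + z**2 - 1.0

print(union(s1, s2)(0.5, 0.0, 0.0) < 0)        # True: inside both spheres
print(difference(s1, s2)(-0.5, 0.0, 0.0) < 0)  # True: inside s1, outside s2
```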

Generate New Shapes
• Randomly sample two parents from the population
• Apply random rotation, scaling and translation to the parents
• Create the child by union/intersection/difference

Fitness Propagation
• Each shape, including parent shapes, is evaluated
• A parent is assigned the best fitness score of its children and itself

Computational Resource Constraint
• Memory consumption would double without a constraint
• Allow the number of nodes to grow at most linearly
• Shapes with too-large graphs are discarded when constructing the new population

Promoting Diversity
• With external fitness only, the shapes would evolve to a homogeneous distribution
• A fixed proportion is sampled only according to the computation graph size

Discarding Trivial Compositions
• Remove degenerate cases in union/intersection/difference when creating the child
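The fitness-propagation rule can be sketched as follows (a sketch only; the `children_of` mapping from a parent to the children it produced is my assumed data structure):

```python
# Fitness propagation: each parent's fitness becomes the best fitness
# among itself and its children.

def propagate_fitness(fitness, children_of):
    """fitness: {shape_id: score}; children_of: {parent_id: [child_ids]}."""
    propagated = dict(fitness)
    for parent, children in children_of.items():
        propagated[parent] = max(fitness[c] for c in children + [parent])
    return propagated

# Parent 0 produced children 2 and 3; child 3 scored best, so parent 0
# inherits that score.
scores = {0: 0.40, 1: 0.55, 2: 0.35, 3: 0.62}
print(propagate_fitness(scores, {0: [2, 3]}))
```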

Summary                          Stats ↑                        Errors ↓
                                 ≤ 11.25°  ≤ 22.5°  ≤ 30°      MAE     MSE
Random*                          1.9%      7.5%     13.1%      1.1627  1.3071
SIRFS [1]                        20.4%     53.3%    70.9%      0.4575  0.2964
ShapeNet-vanilla                 12.7%     42.4%    62.8%      0.4831  0.2901
ShapeNet-incremental             15.2%     48.4%    66.4%      0.4597  0.2717
Ours-no-evolution-plus-ShapeNet  14.2%     53.0%    72.1%      0.4232  0.2233
Ours-no-evolution                17.3%     50.2%    66.1%      0.4673  0.2903
Ours-no-feedback                 19.1%     49.5%    66.3%      0.4477  0.2624
Ours                             21.6%     55.5%    73.5%      0.4064  0.2204

Shape from Shading
• MIT-Berkeley Intrinsic Images Dataset [1]
• 10 training images + 10 test images
• Comparison with ShapeNet [2] shapes

[Figure: qualitative results — columns show the input, our prediction with its angle error, the ground truth, and the SIRFS prediction with its angle error]

Evolve towards a Target Shape
• Measure: intersection over union (IoU)
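Shape-level IoU between two implicit surfaces can be approximated on a voxel grid. A minimal sketch (grid size and extent are my assumptions, not the poster's settings):

```python
import numpy as np

def voxel_iou(F1, F2, n=32, extent=2.0):
    """Intersection over union of two implicit shapes on an n^3 voxel grid.

    A voxel is occupied where F < 0 (inside); `extent` is the grid half-width.
    """
    t = np.linspace(-extent, extent, n)
    x, y, z = np.meshgrid(t, t, t, indexing="ij")
    a, b = F1(x, y, z) < 0, F2(x, y, z) < 0
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union > 0 else 1.0

sphere = lambda x, y, z: x**2 + y**2 + z**2 - 1.0
print(voxel_iou(sphere, sphere))  # 1.0 for identical shapes
```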

[Figure: two target shapes and the evolved shapes at t = 5, 50, 100, 200 and at t = 10, 100, 200, 400]

[1] J. T. Barron and J. Malik. Shape, illumination, and reflectance from shading. TPAMI, 2015.

[2] A. X. Chang, T. Funkhouser, L. Guibas, P. Hanrahan, Q. Huang, Z. Li, S. Savarese, M. Savva, S. Song, H. Su, J. Xiao, L. Yi, and F. Yu. ShapeNet: An Information-Rich 3D Model Repository. Technical Report arXiv:1512.03012 [cs.GR], 2015.

This work is partially supported by the National Science Foundation under Grant No. 1617767.

Our Approach
• Use synthetic images to train a shape-from-shading network
• Evolve a set of shapes from scratch to render the images
• The network gives feedback on how good the shapes are
