
Accelerated, Parallel and PROXimal Coordinate Descent (APPROX)

IPAM, February 2014

Peter Richtárik
(joint work with Olivier Fercoq; arXiv:1312.5799)

Contributions

Variants of randomized coordinate descent methods:

• Block – can operate on "blocks" of coordinates, as opposed to just individual coordinates
• General – applies to "general" (= smooth convex) functions, as opposed to special ones such as quadratics
• Proximal – admits a nonsmooth regularizer that is kept intact when solving subproblems; the regularizer is neither smoothed nor approximated
• Parallel – operates on multiple blocks/coordinates in parallel, as opposed to just one block/coordinate at a time
• Accelerated – achieves an O(1/k^2) convergence rate for convex functions, as opposed to O(1/k)
• Efficient – avoids adding two full feature vectors (see the algorithm sketch later in the deck)

Brief History of Randomized Coordinate Descent Methods (+ new long stepsizes)

Introduction

I. Block Structure
II. Block Sampling
III. Proximal Setup
IV. Fast or Normal?

I. Block Structure

N = # coordinates (variables)
n = # blocks
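A minimal sketch of this setup (the block sizes and variable names below are mine, not from the slides): the N coordinates of x are partitioned into n blocks, and the method addresses x block by block.

```python
import numpy as np

# Toy block structure: N coordinates split into n blocks of given sizes.
N = 10
block_sizes = [4, 3, 3]          # hypothetical partition; any sizes work
n = len(block_sizes)             # n = 3 blocks

offsets = np.cumsum([0] + block_sizes)
blocks = [np.arange(offsets[i], offsets[i + 1]) for i in range(n)]

x = np.zeros(N)
x_block_2 = x[blocks[1]]         # copy of the 2nd block (coordinates 4..6)
```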

II. Block Sampling

A block sampling Ŝ is a random subset of the n blocks selected at each iteration; the key parameter is the average # blocks selected by the sampling, τ = E[|Ŝ|].
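One standard sampling from this line of work is the τ-nice sampling, sketched below (the helper name is mine):

```python
import numpy as np

rng = np.random.default_rng(0)

def tau_nice_sampling(n, tau):
    """tau-nice sampling: a subset of tau blocks chosen uniformly at
    random from all subsets of size tau, so E[|S|] = tau exactly."""
    return rng.choice(n, size=tau, replace=False)

S = tau_nice_sampling(n=8, tau=3)   # blocks to update in one iteration
```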

III. Proximal Setup

minimize F(x) = f(x) + ψ(x)

Loss f: convex & smooth
Regularizer ψ: convex & nonsmooth

III. Proximal Setup – Loss Functions: Examples

• Quadratic loss
• L-infinity regression
• L1 regression
• Exponential loss
• Logistic loss
• Square hinge loss

(References on the slide: BKBG'11, RT'11b, TBRS'13, RT'13a, FR'13)
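For concreteness, a few of the listed losses as per-example functions (names and scalings are mine; the talk/papers may normalize differently):

```python
import numpy as np

def quadratic_loss(r):      # r = a_j^T x - b_j
    return 0.5 * r ** 2

def logistic_loss(m):       # m = b_j * (a_j^T x), with label b_j in {-1, +1}
    return np.log1p(np.exp(-m))

def square_hinge_loss(m):   # smooth, unlike the plain hinge
    return 0.5 * max(0.0, 1.0 - m) ** 2

# A smooth loss f is then a sum over examples, f(x) = sum_j loss_j(...).
```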

III. Proximal Setup – Regularizers: Examples

• No regularizer
• Weighted L1 norm (e.g., LASSO)
• Weighted L2 norm
• Box constraints (e.g., SVM dual)
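These regularizers are "kept intact" via their proximal operators, which have closed forms (standard formulas; weights and names below are mine):

```python
import numpy as np

def prox_weighted_l1(x, step, w):
    """prox of psi(x) = sum_i w_i |x_i|: componentwise soft-thresholding."""
    return np.sign(x) * np.maximum(np.abs(x) - step * w, 0.0)

def prox_box(x, lo, hi):
    """prox of the indicator of the box [lo, hi]: projection (e.g., SVM dual)."""
    return np.clip(x, lo, hi)

# With no regularizer, the prox is the identity and each block update is a
# plain (block) gradient step.
```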

The Algorithm

APPROX

Olivier Fercoq and P.R. Accelerated, parallel and proximal coordinate descent, arXiv:1312.5799, December 2013
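A hedged sketch of the method, specialized to blocks of size 1 (N = n), f(x) = 0.5*||Ax − b||^2 and ψ = λ||·||_1; the function name is mine, the stepsize parameters v_i are simplified to squared column norms, and the exact constants follow my reading of arXiv:1312.5799, so treat this as illustrative rather than the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def approx_sketch(A, b, lam, tau, iters):
    """Sketch of APPROX for min 0.5*||Ax - b||^2 + lam*||x||_1, N = n.
    Maintains two sequences u, z with x_k = theta_k^2 * u_k + z_k;
    assumes A has no zero columns."""
    m, n = A.shape
    v = (A ** 2).sum(axis=0)          # simple ESO-style stepsize parameters
    u = np.zeros(n)
    z = np.zeros(n)
    theta = tau / n
    for _ in range(iters):
        S = rng.choice(n, size=tau, replace=False)   # tau-nice sampling
        x = theta ** 2 * u + z        # formed here only for clarity
        grad = A.T @ (A @ x - b)      # a real implementation maintains
                                      # residuals and reads only grad[S]
        for i in S:
            step = tau / (n * theta * v[i])
            t = z[i] - step * grad[i]
            z_new = np.sign(t) * max(abs(t) - step * lam, 0.0)  # prox of lam*|.|
            u[i] -= (1 - n * theta / tau) / theta ** 2 * (z_new - z[i])
            z[i] = z_new
        theta = (np.sqrt(theta ** 4 + 4 * theta ** 2) - theta ** 2) / 2
    return theta ** 2 * u + z
```

The point of the two sequences u and z is the "Efficient" bullet from the contributions slide: only the sampled entries of u and z change per iteration, so the full vector x never has to be assembled by adding two full feature vectors (it is assembled above only to keep the sketch readable).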

Part B: GRADIENT METHODS

B1: GRADIENT DESCENT
B2: PROJECTED GRADIENT DESCENT
B3: PROXIMAL GRADIENT DESCENT (ISTA)
B4: FAST PROXIMAL GRADIENT DESCENT (FISTA)

Part C: RANDOMIZED COORDINATE DESCENT

C1: PROXIMAL COORDINATE DESCENT
C2: PARALLEL COORDINATE DESCENT (PCDM)
C3: DISTRIBUTED COORDINATE DESCENT
C4: FAST PARALLEL COORDINATE DESCENT (APPROX – new)

APPROX: Olivier Fercoq and P.R. Accelerated, parallel and proximal coordinate descent, arXiv:1312.5799, December 2013

PCDM: P.R. and Martin Takac. Parallel coordinate descent methods for big data optimization, arXiv:1212.0873, December 2012 (IMA Fox Prize in Numerical Analysis, 2013)

2D Example

Convergence Rate

Theorem [Fercoq & R. 12/2013]

The bound involves:
• τ = average # coordinates updated per iteration
• n = # blocks
• k = # iterations

and implies that O((n/τ)·sqrt(1/ε)) iterations suffice for ε-accuracy.
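A hedged reconstruction of the form of the bound (constants as I recall them from arXiv:1312.5799; C depends on the initial point and the ESO stepsize weights v introduced later):

```latex
\mathbb{E}\big[F(x_k)\big] - F^* \;\le\; \left( \frac{2n}{(k-1)\tau + 2n} \right)^{2} C,
\qquad
C = \Big(1 - \frac{\tau}{n}\Big)\big(F(x_0) - F^*\big) + \tfrac{1}{2}\,\|x_0 - x^*\|_v^2,
```

so driving the right-hand side below ε indeed needs on the order of (n/τ)·sqrt(C/ε) iterations.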

Special Case: Fully Parallel Variant

All blocks are updated in each iteration (τ = n). The bound then involves:
• normalized weights (summing to n)
• k = # iterations

and again implies an O(1/k^2) rate, now deterministic.

New Stepsizes

Expected Separable Overapproximation (ESO): How to Choose Block Stepsizes?

P.R. and Martin Takac. Parallel coordinate descent methods for big data optimization, arXiv:1212.0873, December 2012
Olivier Fercoq and P.R. Smooth minimization of nonsmooth functions by parallel coordinate descent methods (SPCDM), arXiv:1309.5885, September 2013
P.R. and Martin Takac. Distributed coordinate descent methods for learning with big data, arXiv:1310.2059, October 2013
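For reference, the ESO inequality from the PCDM paper (my transcription of the standard definition; h_{[Ŝ]} denotes h with all blocks outside Ŝ zeroed). The pair (f, Ŝ) admits an ESO with parameters v = (v_1, ..., v_n) if for all x and h:

```latex
\mathbb{E}\Big[ f\big(x + h_{[\hat{S}]}\big) \Big]
\;\le\;
f(x) + \frac{\mathbb{E}|\hat{S}|}{n} \Big( \langle \nabla f(x), h \rangle + \tfrac{1}{2} \|h\|_v^2 \Big),
\qquad
\|h\|_v^2 = \sum_{i=1}^{n} v_i \|h^{(i)}\|^2.
```

The parameters v_i then act as the block stepsizes of the method, which is why the choice of sampling directly dictates how long a step each block can take.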

Assumptions: Function f

[slide: assumptions (a), (b), (c) on f, stated with an example]

Visualizing Assumption (c)

[figure]

New ESO

Theorem (Fercoq & R. 12/2013), parts (i) and (ii)

[slide: ESO with the new long stepsizes]

Comparison with Other Stepsizes for Parallel Coordinate Descent Methods

[slide: worked example comparing stepsize choices]

Complexity for New Stepsizes

With the new stepsizes, the complexity bound involves the average degree of separability of f and an "average" of the Lipschitz constants.

Work in 1 Iteration

Cost of 1 Iteration of APPROX

Assume N = n (all blocks are of size 1) and that A is a sparse matrix. Then the average cost of 1 iteration of APPROX is O(average # nonzeros in a column of A) arithmetic ops; evaluating the derivative of a scalar function costs O(1).

Bottleneck: Computation of Partial Derivatives

Partial derivatives are computed cheaply from residuals that are maintained across iterations.
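A sketch of the residual-maintenance trick behind this cost estimate (function names are mine): for f(x) = 0.5*||Ax − b||^2 with sparse A, keep r = Ax − b up to date, so a partial derivative is an inner product against one sparse column.

```python
import numpy as np
import scipy.sparse as sp

def partial_derivative(A_csc, r, i):
    """grad_i f(x) = A[:, i]^T r, touching only the nonzeros of column i."""
    col = A_csc.getcol(i)
    return float(col.data @ r[col.indices])

def apply_update(A_csc, r, x, i, delta):
    """x[i] += delta, keeping the maintained residual r = Ax - b in sync."""
    col = A_csc.getcol(i)
    x[i] += delta
    r[col.indices] += delta * col.data
```

Both operations cost O(# nonzeros in column i), which averages to the per-iteration cost quoted on the slide.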

Preliminary Experiments

L1 Regularized L1 Regression (Dorothea dataset)

[plots: Gradient Method vs. Nesterov's Accelerated Gradient Method vs. SPCDM vs. APPROX]

L1 Regularized Least Squares / LASSO (KDDB dataset)

[plot: PCDM vs. APPROX]

Training Linear SVMs (Malicious URL dataset)

[plot]

Importance Sampling

APPROX with Importance Sampling

Zheng Qu and P.R. Accelerated coordinate descent with importance sampling, manuscript, 2014
P.R. and Martin Takac. On optimal probabilities in stochastic coordinate descent methods, arXiv:1310.3438, 2013

Convergence Rate

Theorem [Qu & R. 2014]

[slide: rate under importance sampling]

Serial Case: Optimal Probabilities

Nonuniform serial sampling: coordinate i is picked with probability p_i. The optimal probabilities (those minimizing the complexity bound) improve on uniform probabilities p_i = 1/n.
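A sketch of such nonuniform serial sampling; per arXiv:1310.3438, probabilities proportional to the coordinate Lipschitz constants L_i are optimal in the serial smooth case (the toy constants below are made up):

```python
import numpy as np

rng = np.random.default_rng(0)

L = np.array([10.0, 1.0, 1.0, 1.0])   # toy coordinate Lipschitz constants
p = L / L.sum()                        # importance sampling; uniform is 1/n

def sample_coordinate():
    return int(rng.choice(len(L), p=p))   # index 0 drawn ~77% of the time
```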

Extra 40 Slides
