
Page 1:

MULTIVERSE COMPUTING

Page 2:

We apply Extreme Quantum Computing to Finance, now

And we mean it! Let us show you real proposals for using a Quantum Annealer in Finance, today

A Presentation from Multiverse Computing

Page 3:

Four examples from our portfolio

Quantum, Computer, Finance & Feasibility

Page 4:

Four real possibilities, now

● Feature selection in Credit Scoring: a Credit Scoring related problem (Banking)

● Optimal Trading Trajectories: an Asset-Trading related problem (Investment / Treasury)

● Best arbitrage opportunity detection: an Asset-Trading related problem (Investment / Treasury)

● Neural Network training in Credit Scoring: a Credit Scoring related problem (Banking)

All the possibilities will be adapted to a D-Wave Quantum Annealer. All of them have previously been proven feasible, but the solutions will be put on steroids to make this work remarkable.

Page 5:

Optimal Trading

● The goal is to decide how to invest an amount of euros in a set of assets, with an investment horizon divided into several time steps.

● Given a forecast of future returns and the risk of each asset at each time step, the asset manager must decide how much to invest in each asset, at each time step, while taking into account transaction costs, including permanent and temporary market impact costs.

Based on: "Solving the Optimal Trading Trajectory Problem using a Quantum Annealer"; Gili Rosenberg, Phil Goddard, Poya Haghnegahdar, Peter Carr, Kesheng Wu, Marcos López de Prado.
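To make the setup concrete, here is a minimal sketch in Python, with made-up illustration numbers and a brute-force search standing in for the annealer: binary hold / don't-hold decisions over a few time steps are scored by a quadratic cost that rewards forecast returns and penalises risk and transaction costs. The paper's actual formulation is richer (integer holdings, covariances, permanent and temporary market impact).

```python
# A minimal sketch (not the paper's full model): a 2-asset, 2-period trading
# problem with binary "hold / don't hold" decisions and a quadratic cost.
# Returns, risks and cost values below are made-up illustration numbers.
import itertools
import numpy as np

n_assets, n_steps = 2, 2
mu = np.array([[0.02, 0.01],     # expected return of each asset at each step
               [0.015, 0.03]])
risk = np.array([[0.01, 0.02],   # a crude per-asset risk proxy (no covariances)
                 [0.02, 0.01]])
gamma = 1.0                      # risk aversion
cost = 0.005                     # transaction cost for changing a holding

def objective(x):
    """x[t][a] = 1 if we hold asset a during step t. Lower is better."""
    x = np.asarray(x).reshape(n_steps, n_assets)
    value = 0.0
    for t in range(n_steps):
        value -= mu[t] @ x[t]               # reward expected return
        value += gamma * risk[t] @ x[t]     # penalise risk
        if t > 0:
            value += cost * np.sum(np.abs(x[t] - x[t - 1]))  # trading costs
    return value

# Brute force over all binary configurations (fine for 4 variables;
# an annealer becomes relevant when there are hundreds of them).
best = min(itertools.product([0, 1], repeat=n_assets * n_steps), key=objective)
print("best holdings per step:", np.asarray(best).reshape(n_steps, n_assets))
```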

Page 6:

Best Arbitrage Opportunity

● The goal is to find the most profitable arbitrage opportunity, together with the near-best ones.

● The difference between arbitrage detection and finding the best arbitrage opportunity can be large. Consider an example with two arbitrage opportunities, one with a tiny profit and the other with a huge profit. A trader would be interested in the larger profit, but the usual approach stops when it finds the first arbitrage opportunity. The approach here is to find not only the most profitable arbitrage opportunity but also others that are near-best.

Based on: "Finding optimal arbitrage opportunities using a quantum annealer"; Gili Rosenberg.
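As an illustration of the "best versus first" distinction, the sketch below ranks every three-currency cycle by profit instead of stopping at the first profitable one. It uses toy exchange rates and brute-force enumeration; the QUBO-on-annealer approach matters once the currency graph is far too large for this.

```python
# A minimal sketch of "best vs first" arbitrage: enumerate every cycle of
# length 3 over a toy set of currencies and rank them by profit, instead of
# stopping at the first cycle whose rate product exceeds 1.
# The exchange rates below are illustrative, not market data.
import itertools

currencies = ["EUR", "USD", "GBP", "JPY"]
rate = {  # rate[(a, b)] = units of b received per unit of a
    ("EUR", "USD"): 1.10, ("USD", "EUR"): 0.92,
    ("EUR", "GBP"): 0.86, ("GBP", "EUR"): 1.17,
    ("EUR", "JPY"): 160.0, ("JPY", "EUR"): 0.0063,
    ("USD", "GBP"): 0.79, ("GBP", "USD"): 1.28,
    ("USD", "JPY"): 146.0, ("JPY", "USD"): 0.0069,
    ("GBP", "JPY"): 186.0, ("JPY", "GBP"): 0.0054,
}

def cycle_profit(cycle):
    """Multiplicative return of converting around the cycle and back."""
    prod = 1.0
    for a, b in zip(cycle, cycle[1:] + cycle[:1]):
        prod *= rate[(a, b)]
    return prod

cycles = list(itertools.permutations(currencies, 3))
ranked = sorted(cycles, key=cycle_profit, reverse=True)
for c in ranked[:3]:  # the best opportunity and the near-best ones
    print(c, round(cycle_profit(c), 4))
```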

Page 7:

Credit Scoring feature selection

● The goal is to build a small set of features to be used as the base for a Credit Scoring system.

● In Credit Scoring, feature selection is used to reduce the number of variables used as input. This can be done with a quadratic unconstrained binary optimization (QUBO) model on a quantum annealer, running faster than classical solvers and yielding a smaller feature subset than other techniques, with no loss of accuracy.

Based on: "Optimal feature selection in credit scoring and classification using a quantum annealer"; Andrew Milne, Maxwell Rounds, and Phil Goddard.
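A minimal sketch of the idea, with synthetic data and a brute-force solver in place of the annealer (the exact QUBO in the paper differs in its details): reward features that correlate with the target, penalise redundant pairs of features, and softly constrain the subset size.

```python
# A sketch of feature selection as a QUBO: reward features that correlate
# with the target, penalise pairs of features that correlate with each
# other, and solve a tiny instance by brute force where an annealer would
# handle large ones. Data and weights are synthetic illustration values.
import itertools
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_features, k = 200, 6, 3          # keep k of the 6 candidate features
X = rng.normal(size=(n_samples, n_features))
y = X[:, 0] + 0.5 * X[:, 1] + 0.1 * rng.normal(size=n_samples)  # synthetic target

corr_xy = np.array([abs(np.corrcoef(X[:, i], y)[0, 1]) for i in range(n_features)])
corr_xx = np.abs(np.corrcoef(X, rowvar=False))

alpha, penalty = 0.5, 10.0                    # relevance/redundancy trade-off

def qubo_energy(x):
    x = np.asarray(x)
    relevance = alpha * corr_xy @ x
    redundancy = (1 - alpha) * x @ corr_xx @ x
    cardinality = penalty * (x.sum() - k) ** 2   # soft "choose exactly k" constraint
    return -relevance + redundancy + cardinality

best = min(itertools.product([0, 1], repeat=n_features), key=qubo_energy)
print("selected features:", [i for i, bit in enumerate(best) if bit])
```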

Page 8:

Credit Scoring Deep Neural Network training

● The goal is to create a small but working Credit Scoring system based on Neural Networks.

● A well-known approach for training a Deep Neural Network starts by training a generative Deep Belief Network model. However, the training can be time-consuming. We may use an alternative way (Restricted Boltzmann Machines trained using samples from a D-Wave annealer) for the training of a Credit Scoring system.

Based on: M. Benedetti, J. Realpe-Gómez, R. Biswas, and A. Perdomo-Ortiz, "Estimation of effective temperatures in quantum annealers for sampling applications: A case study with possible applications in deep learning".

Page 9:

The technical part: how the algorithms actually work, and how they are able to perform much better than classical ones

Page 10:

What makes quantum computers faster?

● QUANTUM SUPERPOSITION

A quantum processor can be in many different states simultaneously. When I tell my processor to run an operation, it runs in parallel on each state my system is in. The number of available states grows exponentially with the number of qubits ⇒ every time I add a qubit to my processor, the number of operations I can run in parallel doubles!

● QUANTUM ENTANGLEMENT

Qubits are correlated quantumly amongst themselves, in a way that is simply impossible for classical computers! This allows much more powerful computations.
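The counting argument can be stated concretely in a tiny sketch (nothing quantum is simulated here beyond storing the amplitudes): an n-qubit register is described by 2^n amplitudes, so each added qubit doubles the number of basis states a single operation acts on in superposition.

```python
# A small illustration of the counting argument: an n-qubit register is
# described by 2**n amplitudes, so adding a qubit doubles that number.
import numpy as np

for n_qubits in range(1, 6):
    dim = 2 ** n_qubits
    state = np.full(dim, 1 / np.sqrt(dim), dtype=complex)  # uniform superposition
    print(f"{n_qubits} qubit(s): {state.size} amplitudes")
```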

Page 11:

What is Quantum Annealing?

It's a tool to solve OPTIMIZATION PROBLEMS. In physics: ENERGY MINIMIZATION, e.g., protein folding.

Toy example: x1, x2, x3 = 0, 1 (bits); which configurations satisfy x1 + x2 + x3 = 1? (A 3-bit instance of the "Exact Cover" problem, which is NP-Complete.)

Solutions: (0,0,1), (0,1,0), (1,0,0)

As an optimization problem: find the minimum of the QUBO formula (QUadratic Unconstrained Binary Optimization):

f(x1, x2, x3) = (x1 + x2 + x3 - 1)^2 = 2x1x2 + 2x2x3 + 2x1x3 - x1 - x2 - x3 + 1
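For readers who want to check the toy example, here is a brute-force enumeration in Python; a quantum annealer only becomes interesting once there are far too many configurations to enumerate.

```python
# The toy QUBO above, checked by enumerating all 2**3 configurations.
import itertools

def f(x1, x2, x3):
    return (x1 + x2 + x3 - 1) ** 2  # = 2x1x2 + 2x2x3 + 2x1x3 - x1 - x2 - x3 + 1

values = {x: f(*x) for x in itertools.product([0, 1], repeat=3)}
minimum = min(values.values())
print("minima:", [x for x, v in values.items() if v == minimum])
# -> minima: [(0, 0, 1), (0, 1, 0), (1, 0, 0)]
```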

Page 12:

QUBO as a physics problem

Pauli matrices behave like "tiny quantum magnets": a bit is either 0 or 1, while a qubit can also be in a superposition of 0 and 1. In superconducting flux qubits, the tiny magnet is created by superconducting currents.

[Figure: landscape of the cost function f(x1, x2, x3) over the configurations, with minima at (0,0,1), (0,1,0) and (1,0,0).]

One maps the cost function to a quantum Hamiltonian HP (energy operator). The energy landscape is given by the eigenvalues of HP. The physical problem is then to find the lowest-energy (ground) state of an interacting magnet. Extremely difficult in general!!!
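As a concrete illustration of the mapping, here is a small Python sketch that applies the standard substitution x_i = (1 + s_i)/2 to the toy QUBO from the previous slide, producing the local fields h_i and couplings J_ij of an Ising magnet with spins s_i = ±1, which is the form the annealer's "tiny magnets" implement.

```python
# Rewrite the toy QUBO f(x1,x2,x3) as an Ising energy
#   E(s) = const + sum_i h_i s_i + sum_{i<j} J_ij s_i s_j
# using x_i = (1 + s_i) / 2. Coefficients are those of the toy f above.
lin = {0: -1.0, 1: -1.0, 2: -1.0}               # linear QUBO terms
quad = {(0, 1): 2.0, (1, 2): 2.0, (0, 2): 2.0}  # quadratic QUBO terms
offset = 1.0

h = {i: a / 2 for i, a in lin.items()}           # local fields
J = {}                                           # couplings between spins
const = offset + sum(a / 2 for a in lin.values())
for (i, j), b in quad.items():
    J[(i, j)] = b / 4
    h[i] += b / 4
    h[j] += b / 4
    const += b / 4

print("h =", h)        # h = {0: 0.5, 1: 0.5, 2: 0.5}
print("J =", J)        # J = {(0, 1): 0.5, (1, 2): 0.5, (0, 2): 0.5}
print("const =", const)
```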

Page 13:

Strategy: adiabatic quantum computation

● Start in a quantum superposition of 0 and 1 for all the qubits, the ground state of H0: a complicated entangled wave function.

● Slowly (i.e., adiabatically) interpolate between H0 and HP, by controlling magnetic fields and interactions. If slow enough, in the end there is a large probability of being in the ground state of HP.

● Measure the individual qubits after the interpolation: the result is a classical state (a string of 0's and 1's), e.g. (0,0,1).

● Repeat the process many times, and choose the best outcome (non-ideal conditions may imply near-optimal solutions).
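A sketch of this "repeat many times and keep the best" workflow, written against D-Wave's open-source Ocean tools: dimod for the model, plus the classical neal sampler as a stand-in for the quantum annealer (on real hardware one would submit the same model through the dwave.system samplers instead).

```python
# "Repeat many times, keep the best" on the toy QUBO, using dimod + neal
# (a classical simulated-annealing stand-in for the quantum annealer).
import dimod
import neal

# Toy QUBO from the earlier slide: (x1 + x2 + x3 - 1)^2
Q = {(0, 0): -1, (1, 1): -1, (2, 2): -1,   # linear terms on the diagonal
     (0, 1): 2, (1, 2): 2, (0, 2): 2}      # quadratic couplings
bqm = dimod.BinaryQuadraticModel.from_qubo(Q, offset=1.0)

sampler = neal.SimulatedAnnealingSampler()
sampleset = sampler.sample(bqm, num_reads=100)  # 100 independent anneals

print("best configuration:", sampleset.first.sample)  # e.g. {0: 0, 1: 0, 2: 1}
print("best energy:", sampleset.first.energy)          # 0.0
```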

Page 14:

Why faster than classical?

● Classical annealing relies on thermal fluctuations to escape traps in the energy landscape, and it is slow to get out of them.

● Quantum annealing has weak thermal fluctuations but STRONG quantum tunneling, so it is much faster to get out!!

Page 15:

How to train neural networks using quantum annealers

The key to the success of unsupervised learning lies in breakthroughs in efficient sampling algorithms.

Idea: can we do this with a quantum annealer?

This is fundamentally different from using annealers to solve an optimization problem.

Page 16:

Boltzmann machines

● D is a dataset with distribution Q.

● A Boltzmann machine models the data as a probability distribution P over binary units si: visible units carry data from the original dataset, while hidden units are unobserved and are introduced to capture higher structure in the data.

● Aim: to find parameters Wi,j and bi that make P as close as possible to Q.

● We choose PB to be a Boltzmann distribution: PB(s) = exp(-E(s)) / Z, with energy E(s) = - Σi bi si - Σi<j Wi,j si sj.

● Finding P means we have an algorithm which "understands" patterns in the dataset.

● Problem: while Q is very easy to construct and sample, P is very hard to sample, and convergence of Wi,j and bi is very slow.

● Idea: construct PB and measure data vectors si directly.
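To make the notation concrete, here is a tiny sketch with three units and toy numbers: it builds the Boltzmann distribution PB(s) exactly and measures how far it is from a data distribution Q. Real instances are far too large for this exact computation, which is exactly why efficient sampling is the bottleneck.

```python
# A tiny Boltzmann machine whose distribution P_B(s) = exp(-E(s)) / Z is
# computed exactly (only possible for a handful of units) and compared to
# a toy data distribution Q. All numbers are illustrative.
import itertools
import numpy as np

n = 3                                   # 3 binary units, s_i in {0, 1}
rng = np.random.default_rng(1)
W = np.triu(rng.normal(scale=0.5, size=(n, n)), k=1)  # couplings W_ij, i < j
b = rng.normal(scale=0.5, size=n)                     # biases b_i

def energy(s):
    s = np.asarray(s)
    return -(b @ s) - s @ W @ s

states = list(itertools.product([0, 1], repeat=n))
unnorm = np.array([np.exp(-energy(s)) for s in states])
P = unnorm / unnorm.sum()               # the Boltzmann distribution P_B(s)

Q = np.array([0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.35, 0.35])  # toy data distribution
kl = np.sum(Q * np.log(Q / P))          # how far the model is from the data
print("KL(Q || P) =", round(kl, 3))
```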

Page 17:

Training a Boltzmann machine on a Quantum Annealer

● In annealing, we deform H0 into HP.

● At the end of an ideal process, my system is in the ground state of HP. In practice, coupling to the environment means that higher energy levels are also populated: multiple levels are populated in a real annealing process.

● The population of energy levels at the end of an annealing process is controlled by the Boltzmann distribution.

● By preparing HP so that it encodes the energy functional E(s), I can prepare and efficiently sample PB(s).

● Once trained, my neural network can run on a standard computer.
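A sketch of the training update this enables, on a three-unit toy model with fully visible units (no hidden layer, for brevity) and synthetic data: the log-likelihood gradient needs the model expectations ⟨si sj⟩ and ⟨si⟩, which the proposal estimates from annealer samples; here an exact brute-force computation stands in for the annealer.

```python
# Gradient ascent on the log-likelihood of a tiny Boltzmann machine.
# The "model expectations" (the annealer's role in the proposal) are
# computed exactly here, which only works for a handful of units.
import itertools
import numpy as np

rng = np.random.default_rng(2)
n, lr, steps = 3, 0.1, 200
data = rng.integers(0, 2, size=(500, n))   # synthetic binary records
W = np.zeros((n, n))                        # couplings (upper triangle used)
b = np.zeros(n)                             # biases

states = np.array(list(itertools.product([0, 1], repeat=n)), dtype=float)

for _ in range(steps):
    # model expectations from the Boltzmann distribution (annealer's role)
    energies = -(states @ b) - np.einsum("ki,ij,kj->k", states, np.triu(W, 1), states)
    p = np.exp(-energies); p /= p.sum()
    model_ss = np.einsum("k,ki,kj->ij", p, states, states)
    model_s = p @ states
    # data expectations
    data_ss = data.T @ data / len(data)
    data_s = data.mean(axis=0)
    # update: dL/dW_ij = <s_i s_j>_data - <s_i s_j>_model, and similarly for b
    W += lr * np.triu(data_ss - model_ss, 1)
    b += lr * (data_s - model_s)

print("learned biases:", np.round(b, 2))
```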

Page 18:

The Team

A seasoned team, experts both in Quantum Computing and Finance, authors of the most visited papers in QC & F (8,000 visits, and growing…)

Page 19:

The TEAM

Enrique Lizaso (4), Román Orús (1,2,3,4), Samuel Mugel (4,5)

Authors of "Quantum computing for finance: overview and prospects", the most visited paper on Quantum Computing & Finance.

(1) Institute of Physics, Johannes Gutenberg University, 55099 Mainz, Germany; (2) Donostia International Physics Center, Paseo Manuel de Lardizabal 4, E-20018 San Sebastián, Spain; (3) Ikerbasque Foundation for Science, Maria Diaz de Haro 3, E-48013 Bilbao, Spain; (4) Quantum for Quants Commission, Quantum World Association, Barcelona, Spain; (5) The Quantum Revolution Fund, Carrer de l'Escar 26, 08039 Barcelona, Spain.

Page 20:

Extreme Quantum Computing

For Financial Companies: to earn more money, while reducing the risk