How Machine Learning is Changing the World
TRANSCRIPT
Deep Learning with TensorFlow
Emilio Garcia (@unindanachado)
Agenda
● Motivation
● Key concepts
  ○ AI, ML & DL
● Brief review of ANNs
  ○ Neurons and Layers
  ○ Activation and Loss Functions
  ○ Optimization
● Deep Learning
  ○ Convolutions
  ○ Architectures
● TensorFlow Basics
● Demo Time
Key Concepts
Artificial Intelligence: “any technique that enables computers to mimic human intelligence”
Machine Learning: “subset of AI that includes abstruse statistical techniques that enable machines to improve at tasks with experience”
Deep Learning: “algorithms that permit software to train itself to perform tasks”
Deep Learning ≈ Convolutional Neural Networks (its best-known family)
Little History of Neural Networks
1943: McCulloch, W. and Pitts, W. first introduced the idea of a neural network.
1958: Rosenblatt, F. introduced the perceptron. (Backpropagation was popularized later, by Rumelhart, Hinton & Williams in 1986.)
2006: Hinton, G. provided a radical new way to train deep neural networks.
Today: Graphics Processing Units (GPUs) allow programmers to train networks with many layers.
A Typical Neural Network

Input Layer → Hidden Layers (black box) → Output Layer

[input pattern] → [output pattern]
Both patterns are float arrays, e.g. [ -0.025, 0.23, 0.44 ] → [ 0.712, 0.471 ]

Many different architectures define the interaction between the input and the output layer.
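The slide's float-array example (a 3-value input pattern mapped to a 2-value output pattern) can be sketched as one matrix multiplication per layer. The hidden-layer size and the random weights below are illustrative assumptions, not trained values:

```python
import numpy as np

rng = np.random.default_rng(0)

# Input pattern: a float array, as on the slide.
x = np.array([-0.025, 0.23, 0.44])

# One hidden layer (4 units) and an output layer (2 units);
# the weights are random placeholders, not trained values.
W1, b1 = rng.normal(size=(3, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 2)), np.zeros(2)

hidden = np.tanh(x @ W1 + b1)                    # hidden activations
output = 1 / (1 + np.exp(-(hidden @ W2 + b2)))   # sigmoid outputs

print(output)  # a 2-value float array (untrained, so the values are arbitrary)
```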
Considerations
● Input data has to be normalized
● Most types of ANNs don’t care about order in training data
● Some others do care: BAM (Bidirectional Associative Memory)
● Certain types perform better in certain problem domains
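The first consideration above, input normalization, can be sketched with min-max scaling in NumPy (the feature values below are made up for illustration):

```python
import numpy as np

# Toy feature matrix: 4 samples, 3 features with very different ranges
# (values are made up for illustration).
X = np.array([
    [150.0, 0.2, 3000.0],
    [160.0, 0.5, 1000.0],
    [170.0, 0.9, 2000.0],
    [180.0, 0.4, 4000.0],
])

# Min-max normalization: scale each feature (column) into [0, 1]
# so no single feature dominates the weighted sums.
X_norm = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))

print(X_norm.min(), X_norm.max())  # 0.0 1.0
```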
Network Types & Problem Domains

[Table: network types rated from ● (fair) to ●●● (excellent) across problem domains: Clustering, Regression, Classification, Prediction, Robotics, Vision, and Optimization. Rows: Self-Organizing Map, Feedforward, Boltzmann Machine, Deep Belief Network, Deep Feedforward, Recurrent Network, Convolutional Network.]
Deep Learning and Neural Networks (Jeff Heaton)
Node, Neuron, Unit

[Diagram: inputs 1–3, each multiplied by its weight (weight 1–3), summed inside the neuron and passed through an activation function to produce the output.]

Neuron output = φ( Σᵢ xᵢ·wᵢ )
  x: inputs
  w: weights
  φ: activation function
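The neuron-output formula above can be sketched directly in NumPy; the example inputs, weights, and the choice of sigmoid as φ are illustrative assumptions:

```python
import numpy as np

def neuron_output(x, w, activation):
    """Compute φ(Σ xᵢ·wᵢ): the weighted sum of the inputs,
    passed through an activation function."""
    return activation(np.dot(x, w))

x = np.array([1.0, 0.5, -0.5])   # inputs
w = np.array([0.4, 0.6, 0.2])    # weights
sigmoid = lambda s: 1 / (1 + np.exp(-s))  # φ

print(neuron_output(x, w, sigmoid))  # sigmoid of 0.6
```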
Neuron Types

[Diagrams: a small feedforward network (inputs I1, I2, bias neurons B1–B3, hidden neurons N1, N2, output O1) and an Elman-style recurrent network in which context neurons copy the hidden activations (Hidden1, Hidden2 → Context1, Context2) and feed them back through weights w1–w6.]

Legend: Input, Output, Bias, Hidden, Context.
Activation Functions
Also called transfer functions, they establish bounds for the output of the neurons. Some of the most popular include:

Threshold: first used in the McCulloch & Pitts neuron model (1943).
Linear: commonly found in output layers of regression networks.
Activation Functions
Sigmoid: used to ensure that values are compressed between 0 and 1.
Hyperbolic Tangent: values range from -1 to 1 and the mean remains 0; antisymmetric activation functions yield faster convergence.
ReLU: a linear, non-saturating function.

The Softmax Activation Function
● Usually found in the output layer
● Represents the probability that the input falls into each class

softmax(z)ᵢ = exp(zᵢ) / Σⱼ exp(zⱼ)
  i: index of the output neuron
  j: indexes all neurons in the group
  z: array of output neurons
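The activation functions described above (sigmoid, tanh, ReLU, softmax) can each be sketched in a few lines of NumPy; subtracting the maximum inside softmax is a standard numerical-stability trick, not something stated on the slides:

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))   # compresses values into (0, 1)

def tanh(z):
    return np.tanh(z)             # range (-1, 1), zero-centered

def relu(z):
    return np.maximum(0.0, z)     # linear and non-saturating for z > 0

def softmax(z):
    e = np.exp(z - z.max())       # subtract max for numerical stability
    return e / e.sum()            # probabilities over the neuron group

z = np.array([2.0, 1.0, 0.1])
print(softmax(z))                 # non-negative values that sum to 1
```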
Bias
● The weights of the neuron allow us to adjust the slope or shape of the activation function.
● The bias shifts the activation curve (e.g. the sigmoid) left or right.

[Plots of f(x, weight, bias): varying the weight (f(x, 0.5, 0), f(x, 1.0, 0), f(x, 1.5, 0)) changes the slope; varying the bias (f(x, 1.0, 0.5), f(x, 1.0, 1.0), f(x, 1.0, 1.5)) shifts the curve.]
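The f(x, weight, bias) plots can be reproduced with a parameterized sigmoid; the exact functional form below (slope scaling plus a horizontal shift) is an assumption about what the plots show:

```python
import numpy as np

def f(x, weight, bias):
    """Sigmoid with adjustable slope (weight) and horizontal shift (bias)."""
    return 1 / (1 + np.exp(-weight * (x + bias)))

# The weight changes the steepness; the bias shifts the curve left/right:
print(f(0.0, 1.0, 0.0))   # 0.5: the standard sigmoid is centered at x = 0
print(f(0.0, 1.0, 1.5))   # > 0.5: a positive bias shifts the curve left
```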
What about convolutions?
“In image processing, a kernel, convolution matrix, or mask is a small matrix. It is useful for blurring, sharpening, embossing, edge detection, and more. This is accomplished by means of convolution between a kernel and an image.” (Wikipedia)
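The kernel-and-image convolution described in the quote can be sketched as a naive sliding-window loop in NumPy (the convolve2d helper and the toy 5×5 image are illustrative; real code would use an image-processing library):

```python
import numpy as np

def convolve2d(image, kernel):
    """Valid 2D cross-correlation of a grayscale image with a small kernel
    (what image-processing libraries commonly call 'convolution')."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Multiply the kernel against the window and sum the result.
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

# A classic 3x3 edge-detection kernel applied to a toy 5x5 "image".
image = np.arange(25, dtype=float).reshape(5, 5)
edge_kernel = np.array([[-1, -1, -1],
                        [-1,  8, -1],
                        [-1, -1, -1]], dtype=float)

result = convolve2d(image, edge_kernel)
print(result.shape)  # (3, 3): a 3x3 kernel shrinks a 5x5 image by 2 each way
```

On this image (a linear ramp) the edge detector returns all zeros, since a smooth gradient contains no edges.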
Deep Convolutional Neural Network
13 Layer CNN - Alex Krizhevsky (2012)
22 Layer CNN - GoogLeNet: Inception v1 (2014)
Learning to Refine Object Segments - Pedro O. Pinheiro
DeepMask and SharpMask
Demo Time https://github.com/raphsoft/samples/tree/master/meetup/santex-deeplearning
Other Real-World Applications
● Self-Driving Cars
● Medical Image Analysis
● Bioinformatics
● Industry:
  ○ Churn Prediction
  ○ Sentiment Analysis
  ○ Chatbots
  ○ Recommendation Systems
  ○ Financial Evaluation
● Politics
● Security
Questions
Recommended Material & Contact Info
Pattern Classification
Richard O. Duda
ISBN-13: 978-0471056690
ISBN-10: 0471056693
Personal (Work and Academic):[email protected]@pucp.edu.pe
GRPIAA:http://inform.pucp.edu.pe/~grpiaa/ https://www.facebook.com/grpiaa
Thanks!
We support WarmiLab, join us!
https://www.facebook.com/WarmiLab