from web 2 to web 3

DR ASHER IDAN, 0505288739. FROM WEB 2.0 TO WEB 3.0

Upload: asher-idan

Post on 17-May-2015


Category:

Technology



DESCRIPTION

From social networks to artificial neural networks: how neuromorphic computation will solve the big problems of Big Data and the Internet of Things in the age of post-programming.

TRANSCRIPT

Page 1: From web 2 to web 3

DR ASHER IDAN, 0505288739

FROM WEB 2.0 TO WEB 3.0

Page 2: From web 2 to web 3

THE THREE BASIC LAWS OF

THE SOCIAL NETWORKS

Page 3: From web 2 to web 3

Users   Links
2       1
3       3
4       6
10      50
N       N*N/2

1. Metcalfe Law: The value of a telecommunications network is proportional to the square of the number of connected users of the system (n²).
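The link counts in the table above follow directly from the law; a short sketch (the function names are mine, chosen for illustration) compares the exact count of pairwise links with the n²/2 approximation that Metcalfe's Law rests on:

```python
def exact_links(n):
    """Exact number of undirected links among n fully connected users: n(n-1)/2."""
    return n * (n - 1) // 2

def metcalfe_estimate(n):
    """Metcalfe-style approximation: links, and hence value, grow like n^2 / 2."""
    return n * n // 2

# 2 users -> 1 link, 3 -> 3, 4 -> 6; at n = 10 the n^2/2 estimate gives 50.
table = [(n, exact_links(n), metcalfe_estimate(n)) for n in (2, 3, 4, 10)]
```

The two columns diverge only by the linear term n/2, which is why the law is usually stated simply as "value grows with n²".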

Page 4: From web 2 to web 3

2. Shirky Law: The transaction costs and the collaboration costs between individuals become much lower than the organizational costs.

Page 5: From web 2 to web 3

3. Axelrod Law: The trust between users is proportional to the number and frequency of the iterations between them.
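Axelrod's law comes out of his tournaments of the iterated prisoner's dilemma. A minimal sketch (the payoff values and strategy names are my own illustrative choices, not from the slides) shows why trust needs repetition:

```python
# Standard prisoner's dilemma payoffs for the row player (T=5 > R=3 > P=1 > S=0).
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def tit_for_tat(partner_history):
    """Cooperate on the first move, then mirror the partner's previous move."""
    return "C" if not partner_history else partner_history[-1]

def always_defect(partner_history):
    """Never cooperate."""
    return "D"

def score(strategy_a, strategy_b, rounds):
    """Total payoff to player A over repeated iterations with player B."""
    a_hist, b_hist, total = [], [], 0
    for _ in range(rounds):
        a = strategy_a(b_hist)
        b = strategy_b(a_hist)
        a_hist.append(a)
        b_hist.append(b)
        total += PAYOFF[(a, b)]
    return total
```

In a single round defection wins (5 against 3), but over ten iterations two tit-for-tat players earn 30 each while a defector facing tit-for-tat earns only 14: the more iterations, the more cooperation, i.e. trust, pays.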

Page 6: From web 2 to web 3

Web 3.0: Big Data and Neural Networks

Page 7: From web 2 to web 3

Deep-learning software attempts to mimic the activity in layers of neurons in the neocortex, the wrinkly 80 percent of the brain where thinking occurs. The software learns, in a very real sense, to recognize patterns in digital representations of sounds, images, and other data.

Page 8: From web 2 to web 3

Google has been working on ways to use machine learning and deep neural networks to solve some of the toughest problems Google has, such as:

1. Natural language processing
2. Speech recognition
3. Computer vision
4. Ranking

Page 9: From web 2 to web 3

A program maps out a set of virtual neurons and then assigns random numerical values, or “weights,” to connections between them. These weights determine how each simulated neuron responds—with a mathematical output between 0 and 1—to a digitized feature such as an edge or a shade of blue in an image, or a particular energy level at one frequency in a phoneme, the individual unit of sound in spoken syllables
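The paragraph above condenses to a few lines of code. This is a hypothetical sketch, not any particular system's implementation; the logistic function is one common way to produce the "output between 0 and 1":

```python
import math
import random

def neuron_output(features, weights, bias=0.0):
    """Simulated neuron: weighted sum of input features, squashed into (0, 1)."""
    z = sum(x * w for x, w in zip(features, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))  # logistic squashing function

random.seed(0)
features = [0.8, 0.1, 0.4]  # e.g. an edge, a shade of blue, energy at one frequency
weights = [random.uniform(-1.0, 1.0) for _ in features]  # random initial "weights"
response = neuron_output(features, weights)
```

With zero net input the neuron sits exactly at 0.5, and any weighted input maps to a response strictly between 0 and 1.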

Page 10: From web 2 to web 3

Programmers would train a neural network to detect an object or phoneme by blitzing the network with digitized versions of images containing those objects or sound waves containing those phonemes.

If the network didn’t accurately recognize a particular pattern, an algorithm would adjust the weights. The eventual goal of this training was to get the network to consistently recognize the patterns in speech or sets of images that we humans know as, say, the phoneme “d” or the image of a dog.

This is much the same way a child learns what a dog is by noticing the details of head shape, behavior, and the like in furry, barking animals that other people call dogs.
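The adjust-the-weights loop described above can be sketched with a single logistic neuron. The toy task, learning rate, and epoch count are my own choices for illustration; real recognizers use many neurons in many layers:

```python
import math

def predict(x, w):
    """Network response to input features x: a value between 0 and 1."""
    z = sum(xi * wi for xi, wi in zip(x, w))
    return 1.0 / (1.0 + math.exp(-z))

def train(samples, labels, w, lr=0.5, epochs=200):
    """Blitz the network with labelled examples, nudging weights on each error."""
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            err = y - predict(x, w)  # mismatch between label and current response
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
    return w

# Toy task: recognise "first feature present" (second feature acts as a bias input).
samples = [[1.0, 1.0], [0.0, 1.0]]
labels = [1.0, 0.0]
w = train(samples, labels, [0.0, 0.0])
```

After training, the network responds strongly to the pattern it was shown with label 1 and weakly to the other, which is exactly the "consistently recognize the pattern" goal the text describes.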

Page 11: From web 2 to web 3

Neural nets (networks of functions that behave like neurons in the human brain) have been around for a long time, since the late '60s, but they're coming back into vogue for several reasons.

1. Neural nets, especially deep ones, build features that describe the data well automatically, without humans having to get involved.

2. There's a lot more computational power available.

3. There's a lot more labeled data.

4. People have figured out how to train very deep networks. Until four or five years ago, it was impossible to get more than about a three-layer network to train well: since each computed neuron is a non-linear function, the output gets more and more irregular as you go deeper, and the optimization process becomes very difficult the deeper the network is. But people have now figured out ways around that. You can pre-train on the first layer, do your optimization there, get it into a good state, and then add a layer. You can do it layer by layer now.
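Why depth made optimization so hard can be seen numerically: the training signal is a product of one derivative per layer, and a sigmoid neuron's derivative is at most 0.25, so the signal shrinks geometrically with depth. A simplified sketch with one neuron per layer and unit weights (my own illustrative setup):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def grad_through_layers(depth, x=0.0, weight=1.0):
    """Chain-rule gradient of `depth` stacked sigmoid neurons w.r.t. the input."""
    grad = 1.0
    a = x
    for _ in range(depth):
        a = sigmoid(weight * a)
        grad *= weight * a * (1.0 - a)  # sigma'(z) = sigma(z) * (1 - sigma(z))
    return grad
```

One layer passes a gradient of 0.25 at most; by ten layers the gradient has all but vanished, which is why layer-by-layer pre-training (getting each layer into a good state before adding the next) was such an important workaround.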

Page 12: From web 2 to web 3

Neuromorphic Computing

1. Low power consumption (human brains use about 20 watts, whereas the supercomputers currently used to try to simulate them need megawatts);

2. Fault tolerance (losing just one transistor can wreck a microprocessor, but brains lose neurons all the time);

3. A lack of need to be programmed (brains learn and change spontaneously as they interact with the world, instead of following the fixed paths and branches of a predetermined algorithm).

Page 13: From web 2 to web 3

Money is starting to be thrown at the question:

1. The European Human Brain Project has a €1 billion ($1.3 billion) budget over a decade. http://www.humanbrainproject.eu/

2. The American BRAIN Initiative's first-year budget is $100m. http://www.nih.gov/science/brain/index.htm

Page 14: From web 2 to web 3

Two of the most advanced neuromorphic programmes are being conducted under the auspices of the Human Brain Project (HBP):

1. One, called SpiNNaker, is a digital computer, i.e. the sort familiar in the everyday world, which processes information as a series of ones and zeros represented by the presence or absence of a voltage. It thus has at its core a network of bespoke microprocessors. To test the idea, they built a version two years ago that had a mere 18 processors. They are now working on a much bigger one: their 1m-processor machine is due for completion in 2014. With that number of chips, Dr Furber reckons, he will be able to model about 1% of the human brain.

2.The other machine, Spikey, harks back to an earlier age of computing. Several of the first computers were analogue machines. These represent numbers as points on a continuously varying voltage range—so 0.5 volts would have a different meaning to 1 volt and 1.5 volts would have a different meaning again. In part, Spikey works like that. Analogue computers lost out to digital ones because the lack of ambiguity a digital system brings makes errors less likely. But Dr Meier thinks that because they operate in a way closer to some features of a real nervous system, analogue computers are a better way of modelling such features.

Page 15: From web 2 to web 3

Boeing and General Motors

Narayan Srinivasa, the project’s leader, says his neuromorphic chip requires not a single line of programming code to function. Instead, it learns by doing, in the way that real brains do.

An important property of a real brain is that it is what is referred to as a small-world network. Each neuron within it has tens of thousands of synaptic connections with other neurons. This means that, even though a human brain contains about 86 billion neurons, each is within two or three connections of all the others via myriad potential routes.
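The few-hops claim can be checked on a toy small-world graph. The sketch below uses a Watts-Strogatz-style construction; the node and link counts are my own choices and are vastly smaller than a brain's:

```python
import random
from collections import deque

def small_world(n, k, p, seed=42):
    """Ring lattice of n nodes, k neighbours each, with fraction p of links
    rewired to random long-range targets (Watts-Strogatz style)."""
    rng = random.Random(seed)
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(1, k // 2 + 1):
            a, b = i, (i + j) % n
            if rng.random() < p:      # rewire this link to a random distant node
                b = rng.randrange(n)
            if a != b:
                adj[a].add(b)
                adj[b].add(a)
    return adj

def avg_hops(adj, sources=50, seed=1):
    """Mean BFS distance from a sample of source nodes to every reachable node."""
    rng = random.Random(seed)
    total, count = 0, 0
    for s in rng.sample(list(adj), sources):
        dist, queue = {s: 0}, deque([s])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total += sum(dist.values())
        count += len(dist) - 1
    return total / count

ring_hops = avg_hops(small_world(1000, 10, 0.0))   # pure local links
sw_hops = avg_hops(small_world(1000, 10, 0.1))     # 10% long-range links
```

Rewiring just a tenth of the local links collapses the average hop count from tens of hops to a handful, which is the small-world effect that keeps every neuron within a few connections of all the others.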

Page 16: From web 2 to web 3

The other SyNAPSE project is run by Dharmendra Modha at IBM’s Almaden laboratory in San Jose. In collaboration with four American universities (Columbia, Cornell, the University of California, Merced and the University of Wisconsin-Madison),

he and his team have built a prototype neuromorphic computer that has 256 “integrate-and-fire” neurons—so called because they add up (ie, integrate) their inputs until they reach a threshold, then spit out a signal and reset themselves.

In this they are like the neurons in Spikey, though the electronic details are different because a digital memory is used instead of capacitors to record the incoming signals.
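An integrate-and-fire unit is simple enough to sketch in a few lines; the threshold and input values here are arbitrary illustrative choices, not IBM's parameters:

```python
def integrate_and_fire(inputs, threshold=1.0):
    """Integrate-and-fire neuron: add up inputs over time; when the accumulated
    potential reaches the threshold, emit a spike and reset to zero."""
    potential, spikes = 0.0, []
    for t, x in enumerate(inputs):
        potential += x               # integrate the incoming signal
        if potential >= threshold:   # threshold reached: fire...
            spikes.append(t)
            potential = 0.0          # ...and reset
    return spikes
```

Feeding a constant input of 0.4 per time step makes the unit fire every third step: the potential climbs 0.4, 0.8, 1.2, crosses the threshold, spikes, and resets.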

Page 17: From web 2 to web 3
Page 18: From web 2 to web 3

From Web 2.0 to Webs 3.0 and 4.0: Swarms of things

Page 19: From web 2 to web 3
Page 20: From web 2 to web 3

Quantum

Page 21: From web 2 to web 3
Page 22: From web 2 to web 3
Page 23: From web 2 to web 3
Page 24: From web 2 to web 3
Page 25: From web 2 to web 3

The Molecular Level: Web 4.0

Nanotech

+

Biotech

+

Neurotech

Page 26: From web 2 to web 3

Nanotech 1: Pizza iPhone video

Page 27: From web 2 to web 3

Nanotech 2: From PC to PM (Personal Manufacturing). Baby video, 5th Element

Page 28: From web 2 to web 3

Neurotech: Cyberkinetics' BrainGate brain-computer interface consists of a computer chip that is a 2-mm-by-2-mm, 100-electrode array. Surgeons attach the array, like Velcro, to neurons in the motor cortex. The electrodes send information from 50 to 150 neurons at once, traveling through a fiber-optic cable to a device about the size of a VHS tape (seen on the back of the wheelchair) that digitizes the neuronal signals. Another cable from the digitizer runs to a computer that translates the signal.

Page 29: From web 2 to web 3
Page 30: From web 2 to web 3
Page 31: From web 2 to web 3

Biotech

Page 32: From web 2 to web 3

Web 5.0

Microtubules are protein structures found within cells. They have a diameter of ~24 nm and vary in length from several micrometers to possibly millimeters in the axons of nerve cells. Roger Penrose has proposed a theory of the quantum mind in which the hollow cores of microtubules inside neurons form an environment capable of supporting quantum-scale information processing and conscious awareness.

Page 33: From web 2 to web 3

From Disk on Key to Hospital on Key: An Integrated Digital Microfluidic Lab-On-A-Chip for Clinical Diagnostics on Human Physiological Fluids