Post on 03-Jan-2016


Scaleable Ubiquitous Systems

“3 should be enough – 1 or 2 in the US and 1 in England”… (OK, Scotland)…

Scaleable Ubiquitous Systems Research

Some CS Systems Grand Challenges

Jon.crowcroft@cl.cam.ac.uk
http://www.cl.cam.ac.uk/~jac22

Grand Challenges

• UK CRC (CPHC, RC, BCS, IEE) prompted by US

• List 100s of… select >= 0
• CPHC Conf in Spring 04 will say more
• On Web Site… (google…)
• Multipurpose – just get $? Well, no – but
• E.g. put a robot on Mars, bring it back alive…

This challenge…

• Computers will vanish… into the material world
• Need to be as robust or predictable as a table or chair is, or more so…
• Will be networked (embedded systems to date are not)
• Like insects, there are as many of these per person as there are people
• i.e. Internet ^ 2, or an Internet of Interstitial Internets

Let’s scope the problem to 10 years

• So we don’t go mad
• What can we realistically achieve?
• Btw, pervasive, ambient, sentient, ubiquitous, vanishing, invisible computing is not a new “challenge”

• It may not be a “challenge” per se at all: for example, merely sustaining Moore’s Law (and equivalent for storage and network capacity) is not regarded as a challenge by other science and engineering disciplines, although we all know it is

Ubiquity & Environment 1

Assumptions – extend scale down and up

First, down:
• Epsilon power
• Limited traditional resources (comms, cpu, memory)
• Very low (in range and variability) MTBF, high MTTR of components
• But: do assume a large “infrastructure” nearby
• (i.e. not, absolutely NOT, ad hoc self-org)

Ubiquity & Environment 2

Assumptions – extend scale down and up

And then up:

• 1000 powered devices per person

• Terabytes of store per person per year

• Fixed network capacity Gbps per person 24*7

• MTBF of 100 years for line, disk, cpu – fault-tolerance models need to be extended
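A rough order-of-magnitude sketch of what the assumptions above imply (the 10^12 node count is the scale this challenge targets; all figures are illustrative):

```python
# Back-of-envelope: at ubiquitous scale (~10^12 nodes in total),
# even a component MTBF of 100 years yields a torrent of failures.
SECONDS_PER_YEAR = 365.25 * 24 * 3600        # ~3.156e7 s
nodes = 1e12
mtbf_seconds = 100 * SECONDS_PER_YEAR        # per line / disk / cpu

failures_per_second = nodes / mtbf_seconds
print(f"{failures_per_second:.0f} component failures every second")
# → prints "317 component failures every second"
```

Hundreds of failures per second, system-wide, as the steady state – which is why classical “failure is the exception” fault-tolerance models do not carry over.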

Criteria for Ubiquitous Systems

• Programmable by “naïve” user
• Operate continuously in evolving environment
• Scale to 10^12 nodes – measure (IMW) ubiq.
• Trustworthy (“proof of behaviour” carrying code) – i.e. use Theory – e.g. stochastic process algebras
• Sustainable, Adaptable, Multipurpose
• Concept of Transparencies revised from ODP… accuracy + certainty derived from… theory of… expresses principles (e.g. e2e, parsimony, soft state, etc etc…)

Benefits of Ubiquitous Systems

• Empowers some users (c.f. normal humans)
• Empowers some applications
• i.e. economic in some sense, although for some (e.g. support for field work, whether science, medicine or what have you) not doable otherwise

• Incorporate range of intermediaries for pipelines of transforms (lego like) etc

• Lots of Human Computer Interaction research required

Needs from CS & other disciplines

• Theory – need to accommodate a theory of ubi systems – e.g. location, reliability; need to work between network calculi and process calculi

• Users need to tell us what the new models of transparencies are!!!

• exemplars act as a clock for systems work…

Information Theory applies

• Code must be executed redundantly (but not greedily!!)
• Data must be stored (and moved) in some FEC manner (c.f. eternity/pastry etc) (DHT, Rabin Fingerprint, stripe/wraid)
• Transmission needs to deal with lowering overheads, adapting to circumstance and converging in determined time (e.g. routing etc)
• Aggressive use of intermediaries to cache, transform –
• Power, location, identity and ownership are first-class members of (meta) data, code, node, etc
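As a minimal illustration of the “stripe/wraid” idea, here is a single-erasure sketch in Python – plain XOR parity, a stand-in for the Rabin-fingerprint/DHT machinery named above – so any one lost stripe can be rebuilt:

```python
from functools import reduce

def stripe_with_parity(data, k=4):
    """Split data into k equal stripes plus one XOR parity stripe.
    Any single lost stripe can then be rebuilt from the other k."""
    data += b"\x00" * ((-len(data)) % k)            # pad to a multiple of k
    size = len(data) // k
    stripes = [data[i * size:(i + 1) * size] for i in range(k)]
    parity = bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*stripes))
    return stripes + [parity]

def recover(stripes):
    """Rebuild the one missing stripe (marked None) as XOR of the rest."""
    missing = stripes.index(None)
    present = [s for s in stripes if s is not None]
    stripes[missing] = bytes(reduce(lambda a, b: a ^ b, col)
                             for col in zip(*present))
    return stripes
```

Losing any one of the five pieces (four data stripes or the parity) still lets a reader reassemble the original bytes; a real deployment would use a stronger erasure code tolerating many simultaneous losses.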

System is reflective

• Can monitor and reason about (and report) its own behaviour and esp. performance (e.g. accuracy, timeliness etc) and failures

• Can reason about trust in system from h/w up (viz. TCB will be here before midway through GC)

• Goal is Highly Optimized Tolerance (see Doyle) – we cannot afford phone net engineering

• Control theory needs to be extended
• Perhaps use automated reasoning etc etc
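A toy sketch of the reflective idea (names and metrics are my own, purely illustrative): a wrapper that lets a component record and report on its own latency and failure behaviour.

```python
import time
from collections import deque

class Reflective:
    """Wrap a callable so it monitors its own performance and failures."""
    def __init__(self, fn, window=100):
        self.fn = fn
        self.latencies = deque(maxlen=window)   # sliding window of call times
        self.failures = 0

    def __call__(self, *args, **kwargs):
        start = time.perf_counter()
        try:
            return self.fn(*args, **kwargs)
        except Exception:
            self.failures += 1                  # count, then re-raise
            raise
        finally:
            self.latencies.append(time.perf_counter() - start)

    def report(self):
        """The system can reason about (and report) its own behaviour."""
        n = len(self.latencies)
        return {
            "calls": n,
            "failures": self.failures,
            "mean_latency_s": sum(self.latencies) / n if n else 0.0,
        }
```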

Other problems…

• Couple of other thoughts about meta-problems
• Need location of data and code as 1st-class tag; but include provenance (principles/capabilities) in some (many?) applications to feed into user confidence. TRUST

• Need some sort of system economics (of combined data, processing and communications) AFFORDABILITY

Some bits we are doing…

1. Economical Security - stochastic mechanism design!

2. Containment - “not-quite” location based computing

3. Mobility Models and IMW for Ubi

4. Mixed Reality Processing Models

5. Net/OS

1 Economics bit…

• Mechanism design deals with aligning incentives
• But often assumes perfect information in market
• We have a model for providerless networks, using dual algorithm (c.f. Kelly, tomorrow)
• For battery, transmit, store, cpu etc…
• Needs more theory to cope with uncertainty, otherwise not strategy-proof (or needs possibly complex witness/enforcement system)
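A minimal sketch of the dual (price-based) allocation, after Kelly’s rate-control framework: users have log utilities w_i·log(x_i), and the price of a shared resource (battery, transmit, store, cpu…) rises when demand exceeds capacity. All parameter values here are illustrative, not from the slides.

```python
def dual_allocate(weights, capacity, gamma=0.01, iters=5000):
    """Iterate the dual: price adjusts on excess demand, users best-respond."""
    p = 1.0                                    # shadow price of the resource
    for _ in range(iters):
        demand = sum(w / p for w in weights)   # best response x_i = w_i / p
        p = max(1e-9, p + gamma * (demand - capacity))
    return [w / p for w in weights]

# Three users sharing one resource budget of 8 units.
alloc = dual_allocate(weights=[1.0, 2.0, 1.0], capacity=8.0)
# At equilibrium, allocations are proportional to weights and sum to ~capacity.
```

The "needs more theory" bullet is exactly the gap this sketch exposes: the update assumes truthfully reported demand, so with uncertainty or strategic users it stops being strategy-proof.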

2 Containment

• We attach containment (position relative to physical surroundings) as a first class attribute of all objects (OS, Device)

• All I/O is typed w.r.t. transmission media (IR, RF, Visible, Audio, etc), and so are containers

• Seems quite nice model for reasoning about privacy for example
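One way the model could look in code (a sketch under my own assumptions – the names and media sets are illustrative, not the actual system): containers are typed by which media their physical boundary confines, and a privacy check asks whether an emission can escape.

```python
from dataclasses import dataclass

@dataclass
class Container:
    name: str
    confines: set                       # media this boundary blocks
    parent: "Container | None" = None   # containment hierarchy

@dataclass
class Device:
    name: str
    container: Container

def escapes(dev, medium, outside=None):
    """Can an emission on `medium` from dev reach `outside` (None = open world)?
    It escapes iff no boundary on the way confines that medium."""
    c = dev.container
    while c is not None and c is not outside:
        if medium in c.confines:
            return False                # blocked at this physical boundary
        c = c.parent
    return True

# Walls block IR/Visible; the building shell also damps Audio; RF passes.
building = Container("building", confines={"IR", "Visible", "Audio"})
office = Container("office", confines={"IR", "Visible"}, parent=building)
badge = Device("badge", container=office)
```

In this toy model an RF emission from the badge escapes to the world while an IR one stays in the office – exactly the kind of privacy reasoning the typing is meant to enable.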

3 Mobility Models

• Most papers on mobile use an extremely naïve mobility model (random)
• We have data from road and pedestrian monitors (real-time (1 sec) road data every 500 yards of the whole UK; pedestrian – anonymized film of streets of much of London with head tracking!)

• …but it’s rather a lot to process right now!
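For reference, the naïve random model the bullet complains about – random waypoint, sketched here – produces none of the structure (repeated routes, rush hours) that real road and pedestrian traces show:

```python
import math
import random

def random_waypoint(steps, size=1000.0, speed=1.5, seed=0):
    """Node repeatedly picks a uniform destination in a size×size square
    and walks straight toward it at constant speed."""
    rng = random.Random(seed)
    x, y = rng.uniform(0, size), rng.uniform(0, size)
    tx, ty = rng.uniform(0, size), rng.uniform(0, size)
    path = [(x, y)]
    for _ in range(steps):
        dx, dy = tx - x, ty - y
        d = math.hypot(dx, dy)
        if d <= speed:                      # arrived: pick a new waypoint
            x, y = tx, ty
            tx, ty = rng.uniform(0, size), rng.uniform(0, size)
        else:
            x, y = x + speed * dx / d, y + speed * dy / d
        path.append((x, y))
    return path

path = random_waypoint(1000)    # one node, 1000 seconds of movement
```

Trace-driven models would replace the uniform waypoint draw with positions read from the monitoring data.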

4 Mixed Reality Processing

• Esp. good for sensor nets
• Embed physical model in high-level program
• Position/locate sensors – compile program into set of functions for delta encoding
• Note most physical signals travel at speed << c, but the net can propagate info (via directed diffusion) at c, so we can just compute the difference on the model and only send a delta above threshold – nice for handling impulses and congestion
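The delta scheme can be sketched as follows: sensor and sink run the same cheap physical model (here a stand-in exponential relaxation toward ambient temperature – my assumption, not the slides’ physics), and only readings that diverge from the model’s prediction by more than a threshold are transmitted.

```python
def predict(prev, ambient=20.0, alpha=0.1):
    """Shared model, run at both sensor and sink: relax toward ambient."""
    return prev + alpha * (ambient - prev)

def deltas_to_send(readings, threshold=0.5):
    """Return the (index, delta) pairs the sensor actually transmits."""
    sent = []
    model = readings[0]          # both ends start from a known reading
    for i, r in enumerate(readings[1:], 1):
        model = predict(model)
        if abs(r - model) > threshold:
            sent.append((i, r - model))
            model = r            # sink resynchronises its model copy
    return sent

# Readings that follow the model exactly cost zero transmissions;
# an impulse triggers a delta at the point of divergence.
calm = [30.0]
for _ in range(10):
    calm.append(predict(calm[-1]))
bumped = list(calm)
bumped[5] += 3.0                 # an impulse the model cannot predict
```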

5 Operating System & Comms “Stack”

• What is minimal unit we “boot” – pico/nano hypervisor (xen) kernel

• What is the VM of a set or subset of nodes that presents a traditional “computer” to the world? (i.e. is hierarchical decomposition of function still OK, or do we need a new model, where components in a graph combine to provide an overall service – e.g. store an object, deliver a sense… etc?)

• Operating Systems need to be revisited (symbian exploded?)

Exemplars

• See: sentient building, street, hospital, city, planet, solar system, etc
– Path lab with sensors
– 0 road deaths
– Cradle-to-grave health record indexed by location and time (epidemiology, evidence-based health, pathology, remote intervention)

– As scale increases, start to move to 100% environment awareness (see Snow Crash), and beyond

Conclusions

• “The future is bright, the future is…”
• Not Windows, IP, Intel
• Whole new business in processor, OS, network – Science (and then engineering)
• More importantly, whole new models for systems, including a science, not just engineering, and including people, not just technology.
