TRANSCRIPT
SE 435 – Distributed Systems
Marshalling – Background Principles
Version 2.0
Clark Elliott, DePaul University
Preliminary slides cut from excellent materials at:
http://www.cs.wpi.edu/~fcco/classes/cs4515-2005/lectures/cs4515-06.ppt
By Fernando C. Colon Osorio
Augmented by Elliott
Some Essential Background Principles
From transistors to memory
Binary arithmetic
Designing the ALU – arithmetic and logic, not just arithmetic
From bit strings to architectures
Process memory is only on loan
Processes, context switching, symbol tables
Marshalling
Summaries
Instruction Register with Clock
Let the register settle, then pull the trigger
[Slide diagram: the fetch/decode/execute cycle. The memory unit (stored program) fetches one instruction; the decode unit (instruction box, IBOX) decodes one instruction; the arithmetic & logic unit / execute unit (ALU, EBOX) executes one instruction. The units are connected by BUS A and BUS B, and the stages proceed at T0, T0 + 1 cycle, T0 + 2 cycles, and T0 + 3 cycles.]
What’s Missing?
I/O: both input and output
Bits are just bits -- they have no inherent meaning. Conventions define the relationship between bits and numbers, bits and characters, etc.
1010 could mean one eight and one two.
1010 could mean one two, but it is negative.
1010 could mean zero eights, one four, zero twos, and one one.
1010 could mean the letter “a”.
1010 could mean…
That is, until we lay some interpretation over a string of bits, it has no value to us.
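To make this concrete, here is a small sketch (in Python, which is an assumption of this note; the slides name no language) giving bit patterns several of the interpretations listed above:

```python
bits = 0b1010  # the same four-bit pattern throughout

# Unsigned binary: one eight and one two
unsigned = bits                                               # 10

# Sign-magnitude: the high bit is a sign, so 1010 means "negative 010"
sign_magnitude = -(bits & 0b0111) if bits & 0b1000 else bits  # -2

# Two's complement: the high bit carries weight -8
twos_complement = bits - 16 if bits & 0b1000 else bits        # -6

# Read the bits low-order first: one one, zero twos, one four, zero eights
reversed_bits = int(format(bits, "04b")[::-1], 2)             # 5

# A different pattern, 0110 0001, is the letter "a" under the ASCII convention
letter = chr(0b01100001)
```

Nothing about the pattern itself chooses among these readings; the convention does.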
Symbolic Values
Binary numbers (base 2): 0000, 0001, 0010, 0011, 0100, 0101, 0110, 0111, 1000, 1001, ...
Decimal: 0, 1, 2, 3, ..., 2^n – 1 [in this case, n = 4]
Of course it gets more complicated: numbers are finite (overflow); fractions and real numbers (introduce floating point); negative numbers.
So, how do we, e.g., represent negative numbers? That is, which bit patterns will represent which numbers?
Numbers
32 bit signed numbers:
0000 0000 0000 0000 0000 0000 0000 0000two = 0ten
0000 0000 0000 0000 0000 0000 0000 0001two = + 1ten
0000 0000 0000 0000 0000 0000 0000 0010two = + 2ten
...
0111 1111 1111 1111 1111 1111 1111 1110two = + 2,147,483,646ten
0111 1111 1111 1111 1111 1111 1111 1111two = + 2,147,483,647ten
1000 0000 0000 0000 0000 0000 0000 0000two = – 2,147,483,648ten
1000 0000 0000 0000 0000 0000 0000 0001two = – 2,147,483,647ten
1000 0000 0000 0000 0000 0000 0000 0010two = – 2,147,483,646ten
...
1111 1111 1111 1111 1111 1111 1111 1101two = – 3ten
1111 1111 1111 1111 1111 1111 1111 1110two = – 2ten
1111 1111 1111 1111 1111 1111 1111 1111two = – 1ten
MIPS
max integer: + 2,147,483,647ten
min integer: – 2,147,483,648ten
Negating a two's complement number: invert all bits
and add 1
remember: “negate” and “invert” are quite different!
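The invert-and-add-1 rule can be sketched directly (Python; the 32-bit mask mirrors the word size used above):

```python
def negate32(x):
    """Two's complement negation: invert all bits (one's complement), then add 1."""
    return (~x + 1) & 0xFFFFFFFF  # keep the result in a 32-bit word

# Inverting alone is NOT negation; the +1 completes it.
# negate32(2) yields the bit pattern of -2 (0xFFFFFFFE),
# and negating the pattern of -1 (all ones) yields 1.
```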
Converting n-bit numbers into numbers with more than n bits:
On the VAX-11, Alpha, MIPS, and most other ISAs, a 16-bit immediate gets converted to 32 bits for arithmetic.
Copy the most significant bit (the sign bit) into the other bits:
0010 -> 0000 0010
1010 -> 1111 1010
This is "sign extension" (lbu vs. lb).
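A sketch of both extensions for a 16-bit value (Python; the function names are illustrative, not from any ISA manual):

```python
def sign_extend_16_to_32(x):
    """Copy the sign bit (bit 15) into bits 16..31, as a signed load (lb/lh) does."""
    x &= 0xFFFF
    return x | 0xFFFF0000 if x & 0x8000 else x

def zero_extend_16_to_32(x):
    """Fill the upper bits with zeros, as an unsigned load (lbu/lhu) does."""
    return x & 0xFFFF
```

The two loads differ only for values whose top bit is set, which is exactly the lbu vs. lb distinction.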
Two's Complement Operations
Just like in grade school (carry/borrow 1s):

  0111      0111      0110
+ 0110    - 0110    - 0101

Two's complement makes operations easy: subtraction uses addition of the negative number:

  0111
+ 1010

Overflow (result too large for the finite computer word): e.g., adding two n-bit numbers does not always yield an n-bit number:

  0111
+ 0001
  1000

Note that the term "overflow" is somewhat misleading; it does not mean a carry "overflowed".
Addition & Subtraction
Overflow
Two positive numbers can give a result that is negative.
Examples: 7 + 3 = 10, but ...    – 4 – 5 = – 9, but ...
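The sign-based rule can be checked mechanically. A sketch for 4-bit words (Python; the width is chosen to match the 7 + 3 and –4 – 5 examples):

```python
def add4(a, b):
    """Add two 4-bit two's complement patterns and detect overflow.

    Overflow occurs exactly when both operands have the same sign
    but the result's sign differs from theirs."""
    result = (a + b) & 0xF                        # keep only 4 bits
    sa, sb, sr = a & 0x8, b & 0x8, result & 0x8   # the three sign bits
    overflow = (sa == sb) and (sr != sa)
    return result, overflow

# 7 + 3: two positives give the negative pattern 1010, so overflow.
# -4 + -5 (1100 + 1011): two negatives give the positive pattern 0111, so overflow.
```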
2’s complement:

Decimal  Binary    Binary  Decimal
0        0000      0000     0
1        0001      1111    -1
2        0010      1110    -2
3        0011      1101    -3
4        0100      1100    -4
5        0101      1011    -5
6        0110      1010    -6
7        0111      1001    -7
                   1000    -8
  0111    ( 7)
+ 0011    ( 3)
------
  1010    (– 6, not 10: overflow)

  1100    (– 4)
+ 1011    (– 5)
------
1 0111    ( 7, not – 9: overflow; the carry out of 1 is discarded)
Recall:
Key arithmetic: 1two + 1two = 10two, or 0two and carry the 1two.
Consider a logic function to implement the above. See the logic table:

A  B  C
0  0  0
0  1  1
1  0  1
1  1  0   (and carry the 1)

Show an implementation consisting of inverters, AND, and OR gates.
Let’s Design an ALU
  0101      0011
+ 0011    + 1111
  1000      0010

5ten + 3ten = 8ten        3ten – 1ten = 3ten + (– 1ten) = 2ten
A  B  Sum  Carry (cout)
0  0  0    0
0  1  1    0
1  0  1    0
1  1  0    1
Let’s Design an ALU
Sum:
[Slide diagram: a 1-bit adder with inputs a and b, and outputs Sum and Cout.]
sum = (NOT a)·b + a·(NOT b) = a XOR b
cout = a·b
A  B  cin  Sum  Carry (cout)
0  0  0    0    0
0  1  0    1    0
1  0  0    1    0
1  1  0    0    1
0  0  1    1    0
0  1  1    0    1
1  0  1    0    1
1  1  1    1    1
Let’s Design an ALU
Sum:
[Slide diagram: a 1-bit full adder with inputs a, b, and Cin, and outputs Sum and Cout.]
sum = (NOT a)·(NOT b)·cin + (NOT a)·b·(NOT cin) + a·(NOT b)·(NOT cin) + a·b·cin
    = a XOR b XOR cin
cout = a·b + a·cin + b·cin
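The sum and carry equations translate directly into code. A sketch (Python, using bitwise operators for the gates):

```python
def full_adder(a, b, cin):
    """One-bit full adder built from the equations above:
    sum = a XOR b XOR cin;  cout = ab + a*cin + b*cin."""
    s = a ^ b ^ cin
    cout = (a & b) | (a & cin) | (b & cin)
    return s, cout
```

Exhausting the eight input rows reproduces the truth table above.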
Building a 32-bit ALU
[Slide diagrams: a 1-bit ALU cell with inputs a, b, and CarryIn, a CarryOut, and an Operation selector (0, 1, 2) that chooses which function drives Result. Thirty-two such cells, ALU0 through ALU31, are chained so that each CarryOut feeds the next cell's CarryIn; inputs a0/b0 through a31/b31 produce Result0 through Result31. This chain is the adder function.]
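Chaining the 1-bit cells, each CarryOut feeding the next CarryIn, gives a ripple-carry adder. A sketch (Python; a real ALU cell also selects among AND, OR, etc., which is omitted here):

```python
def full_adder(a, b, cin):
    # One 1-bit ALU cell, reduced to its adder function.
    return a ^ b ^ cin, (a & b) | (a & cin) | (b & cin)

def ripple_add32(a, b, carry_in=0):
    """32 one-bit adders in a chain: bit i's carry out feeds bit i+1's carry in."""
    result, carry = 0, carry_in
    for i in range(32):
        s, carry = full_adder((a >> i) & 1, (b >> i) & 1, carry)
        result |= s << i
    return result, carry  # the final carry is ALU31's CarryOut
```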
Assembly Language From C Code
/* File is junk.c */
int main(int argc, char **argv)
{
    int x;
    x = 3 + 2;
    return (0);
}
-------------------------------------
> gcc -S junk.c
Intel Windows Assembly Language output
	.file "junk.c"
gcc2_compiled.:
___gnu_compiled_c:
	.text
	.align 2
.globl _main
_main:
	pushl %ebp
	movl %esp,%ebp
	subl $4,%esp
	call ___main
	movl $5,-4(%ebp)    <-- compiler smart enough to create a constant
	xorl %eax,%eax
	jmp L1
	.align 2,0x90
L1:
	leave
	ret
Intel Linux Assembly Language output
	.file "junk.c"
	.text
.globl main
	.type main,@function
main:
	pushl %ebp
	movl %esp, %ebp
	subl $8, %esp
	andl $-16, %esp
	movl $0, %eax
	subl %eax, %esp
	movl $5, -4(%ebp)
	movl $0, %eax
	leave
	ret
.Lfe1:
	.size main,.Lfe1-main
	.ident "GCC: (GNU) 3.2.2 20030222 (Red Hat Linux 3.2.2-5)"
Sun Assembly Language output
main
	.PROC
	.CALLINFO FRAME=128,CALLS,SAVE_RP,SAVE_SP,ENTRY_GR=3
	.ENTRY
	stw %r2,-20(%r30)
	copy %r3,%r1
	copy %r30,%r3
	stwm %r1,128(%r30)
	stw %r26,-36(%r3)
	stw %r25,-40(%r3)
	.CALL
	bl __main,%r2
	nop
	ldi 5,%r19
	stw %r19,8(%r3)
	ldi 0,%r28
	ldw -20(%r3),%r2
	ldo 64(%r3),%r30
	ldwm -64(%r30),%r3
	bv,n %r0(%r2)
	.EXIT
	.PROCEND
Different Systems / Different Architectures
All architectures implement data types and instructions.
How they implement them is up to the designers of the underlying hardware (and also the operating system).
Data types from one system may be implemented far differently from another system, even though symbolically they are the same.
This is why Windows programs are often shipped as binary files ready to run (same Intel architecture) but Unix programs must first be compiled on each machine (different architectures – Sun, Intel, HP, IBM…).
Different Systems / Different Architectures
Distributed systems that interoperate between machines by shipping data and code to different processors must take into account the differences in architectures.
The process of translating bit representations from one system to another (as well as some intermediate state) is called marshalling and unmarshalling.
With multicomputer Distributed Systems we often think of marshalling data from the source system into a serial form for network transmission, and then unmarshalling it from the network into the destination format.
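Python's standard struct module can illustrate this round trip; the "!" format prefix requests big-endian ("network") byte order regardless of the host's native layout. The two-field message layout here is an invented example, not a real protocol:

```python
import struct

# Marshal: native values -> an agreed-upon serial format.
# "!ih" = network byte order, one 4-byte signed int, one 2-byte signed short.
message = struct.pack("!ih", 2147483647, -5)

# The serial form is just 6 bytes, safe to ship over the network;
# 2,147,483,647 appears as 0x7FFFFFFF in big-endian order.
assert message[:4] == b"\x7f\xff\xff\xff"

# Unmarshal on the destination, whatever its native representation.
value, small = struct.unpack("!ih", message)
```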
Virtual Memory -- ouch
But wait – that’s not all!
We forgot about memory addressing.
Real addresses – just temporary
Note that in NONE of the assembly code is there any reference to the variable “x”.
x is a symbolic name that is translated into the address of a piece of memory big enough to hold an integer (4 bytes? 8 bytes?).
At run time the loader assigns a temporary location in memory which will hold the value of x. We then refer to x by the address of that piece of memory.
But – If the operating system needs the memory it will write the whole (running) program out to disk for a short time, then load it back in again later, often at a different location.
Memory – it’s ephemeral!
When studying Distributed Systems we must have an understanding of concepts such as:
Architectures
Context switching – process control blocks
Data formats
Memory addressing
Dynamically linked data structures
http://condor.depaul.edu/~elliott/420/ppt/memory/
http://condor.depaul.edu/~elliott/420/ppt/memory/MIPS-memory.pdf
http://condor.depaul.edu/~elliott/420/ppt/memory/MemoryAddressing.ppt
Memory – only on loan when you need it
When a process is swapped out, its memory might be needed by other processes. When this happens, all of the real addresses become meaningless, and will have to be replaced by new addresses when the process is swapped back in. The operating system keeps symbol tables to know where the values of a program’s symbols (remember “x”?) reside. When a process is restored to the CPU so that it can continue running, all of the values of its symbols (variables…) are retrieved from disk and placed back into new temporary memory locations.
A new symbol table is created to bind the symbols to real memory locations.
The program counter
A process is a program in execution. That is, the step-by-step execution of a set of instructions, in order.
At any given moment, the process is executing one instruction. This instruction was loaded into the instruction register from a location in memory. That memory location is stored in the program counter, or “PC”.
The program counter is incremented, and the instruction at the next memory location is loaded.
When a process is swapped out, the PC is saved; when restarted the PC (updated to point to the new memory locations) tells us at which instruction to start.
No symbol table on remote machine
Generally speaking the symbol table that the operating system keeps for our local process allows us to relocate linked structures in memory after a context switch.
However, a remote system has no symbol table for the local process.
For this reason, marshalling can get very complex, if not impossible, for dynamically linked structures.
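One common workaround, sketched here (Python; the scheme and the names are illustrative), is to replace memory addresses with position-independent indices before shipping a linked structure. This handles a simple list; cyclic and shared structures are where marshalling gets genuinely hard:

```python
class Node:
    def __init__(self, value, nxt=None):
        self.value = value
        self.next = nxt  # on a real machine, a raw memory address

def marshal(head):
    """Flatten a linked list; pointers become indices (-1 marks the end)."""
    nodes = []
    n = head
    while n is not None:
        nodes.append(n)
        n = n.next
    return [(n.value, i + 1 if n.next is not None else -1)
            for i, n in enumerate(nodes)]

def unmarshal(flat):
    """Rebuild the list on the destination; indices become fresh pointers."""
    nodes = [Node(v) for v, _ in flat]
    for n, (_, nxt) in zip(nodes, flat):
        n.next = nodes[nxt] if nxt != -1 else None
    return nodes[0] if nodes else None
```

The indices survive the trip precisely because, unlike addresses, they do not depend on where the remote system places the data.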
Summary -- bits
Transistors put out 5 volts, or no volts. Combinations of transistors give us persistent memory.
Symbolically, we see a memory location as a bit: either 1 or 0
Bit strings do not mean anything without an interpretation. Interpretations are arbitrary.
System architects, who generally are interested in speed, speed, and speed, tell us what the bit strings mean.
Summary -- bit strings
Some bit strings are valid instructions, and get loaded into the instruction register.
Logic gates, such as AND, OR, and NOT, are used to translate instruction bit strings into action.
Other bit strings are data and are manipulated in CPU registers by the instructions.
Different machines have different architectures – interpret the bit strings differently.
Summary -- memory
Data and instructions are stored in memory.
At link-time all symbolic addresses are translated into offsets from the beginning of the program.
Symbolic values, such as variables, are translated into real addresses in memory at run time by adding offsets from the starting address of the program’s temporary memory location.
Processes share the CPU. When a process is swapped out it may have to give up its memory.
Summary – symbol tables
The operating system keeps a symbol table for each process, which binds symbolic values in a program to real memory locations. When a process is restored from disk, back into real memory, the symbol table is updated.
The program counter is also restored -- but adjusted to the new location of the program in memory -- and execution continues where it left off.
Summary -- architectures
Assembly language is generally translated line for line into bit strings.
The same C program will produce very different assembly language for different machines because the underlying architecture (meaning of the bit strings) is very different.
Summary -- Marshalling is hard.
Shipping programs and data from one machine to another in a distributed system means translating bit strings on one machine into some (usually serial) agreed-upon external format, which is then translated again into (possibly very different) bit strings at the other end.
Data formats, and instruction formats, may be different. Memory addresses are meaningless. Symbol tables do not exist on the remote system.
In addition, it is possible that no general algorithm exists for marshalling dynamically linked data structures.