Real Time System 09, Phillip A. Laplante, 2nd Edition



Multiprocessing Systems

KEY POINTS OF THE CHAPTER

1. Building real-time multiprocessing systems is hard because building uniprocessing real-time systems is already difficult enough.
2. Reliability in multiprocessing systems can be increased through redundancy and multiplicity. However, security, processing, and reliability costs are associated with the communication links between processors.
3. Describing the functional behavior and design of multiprocessing systems is difficult and requires nontraditional tools.
4. It is crucial to understand the underlying hardware architecture of the multiprocessing system being used.

In this chapter we look at issues related to real-time systems when more than one processor is used. We characterize real-time multiprocessing systems into two types: those that use several autonomous processors, and those that use a large number of interdependent, highly integrated microprocessors. Although many of the problems encountered in multiprocessing real-time systems are the same as those in the single-processing world, these problems become more troublesome. For example, system specification is more difficult. Intertask communication and synchronization becomes interprocessor communication and synchronization. Integration and testing is more challenging, and reliability more difficult to manage. Combine these complications with the fact that the individual processors themselves can be multitasking, and you can see the level of complexity.


In a single chapter we can only give a brief introduction to those issues that need to be addressed in the design of real-time multiprocessing systems.

12.1 CLASSIFICATION OF ARCHITECTURES

Computer architectures can be classified in terms of single or multiple instruction streams and single or multiple data streams, as shown in Table 12.1. By providing a taxonomy, it is easier to match a computer to an application and to remember the basic capabilities of a processor. In standard von Neumann architectures, the serial fetch and execute process, coupled with a single combined data/instruction store, forces serial instruction and data streams. This is also the case in RISC (reduced instruction set computer) architectures. Although many RISC architectures include pipelining, and hence become multiple instruction stream, pipelining is not a requisite characteristic of RISC.

TABLE 12.1 Classification of Computer Architectures

                                 Single Data Stream            Multiple Data Stream
    Single instruction stream    von Neumann architecture/     Systolic processors
                                 uniprocessors; RISC           Wavefront processors
    Multiple instruction stream  Pipelined architectures;      Dataflow processors
                                 very long instruction         Transputers
                                 word processors

In both systolic and wavefront processors, each processing element is executing the same (and only) instruction but on different data. Hence these architectures are SIMD. In pipelined architectures, effectively more than one instruction can be processed simultaneously (one for each level of pipeline). However, since only one instruction can use data at any one time, it is MISD. Similarly, very long instruction word computers tend to be implemented with microinstructions that have very long bit-lengths (and hence more capability). Hence, rather than breaking down macroinstructions into numerous microinstructions, several (nonconflicting) macroinstructions can be combined into several microinstructions. For example, if object code was generated that called for a load of one register followed by an increment of another register, these two instructions could be executed simultaneously by the processor (or at least appear so at the macroinstruction level) with a series of long microinstructions. Since only nonconflicting instructions can be combined, any two instructions accessing the data bus conflict. Thus, only one instruction can access the data bus, and so the very long instruction word computer is MISD.


Finally, in dataflow processors and transputers (see the following discussion), each processing element is capable of executing numerous different instructions and on different data; hence it is MIMD. Distributed architectures are also classified in this way.

12.2 DISTRIBUTED SYSTEMS

We characterize distributed real-time systems as a collection of interconnected self-contained processors. We differentiate this type of system from the type discussed in the next section in that each of the processors in the distributed system can perform significant processing without the cooperation of the other processors. Many of the techniques developed in the context of multitasking systems can be applied to multiprocessing systems. For example, by treating each of the processors in a distributed system as a task, the synchronization and communication techniques previously discussed can be used. But this is not always enough, because often each of the processors in a multiprocessing system are themselves multitasking. In any case, this type of distributed-processing system represents the best solution to the real-time problem when such resources are available.

12.2.1 Embedded

Embedded distributed systems are those in which the individual processors are assigned fixed, specific tasks. This type of system is widely used in the areas of avionics, astronautics, and robotics.

EXAMPLE 12.1
In an avionics system for a military aircraft, separate processors are usually assigned for navigation, weapons control, and communications. While these systems certainly share information (see Figure 12.1), we can prevent failure of the overall system in the event of a single processor failure. To achieve this safeguard, we designate one of the three processors (or a fourth) to coordinate the activities of the others. If this computer is damaged, or shuts itself off due to a BITS fail, another can assume its role.

12.2.2 Organic

Another type of distributed processing system consists of a central scheduler processor and a collection of general processors with nonspecific functions (see Figure 12.2). These systems may be connected in a number of topologies (including ring, hypercube, array, and common bus) and may be used to solve general problems. In organic distributed systems, the challenge is to program the scheduler processor in such a way as to maximize the utilization of the serving processors.



Figure 12.1 A distributed computer system for a military aircraft.

Figure 12.2 Organic distributed computer in common bus configuration.

12.2.3 System Specification

The specification of software for distributed systems is challenging because, as we have seen, the specification of software for even a single-processing system is difficult. One technique that we have discussed, statecharts, lends itself nicely to the specification of distributed systems because orthogonal processes can be assigned to individual processors. If each processor is multitasking, these orthogonal states can be further subdivided into orthogonal states representing the individual tasks for each processor.



EXAMPLE 12.2
Consider the specification of the avionics system for the military aircraft. We have discussed the function of the navigation computer throughout this text. The statechart for this function is given in Figure 5.18. The functions for the weapons control and communications systems are depicted in Figure 12.3 and Figure 12.4, respectively. In the interests of space, only this pictorial description of each subsystem will be given.


Figure 12.3 Weapons control system for a military aircraft.


Figure 12.4 Communications system for a military aircraft.


A second technique that can be used is the dataflow diagram. Here the process symbols can represent processors, whereas the directed arcs represent communications paths between the processors. The sinks and sources can be either devices that produce and consume data or processes that produce or consume raw data.

EXAMPLE 12.3

12.2.4 Reliability in Distributed Systems

The characterization of reliability in a distributed system (real-time or otherwise) has been stated in a well-known paper [89], "The Byzantine Generals' Problem." The processors in a distributed system can be considered "generals," and the interconnections between them "messengers." The generals and messengers can be both loyal (operating properly) or traitors (faulty). The task is for the generals, who can only communicate via the messengers, to formulate a strategy for capturing a city (see Figure 12.5). The problem is to find an algorithm that allows the loyal generals to reach an agreement. It turns out that the problem is unsolvable for a totally asynchronous system, but solvable if the generals can vote in rounds [153]. This provision, however, imposes additional timing constraints on the system. Furthermore, the problem can be solved only if the number of traitors is less than one-third the total number of processors. We will be using the Byzantine generals' problem as an analogy for cooperative multiprocessing throughout this chapter.
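The one-third resiliency condition can be captured in a one-line predicate; the sketch below is illustrative, and the function name is ours, not from the text:

```python
# Agreement among n processors is achievable only when the number of
# traitorous (faulty) processors m satisfies n > 3m, i.e., fewer than
# one-third of the processors are faulty.
def byzantine_agreement_possible(n_processors: int, n_faulty: int) -> bool:
    """True iff fewer than one-third of the processors are faulty."""
    return n_processors > 3 * n_faulty

# Three generals cannot tolerate even a single traitor, but four can:
print(byzantine_agreement_possible(3, 1))   # False
print(byzantine_agreement_possible(4, 1))   # True
```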

12.2.5 Calculation of Reliability in Distributed Systems

Consider a group of n processors connected in any flat topology. It would be desirable, but costly, to have every processor connected to every other processor in such a way that data could be shared between processors. This, however, is not usually possible. In any case, we can use a matrix representation to denote the connections between the processors. The matrix, R, is constructed as follows: if processor i is connected to processor j, we place a "1" in the ith row, jth column of R. If they are not connected, a "0" is placed there. We consider every processor connected to itself.


Figure 12.5 The Byzantine generals' problem.

In the Byzantine generals' analogy, the processors represent generals and the interconnections represent messengers.

EXAMPLE 12.4
A topology in which each of n processors is connected to every other would have an n by n reliability matrix with all 1s; that is,



    R = [ 1  1  ...  1 ]
        [ 1  1  ...  1 ]
        [    ...       ]
        [ 1  1  ...  1 ]

EXAMPLE 12.5
A topology in which none of the n processors is connected to any other (except itself) would have an n by n reliability matrix with all 1s on the diagonal but 0s elsewhere; that is,

    R = [ 1  0  ...  0 ]
        [ 0  1  ...  0 ]
        [    ...       ]
        [ 0  0  ...  1 ]

EXAMPLE 12.6
As a more practical example, consider the four processors connected as in Figure 12.6. The reliability matrix for this topology would be

    R = [ 1  1  1  0 ]
        [ 1  1  0  1 ]
        [ 1  0  1  1 ]
        [ 0  1  1  1 ]

Since processors 2 and 3 are disconnected, as are processors 1 and 4, 0s are placed in row 2 column 3, row 3 column 2, row 1 column 4, and row 4 column 1 in the reliability matrix.

Figure 12.6 Four-processor distributed system.

The ideal world has all processors and interconnections uniformly reliable, but this is not always the case. We can assign a number between 0 and 1 for each entry to represent its reliability. For example, an entry of 1 represents a perfect messenger or general. If an entry is less than 1, then it represents a traitorous general or messenger. (A very traitorous general or messenger gets a 0; a "small-time" traitor may get a "0.9" entry.) Disconnections still receive a 0.


EXAMPLE 12.7
Suppose the distributed system described in Figure 12.6 actually had interconnections with the reliabilities marked as in Figure 12.7. The new reliability matrix would be

    R = [ 1   .4  .7   0 ]
        [ .4   1   0   1 ]
        [ .7   0   1  .9 ]
        [ 0    1  .9   1 ]

Figure 12.7 Four-processor distributed system with reliabilities.

Notice that if we assume that the communications links have reciprocal reliability (the reliability is the same regardless of which direction the message is traveling in), then the matrix is symmetric with respect to the diagonal. This, along with the assumption that the diagonal elements are always 1 (not necessarily true), can greatly simplify calculations.

12.2.6 Increasing Reliability in Distributed Systems

In Figure 12.7 the fact that processors 1 and 4 do not have direct communications links does not mean that the two processors cannot communicate. Processor 1 can send a message to processor 4 via processor 2 or 3. It turns out that the overall reliability of the system may be increased by using this technique. Without formalization, the overall reliability of the system can be calculated by performing a series of special matrix multiplications. If R and S are reliability matrices for a system of n processors each, then we define the composition of these matrices, denoted R ∘ S, to be

                     n
    (R ∘ S)(i,j)  =  ∨  R(i,k)S(k,j)                        (12.1)
                    k=1

where (R ∘ S)(i,j) is the entry in the ith row and jth column of the resultant matrix and ∨ represents taking the maximum of the reliabilities. If R = S, then we denote R ∘ R = R², called the second-order reliability matrix.


EXAMPLE 12.8
Consider the system in Figure 12.7. Computing R² for this yields

    R² = [ 1    .4  .7  .63 ]
         [ .4    1  .9   1  ]
         [ .7   .9   1  .9  ]
         [ .63   1  .9   1  ]
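The composition of Equation 12.1 is easy to check numerically. The sketch below (a hypothetical helper, assuming NumPy is available) applies the max-product composition to the matrix of Figure 12.7:

```python
import numpy as np

def compose(R, S):
    """Max-product composition of Equation 12.1:
    (R o S)(i,j) = max over k of R(i,k) * S(k,j)."""
    n = R.shape[0]
    return np.array([[max(R[i, k] * S[k, j] for k in range(n))
                      for j in range(n)] for i in range(n)])

# Reliability matrix of the system in Figure 12.7
R = np.array([[1.0, 0.4, 0.7, 0.0],
              [0.4, 1.0, 0.0, 1.0],
              [0.7, 0.0, 1.0, 0.9],
              [0.0, 1.0, 0.9, 1.0]])

R2 = compose(R, R)      # second-order reliability matrix
# R2[0, 3] == 0.63: processors 1 and 4, though not directly linked,
# can communicate through processor 3 with reliability 0.7 * 0.9
```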

12.2.6.1 Higher-Order Reliability Matrices  Higher-order reliabilities can be found using the same technique as for the second order. Recursively, we can define the nth-order reliability matrix as

    R^n = R^(n-1) ∘ R                                       (12.2)

EXAMPLE 12.9
The utility of the higher-order reliability can be seen in Figure 12.8, where processors 1 and 4 are two connections apart. Here, the reliability matrix is

Figure 12.8 Four-processor distributed system with reliabilities.

    R = [ 1   .5   0   0 ]
        [ .5   1  .4   0 ]
        [ 0   .4   1  .3 ]
        [ 0    0  .3   1 ]

The second-order reliability matrix is

    R² = [ 1    .5  .2   0  ]
         [ .5    1  .4  .12 ]
         [ .2   .4   1  .3  ]
         [ 0   .12  .3   1  ]

Calculating the third-order reliability matrix gives

    R³ = [ 1    .5   .2  .06 ]
         [ .5    1   .4  .12 ]
         [ .2   .4    1  .3  ]
         [ .06  .12  .3   1  ]


The higher-order reliability matrix allows us to draw an equivalent topology for the distributed system. One obvious conclusion that can be drawn from looking at the higher-order reliability matrices, and one that is intuitively pleasing, is that we can increase the reliability of message passing in distributed systems by providing redundant second-, third-, and higher-order paths between processors.

EXAMPLE 12.10
For the previous example, the third-order equivalent topology is given in Figure 12.9.


Figure 12.9 Equivalent third-order topology for Example 12.9.

Finally, it can be shown that the maximum reliability matrix for n processors is given by

               n
    R_max  =   ∨  R^i                                       (12.3)
              i=1

For example, in the previous example, R_max = R¹ ∨ R² ∨ R³. To what order n do we need to compute to obtain x percent of the theoretical maximum reliability? Is this dependent on the topology? Is this dependent on the reliabilities? In addition, the reliability matrix might not be fixed; that is, it might be some function of time t. Finally, the fact that transmissions over higher-order paths increase signal transit time introduces a penalty that must be balanced against the benefit of increased reliability. There are a number of open problems in this area that are beyond the scope of this text.
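Equations 12.2 and 12.3 can be sketched the same way; the helper below is illustrative (assuming NumPy, and the chain link reliabilities .5, .4, .3 read from the worked example), and it shows that the best first-to-fourth-processor path appears only at third order:

```python
import numpy as np

def compose(R, S):
    """Max-product composition of Equation 12.1."""
    n = R.shape[0]
    return np.array([[max(R[i, k] * S[k, j] for k in range(n))
                      for j in range(n)] for i in range(n)])

def max_reliability(R):
    """R_max of Equation 12.3: elementwise maximum of R, R^2, ..., R^n."""
    n = R.shape[0]
    Rk, Rmax = R.copy(), R.copy()
    for _ in range(n - 1):
        Rk = compose(Rk, R)            # next-order reliability (Eq. 12.2)
        Rmax = np.maximum(Rmax, Rk)    # elementwise "or" of the orders
    return Rmax

# Chain topology of Figure 12.8 (assumed link reliabilities .5, .4, .3)
R = np.array([[1.0, 0.5, 0.0, 0.0],
              [0.5, 1.0, 0.4, 0.0],
              [0.0, 0.4, 1.0, 0.3],
              [0.0, 0.0, 0.3, 1.0]])

Rmax = max_reliability(R)
# Rmax[0, 3] == 0.06: the path 1 -> 2 -> 3 -> 4 first appears in R^3
```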

12.3 NON-VON NEUMANN ARCHITECTURES

The processing of discrete signals in real time is of paramount importance to virtually every type of system. Yet the computations needed to detect, extract, mix, or otherwise process signals are computationally intensive. For example, the convolution sum discussed in Chapter 5 is widely used in signal processing. Because of these computationally intensive operations, real-time designers must look to hardware to improve response times. In response, hardware


designers have provided several non-von Neumann, multiprocessing architectures which, though not general purpose, can be used to solve a wide class of problems in real time. (Recall that von Neumann architectures are stored program, single fetch-execute cycle machines.) These multiprocessors typically feature large quantities of simple processors in VLSI. Increasingly, real-time systems are distributed processing systems consisting of one or more general processors and one or more of these other style processors. The general, von Neumann-style processors provide control and input/output, whereas the specialized processor is used as an engine for fast execution of complex and specialized computations. In the next sections, we discuss several of these non-von Neumann architectures and illustrate their applications.

12.3.1 Dataflow Architectures

Dataflow architectures use a large number of special processors in a topology in which each of the processors is connected to every other. In a dataflow architecture, each of the processors has its own local memory and a counter. Special tokens are passed between the processors asynchronously. These tokens, called activity packets, contain an opcode, operand count, operands, and a list of destination addresses for the result of the computation. An example of a generic activity packet is given in Figure 12.10. Each processor's local memory is used to hold a list of activity packets for that processor, the operands needed for the current activity packet, and a counter used to keep track of the number of operands received. When the number of operands stored in local memory is equivalent to that required for the operation in the current activity packet, the operation is performed and the results are sent to the specified destinations. Once an activity packet has been executed, the processor begins working on the next activity packet in its execution list.

    +-------------------------------------+
    | Opcode | n (number of arguments)    |
    | Argument 1                          |
    | Argument 2                          |
    |   ...                               |
    | Argument n                          |
    | Destination 1                       |
    | Destination 2                       |
    |   ...                               |
    | Destination m                       |
    +-------------------------------------+

Figure 12.10 Generic activity template for a dataflow machine.
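The firing rule described above can be sketched in a few lines; the class below is a toy model, and the class and opcode names are ours, not an actual dataflow machine's packet format:

```python
from dataclasses import dataclass, field

# A packet fires when it has received as many operands as its opcode
# requires, then forwards the result to its destination packets.
@dataclass
class ActivityPacket:
    opcode: str                     # e.g., "MULT" or "ADD"
    nargs: int                      # operand count required to fire
    operands: list = field(default_factory=list)
    destinations: list = field(default_factory=list)

    def receive(self, value):
        self.operands.append(value)
        if len(self.operands) == self.nargs:    # firing rule
            self.fire()

    def fire(self):
        if self.opcode == "MULT":
            self.result = self.operands[0] * self.operands[1]
        elif self.opcode == "ADD":
            self.result = self.operands[0] + self.operands[1]
        else:
            raise ValueError(self.opcode)
        for dest in self.destinations:
            dest.receive(self.result)

# One term of a convolution sum: f(0)g(1) + f(1)g(0)
total = ActivityPacket("ADD", 2)
m0 = ActivityPacket("MULT", 2, destinations=[total])
m1 = ActivityPacket("MULT", 2, destinations=[total])
m0.receive(2.0); m0.receive(5.0)    # fires: 10.0 sent to total
m1.receive(3.0); m1.receive(4.0)    # fires: 12.0 sent; total then fires
print(total.result)                 # 22.0
```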


EXAMPLE 12.11
We can use the dataflow architecture to perform the discrete convolution of two signals as described in the exercises for Chapter 5. That is, the discrete convolution of two real-valued functions f(t) and g(t), t = 0,1,2,3,4:

                   4
    (f ∗ g)(t)  =  Σ  f(i)g(t − i)
                  i=0

The processor topology and activity packet list is described in Figure 12.11.
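For reference, the convolution sum itself can be computed directly; the sample values for f and g below are illustrative, not taken from the figure:

```python
# Discrete convolution (f * g)(t) = sum over i of f(i)g(t - i),
# for t = 0, ..., n-1, with out-of-range samples treated as zero.
def convolve(f, g):
    n = len(f)
    return [sum(f[i] * g[t - i] for i in range(n) if 0 <= t - i < n)
            for t in range(n)]

f = [1, 2, 3, 0, 0]
g = [4, 5, 0, 0, 0]
print(convolve(f, g))   # [4, 13, 22, 15, 0]
```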


Figure 12.11 Discrete convolution in a dataflow architecture.

Dataflow architectures are an excellent parallel solution for signal processing. The only drawback for dataflow architectures is that currently they cannot be implemented in VLSI. Performance studies for dataflow real-time systems can be found in [148].

12.3.1.1 System Specification for Dataflow Processors  Dataflow architectures are ideal because they are direct implementations of dataflow graphs; in fact, programmers draw dataflow diagrams as part of the programming process. The graphs are then translated into a list of activity packets for each processor.


Figure 12.12 Specification of discrete convolution using dataflow diagrams.

An example of this is given in Figure 12.12. As we have seen in the example, they are well adapted to parallel signal processing [52], [53].

12.3.2 Systolic Processors

Systolic processors consist of a large number of uniform processors connected in an array topology. Each processor usually performs only one specialized operation and has only enough local memory to perform its designated operation, and to store the inputs and outputs. The individual processors, called processing elements, take inputs from the top and left, perform a specified operation, and output the results to the right and bottom. One such processing element is depicted in Figure 12.13. The processors are connected to the four nearest neighboring processors in the nearest-neighbor topology depicted in Figure 12.14. Processing or firing at each of the cells occurs simultaneously in synchronization with a central clock. The fact that each cell fires on this heartbeat lends the name systolic. Inputs to the system are from memory stores or input devices at the boundary cells

    z = c · y + x

Figure 12.13 Systolic processor element.



Figure 12.14 Systolic array in nearest-neighbor topology.

at the left and top. Outputs to memory or output devices are obtained from boundary cells at the right and bottom.

EXAMPLE 12.12
Once again consider the discrete convolution of two real-valued functions f(t) and g(t), t = 0,1,2,3,4. A systolic array such as the one in Figure 12.15 can be constructed to perform the convolution. A general algorithm can be found in [52].

Systolic processors are fast and can be implemented in VLSI. They are somewhat troublesome, however, in dealing with propagation delays in the connection buses and in the availability of inputs when the clock ticks.


Figure 12.15 Systolic array for convolution.



12.3.2.1 Specification of Systolic Systems  The similarity of the jargon associated with systolic processors leads us to believe that Petri nets can be used to specify such systems. This is indeed true, and an example of specifying the convolution operation is given in Figure 12.16.

12.3.3 Wavefront Processors

Wavefront processors consist of an array of identical processors, each with its own local memory and connected in a nearest-neighbor topology. Each processor usually performs only one specialized operation. Hybrids containing two or more different cell types are possible. The cells fire asynchronously when all required inputs from the left and top are present. Outputs then appear to the right and below. Unlike the systolic processor, the outputs are the unaltered inputs. That is, the top input is transmitted, unaltered, to the bottom output bus, and the left input is transmitted, unaltered, to the right output bus. Also different from the systolic processor, outputs from the wavefront processor are read directly from the local memory of selected cells and not obtained from boundary cells. Inputs are still placed on the top and left input buses of boundary cells. The fact that inputs propagate through the array unaltered, like a wave, gives this architecture its name. Figure 12.17 depicts a typical wavefront

Figure 12.17 Wavefront processor element.

processing element. Wavefront processors are very good for computationally intensive real-time systems and are used widely in modern real-time signal processing [51], [52]. In addition, a wavefront architecture can cope with timing uncertainties such as local blocking, random delay in communications, and fluctuations in computing times [86].
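The data-driven firing rule of a single wavefront cell can be sketched directly; the class below is an illustrative model (names are ours), using multiply-accumulate as the cell's one specialized operation:

```python
# A wavefront processing element fires only when both inputs are
# present, accumulates the result in its own local memory, and passes
# both inputs through unaltered to its neighbors.
class WavefrontPE:
    def __init__(self):
        self.acc = 0.0          # result, read directly from local memory
        self.top = None         # pending input from above
        self.left = None        # pending input from the left

    def offer(self, top=None, left=None):
        """Latch whichever inputs have arrived; fire when both are here."""
        if top is not None:
            self.top = top
        if left is not None:
            self.left = left
        if self.top is not None and self.left is not None:
            self.acc += self.top * self.left     # local computation
            out = (self.top, self.left)          # pass inputs through
            self.top = self.left = None
            return out                           # (bottom, right) outputs
        return None

pe = WavefrontPE()
print(pe.offer(top=2.0))    # None: only one input, no firing yet
print(pe.offer(left=3.0))   # (2.0, 3.0): fires, inputs pass unaltered
print(pe.acc)               # 6.0
```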


EXAMPLE 12.13
Once again consider the discrete convolution of two real-valued functions f(t) and g(t), t = 0,1,2,3,4. A wavefront array such as the one in Figure 12.18 can be constructed to perform the convolution. After five firings, the convolution products will be found in the innermost PEs.

Figure 12.18 Discrete convolution using a wavefront array.

Wavefront processors combine the best of systolic architectures with dataflow architectures. That is, they support an asynchronous dataflow computing structure; timing in the interconnection buses and at input and output devices is not a problem. Furthermore, the structure can be implemented in VLSI.

12.3.3.1 System Specification for Wavefront Processors  As is true of the dataflow architecture, dataflow diagrams can be used to specify these systems. For example, the convolution system depicted in the previous example can be specified using Figure 12.12. Finally, Petri nets and finite state automata (or a variation, cellular automata) may have potential use for specifying wavefront systems.

12.3.4 Transputers

Transputers are fully self-sufficient, multiple instruction set, von Neumann processors. The instruction set includes directives to send data or receive data via ports that are connected to other transputers. The transputers, though capable of acting as a uniprocessor, are best utilized when connected in a nearest-neighbor configuration. In a sense, the transputer provides a wavefront or systolic processing capability but without the restriction of a single instruction. Indeed, by providing each transputer in a network with an appropriate stream of data and synchronization signals, wavefront or systolic computers (which can change configurations) can be implemented. Transputers have been widely used in embedded real-time applications, and commercial implementations are readily available. Moreover, tool support, such as the multitasking language occam-2, has made it easier to build transputer-based applications.


12.4 EXERCISES

1. For the following reliability matrix, draw the associated distributed system graph and compute R².

       R = [ 1  1  1 ]
           [ 1  1  0 ]
           [ 1  0  1 ]

2. For the following reliability matrix, draw the associated distributed system graph and compute R².

       R = [ 1    0  .7 ]
           [ 0    1  .7 ]
           [ .7  .7   1 ]

3. For the following reliability matrix, compute R², R³, and R_max (Hint: R_max ≠ R³).

       R = [ 1    0  .6   0 ]
           [ 0    1   0  .8 ]
           [ .6   0   1   1 ]
           [ 0   .8   1   1 ]

4. Show that the ∘ operation is not commutative. For example, if R and S are 3 × 3 reliability matrices, then in general,

       R ∘ S ≠ S ∘ R

   In fact, you should be able to show that for any n × n reliability matrices,

       R ∘ S = (S ∘ R)^T

   where T represents the matrix transpose.

5. Design a dataflow architecture for performing the matrix multiplication of two 5 by 5 arrays. Assume that binary ADD and MULT are part of the instruction set.

6. Design a dataflow architecture for performing the matrix addition of two 5 by 5 arrays. Assume that binary ADD is part of the instruction set.

7. Use dataflow diagrams to describe the systems in Exercises 5 and 6.

8. Design a systolic array for performing the matrix multiplication of two 5 by 5 arrays. Use the processing element described in Figure 12.13.

9. Design a systolic array for performing the matrix addition of two 5 by 5 arrays. Use the processing element described in Figure 12.13.

10. Use Petri nets and the processing element described in Figure 12.13 to describe the systolic array to perform the functions described in
    (a) Exercise 8
    (b) Exercise 9

11. Design a wavefront array for performing the matrix multiplication of two 5 by 5 arrays. Use the processing element described in Figure 12.17.

12. Design a wavefront array for performing the matrix addition of two 5 by 5 arrays. Use the processing element described in Figure 12.17.

13. Use dataflow diagrams to describe the systems in
    (a) Exercise 11
    (b) Exercise 12

14. Use Petri nets to specify the wavefront array system shown in Figure 12.18.

    the processing element described n Figure 12.17.13. Use dataflow diagramsto describe he syltems in(a ) Exercise10(b ) Exercise1l14. Use Petri nets to specify the wavefront irray system shown in Figure 12.18.