
Page 1:

Embedded Design Pattern

Dr. Basem Alkazemi, bykazemi@uqu.edu.sa

http://uqu.edu.sa/bykazemi

Page 2:

Embedded System
A self-contained application that provides its own functionality without major interdependencies on other parts of the overall system into which it is incorporated.

Page 3:

Design Metrics
• Non-recurring engineering (NRE) cost
• Unit cost
• Size (bytes, gates, transistors)
• Performance
• Power
• Flexibility
• Time-to-prototype
• Time-to-market
• Maintainability
• Correctness
• Safety

Page 4:

Design Patterns
A design pattern is an abstract representation of best practices for resolving commonly known problems in an application domain. Design patterns help build a common language between different developers and also reduce development time and cost.

Page 5:

Design Patterns
• Synchronizer
• High Speed Serial Port
• Hardware Device
• Resource Allocation
• Feature Coordination

Page 6:

Synchronizer Design Pattern: Motivation
To obtain synchronization between two components, this pattern provides mechanisms for:

• Achieving initial synchronization (sync)
• Confirming the presence of the sync framing once sync is achieved
• Initiating loss-of-sync procedures

Page 7:

Synchronizer Design Pattern: Structure
The following high-level states are defined for the state machine:

• Establishment of sync
• Loss of sync

Page 8:

Synchronizer Design Pattern: Establishment of Sync

1. The system starts up in the "Searching For Sync" state. In this state, the incoming data stream is analyzed bit by bit, looking for an occurrence of the sync pattern.

2. As soon as the first sync pattern is detected, the system transitions to the "Confirming Sync Pattern" state.

3. Now the system checks if the sync pattern is repeating as expected. This check is made according to the specified periodicity.

4. If the sync pattern is repeating, the system transitions to the "In Sync" state. (If the sync pattern was not found, the system would have transitioned back to the "Searching For Sync" state)

5. At this point, the system is considered to be synchronized. 

Page 9:

Synchronizer Design Pattern: Loss of Sync

1. When the system is synchronized, it is in the "In Sync" state. In this state, the system is constantly monitoring the periodic occurrence of the sync pattern.

2. If an expected sync pattern is found to be missing, the system transitions to the "Confirming Sync Loss" state. The system is still considered synchronized. The main purpose of the "Confirming Sync Loss" state is to check whether the loss of sync was an isolated event or represents a complete loss of sync.

3. In the "Confirming Sync Loss" state, the system looks for the sync pattern at the expected time interval. If the sync pattern is seen again, the system transitions back to the "In Sync" state.

4. In this scenario, however, the sync pattern is not detected for a preconfigured number of consecutive checks.

5. The system has now lost sync and transitions back to the "Searching For Sync" state.
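As a rough illustration, the four states and transitions described above could be coded as a small state machine. The state names follow the slides; the confirmation and loss thresholds, and the idea of driving the machine once per expected sync interval, are illustrative assumptions.

```cpp
#include <cstdint>

// Hypothetical synchronizer state machine following the four states above.
enum class SyncState { SearchingForSync, ConfirmingSyncPattern, InSync, ConfirmingSyncLoss };

class Synchronizer {
public:
    // Called once per expected sync interval with the result of the pattern check.
    void onSyncCheck(bool patternSeen) {
        switch (state_) {
        case SyncState::SearchingForSync:
            if (patternSeen) { confirmCount_ = 0; state_ = SyncState::ConfirmingSyncPattern; }
            break;
        case SyncState::ConfirmingSyncPattern:
            if (!patternSeen) { state_ = SyncState::SearchingForSync; }      // pattern did not repeat
            else if (++confirmCount_ >= kConfirmThreshold) { state_ = SyncState::InSync; }
            break;
        case SyncState::InSync:
            if (!patternSeen) { missCount_ = 1; state_ = SyncState::ConfirmingSyncLoss; }
            break;
        case SyncState::ConfirmingSyncLoss:
            if (patternSeen) { state_ = SyncState::InSync; }                 // isolated miss
            else if (++missCount_ >= kLossThreshold) { state_ = SyncState::SearchingForSync; }
            break;
        }
    }
    SyncState state() const { return state_; }
private:
    static constexpr std::uint8_t kConfirmThreshold = 3;  // assumed periodicity check count
    static constexpr std::uint8_t kLossThreshold = 3;     // assumed preconfigured miss count
    SyncState state_ = SyncState::SearchingForSync;
    std::uint8_t confirmCount_ = 0;
    std::uint8_t missCount_ = 0;
};
```

In practice the bit-by-bit search in "Searching For Sync" would be driven by the receiver hardware; the sketch only shows the transition logic.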

Page 10:

Synchronizer Design Pattern

Page 11:

Synchronizer Design Pattern: Sub-Classes
• Searching_For_Sync: detects the sync pattern in the incoming stream of bits.
• Confirming_Sync_Pattern: after the first detection, waits until another pattern is detected within the specified period.
• In_Sync: checks that sync is still alive.
• Confirming_Sync_Loss: entered if the sync pattern is missed; decides whether to return to the first state.

Page 12:

Synchronizer Design Pattern

Page 13:

High Speed Serial Port: Motivation
The main motivation is to minimize dependency on hardware, since frequent changes in the interface device may involve a costly reconfiguration exercise. This design pattern encapsulates the DMA configuration, register interfacing and interrupt handling specific to a device. A change in the device then results only in changes to the set of classes implementing this design pattern, without affecting consumer classes.

Page 14:

High Speed Serial Port: Structure
The Serial Port pattern is implemented with the SerialPort and SerialPortManager classes. The SerialPortManager maintains an array of SerialPort objects. Each SerialPort object manages the transmit and receive buffers. The SerialPortManager class also implements the interrupt service routine.

Page 15:

High Speed Serial Port
• Serial Port Manager: manages all the serial ports on the board.
• Serial Port: handles the interface with a single serial port device. It contains the transmit and receive buffers.
• Transmit Queue: contains messages awaiting transmission on the serial port.
• Receive Queue: stores messages received on the serial link.
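A skeleton of these participants might look as follows; the message type, the port count and the method bodies are illustrative placeholders rather than the actual implementation.

```cpp
#include <array>
#include <cstddef>
#include <cstdint>
#include <queue>
#include <vector>

using Message = std::vector<std::uint8_t>;     // assumed message representation

// Handles the interface with a single serial port device;
// contains the transmit and receive queues.
class SerialPort {
public:
    void HandleTxMessage(const Message& msg) {
        txQueue_.push(msg);                    // enqueue; real code would start TX if the device is idle
    }
    void HandleInterrupt() {
        // Real code: read the interrupt status register and call the
        // TX-complete or RX-complete handler accordingly.
    }
private:
    std::queue<Message> txQueue_;              // messages awaiting transmission
    std::queue<Message> rxQueue_;              // messages received on the serial link
};

// Manages all the serial ports on the board and implements the ISR.
class SerialPortManager {
public:
    static constexpr std::size_t kMaxPorts = 4;           // illustrative board limit
    void InterruptServiceRoutine(std::size_t portIndex) { // invoked from the hardware vector
        ports_[portIndex].HandleInterrupt();
    }
private:
    std::array<SerialPort, kMaxPorts> ports_;
};
```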

Page 16:

High Speed Serial Port

Page 17:

High Speed Serial Port: Transmitting a Message
1. The SerialPortManager constructor registers the InterruptServiceRoutine(), and the SerialPort constructor initializes TX and RX to their initial states (TX = empty, RX = ready).
2. The HandleTxMessage() method of SerialPort is invoked to enqueue a message. The method enqueues the message in the Transmit Queue and checks whether this is the first message in the queue.
3. Since this is the first message in the queue, the message is removed from the queue, copied into a transmission buffer, and the "ready for transmission" flag is set.
4. The flag is set, so the TX device begins transmission of the buffer.
5. When all bytes of the message have been transmitted, the device sets the "finished transmission" bit in the buffer header.
6. The device checks the next buffer to determine if it is ready for transmission. In this scenario, no other buffer is ready, so the device raises the transmission complete interrupt. (If more messages had been enqueued, the device would have automatically started transmitting the next buffer.)
7. The InterruptServiceRoutine() is invoked. The ISR invokes the HandleInterrupt() method of the SerialPort to select the interrupting device.
8. SerialPort checks the interrupt status register to determine the source of the interrupt. This is a transmit interrupt, so the HandleTxInterrupt() method is invoked.
9. A transmission complete event is sent to the task. This event is routed by the SerialPortManager to the SerialPort.
10. SerialPort checks if the transmit queue has any more messages. If a message is found, transmission of the new message is initiated.
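A rough sketch of the transmit side of this sequence; the buffer layout, flag names and the busy flag are illustrative assumptions, not taken from the slides.

```cpp
#include <cstdint>
#include <queue>
#include <vector>

// Transmission buffer as described above: a byte buffer plus the two
// header flags mentioned in the sequence (names are illustrative).
struct TxBuffer {
    std::vector<std::uint8_t> bytes;
    volatile bool readyForTransmission = false;  // set by software, consumed by the TX device
    volatile bool finishedTransmission = false;  // set by the device after the last byte is sent
};

class SerialPortTx {
public:
    // Steps 2-4: enqueue the message; if the device is idle, start sending it.
    void HandleTxMessage(std::vector<std::uint8_t> msg) {
        txQueue_.push(std::move(msg));
        if (!txBusy_) startNextTransmission();
    }
    // Steps 8-10: called from the ISR path on a transmit-complete interrupt.
    void HandleTxInterrupt() {
        txBusy_ = false;
        if (!txQueue_.empty()) startNextTransmission();  // keep draining the queue
    }
private:
    void startNextTransmission() {
        buffer_.bytes = std::move(txQueue_.front());     // move the next message into the buffer
        txQueue_.pop();
        buffer_.readyForTransmission = true;             // device begins transmission once this is set
        txBusy_ = true;
    }
    std::queue<std::vector<std::uint8_t>> txQueue_;
    TxBuffer buffer_;
    bool txBusy_ = false;
};
```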

Page 18:

High Speed Serial Port: Receiving a Message
1. When the device detects the start of a new message, it accesses the receive buffers and checks the "free buffer" bit in the buffer header.
2. The RX device finds a free buffer, so it starts DMA operations to copy all the received bytes into the designated buffer.
3. The device raises an interrupt when message reception is completed. It also sets the "received message" bit in the buffer header. (If another message reception starts, the device will automatically start receiving that message in the next buffer.)
4. At this point, a message receive-complete event is dispatched to the task list for sender acknowledgement.
5. The Serial Port's event handler allocates memory for the received message and writes the new message into the receive queue.
6. It then cleans up the receive buffer by setting the "free buffer" bit in the buffer header.
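A corresponding sketch of the receive-side event handling, with the same caveats: the buffer flags and types are illustrative assumptions.

```cpp
#include <cstdint>
#include <queue>
#include <vector>

struct RxBuffer {
    std::vector<std::uint8_t> bytes;
    volatile bool freeBuffer = true;        // device only DMAs into buffers with this bit set
    volatile bool receivedMessage = false;  // set by the device when a full message has arrived
};

class SerialPortRx {
public:
    // Steps 4-6: event handler run in task context after the receive-complete interrupt.
    void HandleRxComplete(RxBuffer& buf) {
        rxQueue_.push(buf.bytes);           // copy the received message into the receive queue
        buf.receivedMessage = false;
        buf.freeBuffer = true;              // clean up: buffer can be reused by the device
    }
private:
    std::queue<std::vector<std::uint8_t>> rxQueue_;  // messages received on the serial link
};
```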

Page 21:

Hardware Device Design Pattern: Motivation
Very often the lowest level of code that interfaces with the hardware is difficult to understand and maintain. One of the main reasons for this is the register-level programming model of hardware devices: very often devices require registers to be accessed in a certain sequence. Defining a class to represent the device can go a long way in simplifying the code by decoupling the low-level code and the register manipulation. It also facilitates porting of the code to a different hardware platform.

Page 22:

Hardware Device Design Pattern: Structure
The structure of the class in this design pattern largely depends upon the register programming model of the device being programmed. In most cases, this design pattern is implemented as a single class representing the device. For complex devices, the device might be modeled as a main device class with subclasses modeling different parts of the device.

Page 23:

Hardware Device Design Pattern: Sample Implementation
• Status Register (STAT): This read-only register contains the following status bits:
  • Bit 0: Transmit buffer has empty space
  • Bit 1: Receive buffer has data
  • Bit 2: Transmit underrun
  • Bit 3: Receive overrun
• Action Register (ACT): Bits in this write-only register correspond to the bits in the status register. A condition in the status register can be cleared by writing the corresponding bit as 1. Note that bit 0 is cleared automatically when writes are performed to the transmit buffer, and bit 1 is cleared automatically when reads are performed from the receive buffer. Bits 2 and 3, however, need to be cleared explicitly.
• Transmit Buffer (TXBUF): Write-only buffer into which bytes meant for transmission should be written.
• Receive Buffer (RXBUF): Read-only buffer in which received bytes are stored.
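A sketch of how this register model could be wrapped in a device class; the register offsets, the base address and the exact bit masks are illustrative assumptions derived from the description above, not a real device's datasheet.

```cpp
#include <cstdint>

// Wraps the STAT/ACT/TXBUF/RXBUF register model behind a class interface.
class SerialDevice {
public:
    explicit SerialDevice(std::uintptr_t base) : base_(base) {}

    // Status register bit positions (from the sample register model).
    static constexpr std::uint32_t kTxHasSpace = 1u << 0;
    static constexpr std::uint32_t kRxHasData  = 1u << 1;
    static constexpr std::uint32_t kTxUnderrun = 1u << 2;
    static constexpr std::uint32_t kRxOverrun  = 1u << 3;

    bool canTransmit() const { return reg(kStatOffset) & kTxHasSpace; }
    bool hasRxData()   const { return reg(kStatOffset) & kRxHasData; }

    void transmitByte(std::uint8_t b) { reg(kTxBufOffset) = b; }  // hardware clears bit 0 on write
    std::uint8_t receiveByte() { return static_cast<std::uint8_t>(reg(kRxBufOffset)); }

    // Bits 2 and 3 must be cleared explicitly via the action register.
    void clearErrors() { reg(kActOffset) = kTxUnderrun | kRxOverrun; }

private:
    // Illustrative offsets; a real device's datasheet defines these.
    static constexpr std::uintptr_t kStatOffset  = 0x00;
    static constexpr std::uintptr_t kActOffset   = 0x04;
    static constexpr std::uintptr_t kTxBufOffset = 0x08;
    static constexpr std::uintptr_t kRxBufOffset = 0x0C;

    volatile std::uint32_t& reg(std::uintptr_t offset) const {
        return *reinterpret_cast<volatile std::uint32_t*>(base_ + offset);
    }
    std::uintptr_t base_;
};
```

Consumer code only sees methods such as canTransmit() and clearErrors(), so porting to a different device means changing only this class.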

Page 24:

Resource Allocation Patterns: Resource Allocation Algorithms
• Hottest First
• Coldest First
• Load Balancing
• Future Resource Booking

Page 25:

Resource Allocation Patterns: Hottest First
In hottest-first resource allocation, the resource released last is allocated on the next resource request. To implement this last-in, first-out (LIFO) type of allocation, the list of free resources is maintained as a stack. An allocation request is serviced by popping a free resource from the stack. When a resource is freed, it is pushed onto the free resource list.

The disadvantage of this scheme is uneven utilization of resources: the resources at the top of the stack are used all the time. If allocation leads to wear and tear, the frequently allocated resources will experience a lot of it. This scheme is primarily used in scenarios where allocating a resource involves considerable setup before use. With this technique, under light load only a few resources are in use, so the other resources can be powered down or operated in low-power mode.
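A minimal sketch of a hottest-first allocator using a stack of free resource ids; representing a resource as a plain integer id is an illustrative assumption.

```cpp
#include <optional>
#include <stack>

// Hottest-first allocator: the free list is a stack, so the resource
// released most recently ("hottest") is handed out first.
class HottestFirstAllocator {
public:
    explicit HottestFirstAllocator(int count) {
        for (int id = count - 1; id >= 0; --id) freeList_.push(id);
    }
    std::optional<int> allocate() {
        if (freeList_.empty()) return std::nullopt;   // no free resource available
        int id = freeList_.top();                     // most recently released resource
        freeList_.pop();
        return id;
    }
    void release(int id) { freeList_.push(id); }      // freed resource goes on top of the stack
private:
    std::stack<int> freeList_;
};
```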

Page 26:

Resource Allocation Patterns: Coldest First
In coldest-first resource allocation, the resource that has not been allocated for the longest time is allocated first. To implement this first-in, first-out (FIFO) type of allocation, the resource-allocating entity keeps the free resources in a queue. A resource allocation request is serviced by removing a resource from the head of the queue. A freed resource is returned to the free list by adding it to the tail of the queue.

The main advantage of this scheme is even utilization of resources. Also, a freed resource does not get reused for quite a while, so inconsistencies in resource management can easily be resolved via audits.
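The same sketch adapted to coldest-first allocation, with the free list kept as a FIFO queue (again with illustrative integer ids).

```cpp
#include <deque>
#include <optional>

// Coldest-first allocator: the free list is a FIFO queue, so the resource
// that has been idle the longest is handed out first.
class ColdestFirstAllocator {
public:
    explicit ColdestFirstAllocator(int count) {
        for (int id = 0; id < count; ++id) freeList_.push_back(id);
    }
    std::optional<int> allocate() {
        if (freeList_.empty()) return std::nullopt;
        int id = freeList_.front();                   // head of the queue: coldest resource
        freeList_.pop_front();
        return id;
    }
    void release(int id) { freeList_.push_back(id); } // freed resource joins the tail
private:
    std::deque<int> freeList_;
};
```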

Page 27:

Resource Allocation Patterns: Load Balancing
In situations involving multiple resource groups, load balancing is used. Each resource group is controlled by a local resource controller. In this technique, the resource allocator first determines the most lightly loaded resource group; the resource controller of that group then performs the actual resource allocation. The main objective is to distribute the load evenly amongst the resource controllers.
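A small sketch of this idea; the load metric (number of resources currently allocated per group) and the controller interface are illustrative assumptions.

```cpp
#include <algorithm>
#include <optional>
#include <vector>

// A local controller owns one resource group and performs the actual allocation.
struct ResourceGroupController {
    int capacity = 0;
    int allocated = 0;
    std::optional<int> allocate() {
        if (allocated >= capacity) return std::nullopt;
        return allocated++;                       // local resource index within the group
    }
};

// The top-level allocator only picks the most lightly loaded group
// and delegates the allocation to that group's controller.
class LoadBalancingAllocator {
public:
    explicit LoadBalancingAllocator(std::vector<ResourceGroupController> groups)
        : groups_(std::move(groups)) {}
    std::optional<int> allocate() {
        auto lightest = std::min_element(groups_.begin(), groups_.end(),
            [](const ResourceGroupController& a, const ResourceGroupController& b) {
                return a.allocated < b.allocated;
            });
        if (lightest == groups_.end()) return std::nullopt;
        return lightest->allocate();              // delegate to the local controller
    }
private:
    std::vector<ResourceGroupController> groups_;
};
```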

Page 28:

Resource Allocation Patterns: Future Resource Booking
Here, each resource allocation is for a specified time. The allocation is valid only until the specified time is reached; once it is reached, the resource is considered free again, so the resource does not need to be freed explicitly.

This technique is used in scenarios where a particular resource needs to be allocated for short durations to multiple entities in the future. When an allocation request is received, the booking status of the resource is searched to find the earliest future time at which the request can be serviced. The resource booking table is updated with the start and end time of each allocation.
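A minimal sketch of such a booking table, under the assumptions of a single resource and abstract integer time ticks (both illustrative choices, not from the slides).

```cpp
#include <algorithm>
#include <vector>

// The booking table holds [start, end) intervals. A request is placed at the
// earliest time (at or after `earliest`) where it does not overlap an
// existing booking, and the table is updated with its start and end time.
struct Booking { long start; long end; };

class BookingTable {
public:
    long book(long earliest, long duration) {
        std::sort(bookings_.begin(), bookings_.end(),
                  [](const Booking& a, const Booking& b) { return a.start < b.start; });
        long t = earliest;
        for (const Booking& b : bookings_) {
            if (t + duration <= b.start) break;        // the request fits before this booking
            t = std::max(t, b.end);                    // otherwise try after it
        }
        bookings_.push_back({t, t + duration});        // record start and end time
        return t;                                      // allocation is valid until t + duration
    }
private:
    std::vector<Booking> bookings_;
};
```

Because every booking carries its own end time, a background sweep (or a simple comparison against the current time) is enough to treat expired bookings as free; no explicit release call is needed.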

Page 29:

Feature Coordination Patterns
Feature design involves defining the sequence of messages that will be exchanged between tasks. When designing a feature, one of the tasks in the feature should be identified as the Feature Coordinator. The main role of the Feature Coordinator is to ensure that the feature reaches a logical completion. No feature should be left in suspended animation because of message loss or the failure of a single task involved in the message interactions.

In most cases, the task coordinating the feature will run a timer to keep track of the feature's progress. If the timer expires, the coordinator takes appropriate recovery action to bring the feature execution to a logical conclusion, i.e. feature success or failure.

Feature coordination can be achieved in several ways. Some of the frequently seen design patterns are described here. The description is in terms of four tasks A, B, C and D that are involved in a feature. A is the feature coordinator in all cases.

Page 30:

Feature Coordination Patterns: Cascading Coordination

Here, on receipt of the feature initiation trigger, A handles the message and sends a trigger message to B. As part of the feature, B sends a message to C. C in turn does some action and sends a message to D. D replies to C, C replies to B, and B replies to A. Finally, A signals feature completion. Most of the time, tasks A, B and C will each keep a timer to monitor the message interaction. It can be seen that there is a cascade of sub-feature control at tasks C, B and A. The main advantage of this scheme is that if any involved task misbehaves, appropriate recovery action can be taken at C, B or A, thus isolating the failure condition. This design, however, is more complicated to implement because B and C have to share the coordination role.

Page 31:

Feature Coordination Patterns: Loose Coordination

Here, on receipt of the feature initiation trigger, A handles the message and sends a message to B. B further sends a message to C, and C in turn sends a message to D as part of the feature. D takes appropriate action and replies to A. Only the feature coordinator task A runs a timer. The main advantage of this type of coordination is that it involves fewer message exchanges, and the message handling at B and C is fairly straightforward. The disadvantage is that if some involved task misbehaves, only A times out and learns about the failure, and A has no means of isolating it.

Page 32:

Feature Coordination Patterns: Serial Coordination

Here, the feature is initiated by A sending a message to B. B completes its job and replies to A. A registers the completion of the first phase of the feature and initiates the second phase by sending a message to C. C takes some action and replies to A. A registers the completion of the second phase and initiates the next phase by sending a message to D. D then performs its job and replies to A. A keeps a timer for each phase of the feature. This scheme allows the feature coordinator task A to know about the progress of the feature at all times, so the advantage is that A can take intelligent recovery action if a failure condition hits at some point. The main disadvantage is the additional complexity at A.
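A minimal sketch of a serial feature coordinator; the task names, the phase enumeration and the timer hooks are illustrative placeholders rather than a real messaging API.

```cpp
#include <cstdio>

// Task A drives one phase at a time (B, then C, then D) and runs a timer
// per phase, so a timeout identifies exactly which phase failed.
enum class Phase { Idle, WaitingForB, WaitingForC, WaitingForD, Done, Failed };

class FeatureCoordinator {
public:
    void start()          { sendTo('B'); phase_ = Phase::WaitingForB; startPhaseTimer(); }
    void onReplyFromB()   { stopPhaseTimer(); sendTo('C'); phase_ = Phase::WaitingForC; startPhaseTimer(); }
    void onReplyFromC()   { stopPhaseTimer(); sendTo('D'); phase_ = Phase::WaitingForD; startPhaseTimer(); }
    void onReplyFromD()   { stopPhaseTimer(); phase_ = Phase::Done; }
    void onPhaseTimeout() { phase_ = Phase::Failed; /* recovery: A knows which phase failed */ }
    Phase phase() const   { return phase_; }
private:
    void sendTo(char task) { std::printf("A -> %c\n", task); }  // stand-in for a message send
    void startPhaseTimer() { /* arm a timer for the current phase */ }
    void stopPhaseTimer()  { /* cancel the timer when the reply arrives */ }
    Phase phase_ = Phase::Idle;
};
```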

Page 33:

Feature Coordination Patterns: Parallel Coordination

Here, on receipt of the feature initiation trigger, A sends trigger messages to tasks B, C and D. B, C and D perform their jobs and reply to A. In this case A may keep one timer for all the message interactions, or it may keep separate timers. The main difference from the serial coordination scheme is that the different phases of the feature do not depend on each other, so they can be initiated at the same time. As with serial coordination, intelligent recovery action can be taken if a failure condition is hit, because A knows about the feature's progress at all times. In parallel coordination, the delay in feature execution is minimized due to the parallel activation of sub-features, but parallel activation places a higher resource requirement on the system, as multiple message buffers are acquired at the same time.

Page 34:

Summary
Design patterns offer the following benefits:

• They provide a common framework for exchanging ideas.
• They reduce time to market, as designers can reuse ready-made design patterns instead of reinventing the wheel.
• They aid quality assurance, as design patterns are usually tested thoroughly.