
Decentralized Coordinated Cooperative Cache Replacement Algorithms for Social Wireless Networks

Surabattina Sunanda 1, Abdul Rahaman Shaik 2
1 Computer Science and Engineering, 2 Computer Science and Engineering
1 [email protected], 2 [email protected]

Abstract: Cooperative caching is a technique used in wireless networks to improve the efficiency of information access by reducing access latency and bandwidth usage. Social Wireless Networks (SWNETs) are formed by mobile devices, such as data-enabled phones and electronic book readers, whose users share common interests in electronic content and physically gather in public places. Caching electronic objects in such SWNETs has been shown to reduce the content provisioning cost, which depends heavily on the service and pricing relationships among the various stakeholders, including content providers (CP), network service providers, and End Consumers (EC). We present a very low-overhead decentralized algorithm for cooperative caching that provides performance comparable to that of existing centralized algorithms. Unlike existing algorithms that rely on centralized control of cache functions, our algorithm uses hints (i.e., inexact information) to allow clients to perform these functions in a decentralized fashion. This paper shows that a hint-based system performs as well as a more tightly coordinated system while requiring less overhead.

Keywords: Data Caching, Cache Replacement, SWNETs, Cooperative Caching, Content Provisioning, Ad Hoc Networks

1. INTRODUCTION

Caching is a common technique for improving the performance of distributed file systems [Howard88, Nelson93, Sandberg85]. Client caches filter application I/O requests to avoid network and server traffic, while server caches filter client cache misses to reduce disk accesses. A drawback of this organization is that the server cache must be large enough to filter most client cache misses; otherwise costly disk accesses will dominate system performance. A solution is to add another level to the storage hierarchy, one that allows a client to access blocks cached by other clients. This technique is known as cooperative caching, and it reduces the load on the server by allowing some local client cache misses to be handled by other clients.

The cooperative cache differs from the other levels of the storage hierarchy in that it is distributed across the clients and it therefore shares the same physical memory as the local caches of the clients. A local client cache is controlled by the client, and a server cache is controlled by the server, but it is not clear who should control the cooperative cache. For the cooperative cache to be effective, the clients must somehow coordinate their actions.


Wireless mobile communication is the fastest growing segment of the communication industry [1]. It has supplemented or replaced existing wired networks in many places, and a wide range of applications and new technologies [5] has stimulated this enormous growth. New wireless networks will carry heterogeneous traffic consisting of voice, video, and data. Wireless networking environments can be classified into two types of architectures: infrastructure based and ad hoc based. The former is the most commonly deployed, as it is used in wireless LANs and global wireless networks. An infrastructure-based wireless network uses fixed network access points with which mobile terminals interact for communication, which requires a mobile terminal to be within the communication range of a base station. The ad hoc network structure alleviates this problem by enabling mobile terminals to cooperatively form a dynamic network without any pre-existing infrastructure. It is convenient for accessing information available in the local area, possibly reaching a WLAN base station, and comes at no cost to users. Mobile terminals available today have powerful hardware, but battery capacity improves slowly and these powerful components reduce battery life. Therefore, adequate measures should be taken to save energy. Communication is one of the major sources of energy consumption, so reducing data traffic conserves energy for longer. Data caching has been introduced [10] as a technique to reduce data traffic and access latency. By caching data, requests can be served from the mobile clients without contacting the data source each time. Caching is also a major technique used on the web to reduce access latency, where it is implemented at various points in the network: at the top level the web server uses a cache, then comes the proxy server cache, and finally the client uses a cache in the browser.

The major contribution of this paper is to show that a cooperative caching system that relies on local hints to manage the cooperative cache performs as well as a more tightly-coordinated fact-based system. Our motivation is simple: hints are less expensive to maintain than facts and as long as hints are highly accurate, they will improve system performance. Hence, instead of maintaining and accessing global state, each client in a hint-based system gathers its own information about the cooperative cache. Thus the key challenge in designing a hint-based system is to ensure that the hints are highly accurate with minimal overhead.

In this paper, we describe a cooperative caching system that uses hints instead of facts whenever possible.

2. RELATED WORK

There is a rich body of existing literature on several aspects of cooperative caching, including object replacement, reducing cooperation overhead, and cooperation performance in traditional wired networks. The Social Wireless Networks explored in this paper, which are often formed using mobile ad hoc network protocols, differ in the caching context due to additional constraints such as topological instability and limited resources. As a result, most of the available cooperative caching solutions for traditional static networks are not directly applicable to SWNETs. Three caching schemes for MANETs have been presented in [9], [11].


In the first scheme, CacheData, a forwarding node checks passing-by objects and caches the ones deemed useful according to predefined criteria. This way, subsequent requests for the cached objects can be satisfied by an intermediate node. A problem with this approach is that storing a large number of popular objects in a large number of intermediate nodes does not scale well. The second approach, CachePath, differs in that the intermediate nodes do not store the objects; instead, they only record paths to the closest node where the objects can be found. The idea in CachePath is to reduce the latency and overhead of cache resolution by finding the location of objects. This strategy works poorly in a highly mobile environment, since most of the recorded paths become obsolete very soon. The last approach is HybridCache, in which either CacheData or CachePath is used based on the properties of the objects passing through an intermediate node. While all three mechanisms offer reasonable solutions, relying only on the nodes along an object's path is not the most efficient approach. Using a limited broadcast-based cache resolution can significantly improve the overall hit rate and the effective capacity of cooperative caching. Under such protocols, the mobile hosts share their cache contents in order to reduce both the number of server requests and the number of access misses. The concept has been extended for tightly coupled groups with similar mobility and data access patterns; this extended version adopts an intelligent Bloom filter-based peer cache signature to minimize the number of flooded messages during cache resolution. A notable limitation of that approach is that it relies on a centralized mobile support center to discover nodes with common mobility and data access patterns. Our work, on the contrary, is fully distributed: the mobile devices cooperate in a peer-to-peer fashion to minimize the object access cost.

In summary, most of the existing work on collaborative caching focuses on maximizing the cache hit rate of objects, without considering its effects on the overall cost, which depends heavily on the content service and pricing models. This paper formulates two object replacement mechanisms to minimize the provisioning cost instead of just maximizing the hit rate. Also, the validation of our protocol on a real SWNET interaction trace with dynamic partitions, and on a multiphone Android prototype, is unique compared to the existing literature. From a user selfishness standpoint, Laoutaris et al. investigate its impact on caching and the resulting mistreatment. A mistreated node is a cooperative node that experiences an increase in its access cost due to the selfish behavior of other nodes in the network. Chun et al. study selfishness in a distributed content replication strategy in which each user tries to minimize its individual access cost by replicating a subset of objects locally (up to the storage capacity) and accessing the rest from the nearest possible location. Using a game-theoretic formulation, the authors prove the existence of a pure Nash equilibrium under which the network reaches a stable state. A similar approach models distributed caching as a market sharing game.

3. Network Model and Cache Replacement Policies

3.1 Network Model

Fig. 1 illustrates an example SWNET within a university campus. End Consumers carrying mobile devices form SWNET partitions, which can be either multi-hop (i.e., MANET) as shown for partitions 1, 3, and 4, or single-hop access-point based as shown for partition 2.


A mobile device can download an object (i.e., content) from the CP's server using the CSP's cellular network, or from its local SWNET partition. We consider two types of SWNETs. The first involves stationary SWNET partitions: after a partition is formed, it is maintained long enough for the cooperative object caches to form and reach steady state. We also investigate a second type, to explore what happens when the stationarity assumption is relaxed. To investigate this effect, caching is applied to SWNETs formed using human interaction traces obtained from a set of real SWNET nodes.

Fig. 1. Content access from an SWNET in a university campus.

3.2 Cache Replacement

Caching in a wireless environment has unique constraints such as scarce bandwidth, limited power supply, high mobility, and limited cache space. Due to the space limitation, mobile nodes can store only a subset of the frequently accessed data. The availability of data in the local cache can significantly improve performance, since it overcomes the constraints of the wireless environment. A good replacement mechanism is needed to decide which items should be kept in the cache and which should be removed when the cache is full. While it would be possible to pick a random object to replace when the cache is full, system performance is better if we choose an object that is not heavily used. If a heavily used data item is removed, it will probably have to be brought back quickly, resulting in extra overhead. A good replacement policy is therefore essential to achieve high hit rates. The extensive research on caching for wired networks can be adapted to the wireless environment, with modifications to account for mobile terminal limitations and the dynamics of the wireless channel.

3.3 Cache Replacement Policies in Ad Hoc Networks

Data caching in MANETs is proposed as cooperative caching. In cooperative caching, the local cache in each node is shared among adjacent nodes, together forming a large unified cache. So, in a cooperative caching environment, mobile hosts can obtain data items not only from their local cache but also from the caches of their neighboring nodes. This aims at maximizing the amount of data that can be served from the cache, so that server delays are reduced, which in turn decreases the response time for the client. In many MANET applications, such as automated highways and factories, smart homes and appliances, and smart classrooms, mobile nodes share common interests, so sharing cache contents between mobile nodes offers significant benefits. A cache replacement algorithm greatly improves the effectiveness of the cache by selecting a suitable subset of data for caching. The available cache replacement mechanisms for ad hoc networks can be categorized into coordinated and uncoordinated, depending on how the replacement decision is made. In an uncoordinated scheme, the replacement decision is made by individual nodes: to cache incoming data when the cache is full, the replacement algorithm chooses the data items to be removed using only local parameters at each node, as sketched below.
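For concreteness, the following is a minimal sketch of an uncoordinated replacement scheme in which each node evicts using only a local parameter, here the last access time (i.e., plain LRU). The class and its structure are illustrative assumptions, not a policy prescribed in this paper.

```python
import time

class LocalCache:
    """Uncoordinated per-node cache: eviction uses only local information (last access time)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.items = {}                        # item_id -> (data, last_access_time)

    def get(self, item_id):
        entry = self.items.get(item_id)
        if entry is None:
            return None                        # local miss; the node would then query peers or the server
        self.items[item_id] = (entry[0], time.time())
        return entry[0]

    def put(self, item_id, data):
        if len(self.items) >= self.capacity:
            # Evict the locally least recently used item; no coordination with other nodes.
            victim = min(self.items, key=lambda k: self.items[k][1])
            del self.items[victim]
        self.items[item_id] = (data, time.time())
```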


3.4 Split Cache Replacement

Fig. 2. Cache partitioning in the Split Cache policy.

To realize the optimal object placement under a homogeneous object request model, we propose the following Split Cache policy, in which the available cache space in each device is divided into a duplicate segment (a λ fraction) and a unique segment (see Fig. 2). In the first segment, nodes can store the most popular objects without worrying about object duplication, while in the second segment only unique objects are allowed to be stored. The parameter λ in Fig. 2 (0 ≤ λ ≤ 1) indicates the fraction of the cache that is used for storing duplicated objects. With the Split Cache replacement policy, as soon as an object is downloaded from the CP's server, it is categorized as a unique object, since there is only one copy of this object in the network. When a node instead downloads an object from another SWNET node, that object is categorized as a duplicated object, since there are now at least two copies of that object in the network. For storing a new unique object, the least popular object in the whole cache is selected as the eviction candidate and is replaced if it is less popular than the new incoming object. For a duplicated object, however, the eviction candidate is selected only from the duplicate segment of the cache. In other words, a unique object is never evicted in order to accommodate a duplicated object.
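The following Python sketch illustrates one way the Split Cache rules above could be implemented. The class, the popularity scores, and the per-segment bookkeeping are our own illustrative assumptions, not the prototype used in this paper.

```python
class SplitCache:
    """Illustrative Split Cache: a duplicate segment (lam fraction) plus a unique segment."""

    def __init__(self, capacity, lam):
        self.capacity = capacity      # total number of (unit-size) objects the cache can hold
        self.lam = lam                # 0 <= lam <= 1: fraction reserved for duplicated objects
        self.dup = {}                 # duplicated objects: object id -> popularity
        self.uniq = {}                # unique objects: object id -> popularity

    def insert(self, obj_id, popularity, from_peer):
        """Objects fetched from a peer are duplicates; objects fetched from the CP server are unique."""
        if from_peer:
            self._insert_duplicate(obj_id, popularity)
        else:
            self._insert_unique(obj_id, popularity)

    def _insert_duplicate(self, obj_id, popularity):
        # Eviction candidates come only from the duplicate segment,
        # so a unique object is never evicted for a duplicated one.
        if len(self.dup) >= int(self.lam * self.capacity):
            if not self.dup:
                return                                  # no duplicate space at all (lam == 0)
            victim = min(self.dup, key=self.dup.get)    # least popular duplicated object
            if self.dup[victim] >= popularity:
                return                                  # incoming object is no more popular: skip it
            del self.dup[victim]
        self.dup[obj_id] = popularity

    def _insert_unique(self, obj_id, popularity):
        # The least popular object in the whole cache is the eviction candidate.
        if len(self.dup) + len(self.uniq) >= self.capacity:
            pool = {**self.dup, **self.uniq}
            victim = min(pool, key=pool.get)
            if pool[victim] >= popularity:
                return
            self.dup.pop(victim, None)
            self.uniq.pop(victim, None)
        self.uniq[obj_id] = popularity
```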

4. A Hint-based Algorithm

The previous cooperative caching algorithms rely in part on exact information, or facts, to manage the cache. Although facts allow these algorithms to make optimal decisions, they increase the latency of block accesses and the load on the managers. Our goal in designing a cooperative caching algorithm is to remove the reliance on centralized control of the cooperative cache. Clients should be able to access and replace blocks in the cooperative cache without involving a manager.

Reducing the dependence of clients on managers is achieved through the use of hints: information that only approximates the global state of the system. The decisions made by a hint-based system may not be optimal, but managing hints is less expensive than managing facts. Hints do not need to be consistent throughout the system, eliminating the need for centralized coordination of the information flow. As long as the overhead eliminated by not using facts more than offsets the effect of making mistakes, the gamble of using hints will pay off.

The remainder of this section describes the components of a hint-based algorithm. Section 4.1 describes the hint-based block lookup algorithm. Section 4.2 describes how the replacement policy decides whether or not to forward a block to the cooperative cache. Section 4.3 discusses how the replacement policy chooses the target client for forwarding a block. Section 4.4 discusses the use of the server cache to mask replacement mistakes. Finally, Section 4.5 describes the effect of the cache consistency protocol on the use of hints.


4.1 Block Lookup

When a client suffers a local cache miss, a lookup must be performed on the cooperative cache to determine whether and where the block is cached. In the previous algorithms, the manager performs this lookup, which both increases the block access time and places load on the manager and the network.

An alternative approach is to let the client itself perform the lookup, using its own hints about the locations of blocks within the cooperative cache. These hints allow the client to access the cooperative cache directly, avoiding the need to contact the manager on every local cache miss.

Based on the above, we can identify two principal functions for a hint-based lookup algorithm:

Hint Maintenance: The hints must be maintained so that they are reasonably accurate; otherwise, the overhead of looking for blocks using incorrect hints will be prohibitive.

Lookup Mechanism: Hints are used to locate a block in the cooperative cache, but the system must be able to eventually locate a copy of the block should the hints prove wrong.

Each of these functions is discussed in detail in the following sections.

4.1.1 Hint Maintenance

To make sure that hints are reasonably accurate, our strategy is to change hints only when necessary. In other words, correct hints are left untouched and incorrect hints are changed when correct information is made available.

To keep hints as correct as possible, we introduce the concept of a master copy of a block. The first copy of a block to be cached by any client is called the master copy. The master copy of a block is distinct from the block's other copies because the master copy is obtained from the server.

Based on this concept of the master copy, we enumerate two simple rules for hint maintenance:

1. When a client obtains a token for a file from a manager, it is also given a set of hints that contain the probable location of the master copy for each block in the file. The manager obtains the set of hints for the file from the last client to acquire a token for the file, because the last client is likely to have the most accurate hints.

2. When a client forwards a master copy of a block to a second client, both the clients now update their hints to show that the probable location of the master copy of the block is the second client.

The hints contain only the probable location of the master copy; hence, we ignore changes to the locations of the block's other copies. This simplifies the task of keeping the hints accurate.
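As an illustration, the two maintenance rules could be realized with a small per-client hint table along these lines; the class and method names are assumptions made for this sketch, not the system's actual code.

```python
class HintTable:
    """Per-client hints: for each block, the probable location of its master copy."""

    def __init__(self):
        self.master_location = {}   # block_id -> client believed to hold the master copy

    def on_token_granted(self, file_blocks, hints_from_manager):
        # Rule 1: along with the file token, the manager passes on the hints it
        # obtained from the last client to acquire a token for the file.
        for block_id in file_blocks:
            if block_id in hints_from_manager:
                self.master_location[block_id] = hints_from_manager[block_id]

    def on_master_forwarded(self, block_id, receiving_client):
        # Rule 2: when a master copy is forwarded, both sender and receiver call
        # this to record the receiver as the probable location of the master copy.
        self.master_location[block_id] = receiving_client
```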

4.1.2 Lookup Mechanism

Given hints about the probable location of the master copy of a block, the lookup mechanism must ensure that a block lookup is successful, regardless of whether the hints are right or wrong.


Fortunately, since all block writes go through to the server, the server always has a valid copy of a block and can satisfy requests for the block should the hints prove false. This simplifies the lookup mechanism, which is outlined below:

1. When a client has a local cache miss for a block, it consults its hint information for the block.

2. If the hint information contains the probable location for the master copy of the block, the client forwards its request to this location. Otherwise, the request is sent to the server.

3. A client that receives a forwarded request for a block consults its own hint information for the block and proceeds to Step 2.

The general idea is that each client keeps track of the probable location of the master copy of each block, and uses this information to look up blocks in the cooperative cache.
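A minimal sketch of this hint-based lookup is shown below; the client object, its hint table, and the messaging helpers (send_request, ask_server) are hypothetical stand-ins for the system's actual machinery.

```python
def lookup_block(client, block_id):
    """Steps 1-2: consult local hints; follow the hint if present, else go to the server."""
    target = client.hints.master_location.get(block_id)
    if target is None:
        return ask_server(block_id)                 # no hint: the server always has a valid copy
    return send_request(target, block_id)           # forward the request to the hinted client

def handle_forwarded_request(client, block_id):
    """Step 3: a client receiving a forwarded request serves the block or chases its own hint."""
    if client.cache.has_master(block_id):
        return client.cache.read(block_id)
    return lookup_block(client, block_id)           # consult this client's hints, falling back to the server
```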

4.2 Forwarding

When a block is ejected from the local cache of a client, the cooperative cache replacement policy decides whether or not the block should be forwarded to the cooperative cache. As discussed earlier, one of the motivations of the replacement policy is to ensure that only one copy of a block is stored in the cooperative cache. If not, the cooperative cache will contain unnecessary duplicate copies of the same block.

The previous algorithms rely on the manager to determine whether or not a block should be forwarded to the cooperative cache. A block is forwarded only if it is the sole copy of the block stored in either the local caches or the cooperative cache. Maintaining this invariant is expensive, however, requiring an N-chance client to contact the manager whenever it wishes to discard a block that is not known to be a singlet, and the GMS manager to contact a client whenever a block becomes a singlet.

To avoid these overheads, we propose a forwarding mechanism in which the copy to be forwarded to the cooperative cache is predetermined and does not require communication between the clients and the manager. In particular, only the master copy of a block is forwarded to the cooperative cache, while all other copies are discarded. Since only master copies are forwarded, and each block has only one master copy, there can be at most one copy of a block in the cooperative cache.
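The forwarding rule itself is simple enough to capture in a few lines. This sketch uses hypothetical helper names and assumes the best-guess target selection described in the next subsection.

```python
def on_local_eviction(client, block):
    """When a block leaves the local cache, only its master copy enters the cooperative cache."""
    if block.is_master:
        target = client.oldest_block_list.pick_target()   # best-guess replacement (Section 4.3)
        forward_block(target, block)
    # Non-master copies are simply dropped; since each block has exactly one
    # master copy, at most one copy of it can live in the cooperative cache.
```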

A potential drawback of the master copy algorithm is that its forwarding behavior differs from that of the previous algorithms. Instead of forwarding the last local copy of a block, as in GMS or N-chance, the master copy algorithm forwards the first, or master, copy. In some cases, this may lead to unnecessary forwardings: a block that is deleted before it is down to its last copy need not have been forwarded to the cooperative cache. The existing algorithms avoid this, while the master copy algorithm will potentially forward the block. Fortunately, our measurements show that few of the master copy forwardings are unnecessary.

4.3 Best-Guess Replacement

Once the replacement policy has decided to forward a block to the cooperative cache, it must decide the target client of this forwarding. Forwarding a block to this target client will replace a block that is either in the client's local cache or in the cooperative cache. Note that this replaced block can be a master copy, providing a means for removing master copies from the cooperative cache.

INTERNATIONAL CONFERENCE ON DEVELOPMENTS IN ENGINEERING RESEARCH, ICDER - 2014

INTERNATIONAL ASSOCIATION OF ENGINEERING & TECHNOLOGY FOR SKILL DEVELOPMENT www.iaetsd.in 7

Page 8: Iaetsd decentralized coordinated cooperative cache

Previous algorithms either choose the client at random, or rely on information from the manager to select the target. An alternative based on hints, however, can provide highly accurate replacements without requiring a centralized manager. We refer to this as best-guess replacement because each client chooses a target client that it believes has the system's oldest block. The objective is to approximate global LRU, without requiring a centralized manager or excessive communication between the clients. The challenge is that the block age information is distributed among all the clients, making it expensive to determine the current globally LRU block.

In best-guess replacement, each client maintains an oldest block list that contains what the client believes to be the oldest block on each client along with its age. This list is sorted by age. A block is forwarded to the client that has the oldest block in the oldest block list.

The high accuracy of best-guess replacement comes from exchanging information about the status of each client. When a block is forwarded from one client to another, both clients exchange the age of their current oldest block, allowing each client to update its oldest block list. The exchange of block age information during replacement allows both active clients (clients that are accessing the cooperative cache) and idle clients (clients that are not) to maintain accurate oldest block lists. Active clients have accurate lists because they frequently forward blocks. Idle clients will be the targets of the forwardings, keeping their lists up-to-date as well. Active clients will also tend to have young blocks, preventing other clients from forwarding blocks to them. In contrast, idle clients will tend to accumulate old blocks and therefore be the target of most forwarding requests.
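The oldest block list and the age exchange performed on each forwarding could look roughly like this; the data structure and method names are illustrative assumptions rather than the system's actual implementation.

```python
class OldestBlockList:
    """What this client believes to be the oldest block (and its age) on every other client."""

    def __init__(self):
        self.entries = {}                 # peer id -> (block_id, age) of that peer's oldest block

    def pick_target(self):
        # Forward to the peer believed to hold the system's oldest block (approximate global LRU).
        if not self.entries:
            return None
        return max(self.entries, key=lambda peer: self.entries[peer][1])

    def update(self, peer, block_id, age):
        # Called whenever age information is exchanged during a forwarding: both
        # sender and receiver refresh their view of each other's oldest block.
        self.entries[peer] = (block_id, age)
```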

Changes in the behavior of a client may cause the oldest block lists to become temporarily inaccurate. An active client that becomes idle will initially not be forwarded blocks, but its oldest block will age relative to the other blocks in the system. Eventually this block will be the oldest in the oldest block lists on other clients and be used for replacement. On the other hand, an idle client that becomes active will initially have an up-to-date list because of the blocks it was forwarded while idle. This allows it to accurately forward blocks. Other clients may erroneously forward blocks to the newly-active client but once they do, their updated oldest block lists will prevent them from making the same mistake twice.

Although trace-driven simulation has shown this simple algorithm to work well, there are several potential problems, including the effect of replacing a block that is not the globally LRU block and also the problem of overloading a client with simultaneous replacements.

First, since global state information is not maintained, it is possible for a client to replace a block that is not the globally LRU block. However, if the replaced block is close to the globally LRU block, the performance impact of not choosing the globally LRU block is minimal. In addition, the next section discusses a mechanism for masking any deviations from the globally LRU block.

Second, if several clients believe that the same client has the oldest block, they will all forward their blocks to that client, potentially overloading it. Fortunately, it can be shown that it is highly unlikely that the clients using the cooperative cache will repeatedly forward their blocks to the same target.


This is because clients that do forward their blocks to the same target will receive different ages for the oldest block on that target, since each forwarded block replaces a different oldest block. Over time, the clients' oldest block lists will contain different block age information even if they start out identical, reducing the probability of always choosing the same forwarding targets.

4.4 Discard Cache

One drawback of best-guess replacement is that erroneous replacements will occur. A block may be forwarded to a client that does not have the oldest block; indeed, a block may be forwarded to a client whose oldest block is actually younger than the forwarded block.

To offset these mistakes we introduce the notion of a discard cache, one that is used to hold possible replacement mistakes and thus increase the overall cache hit rate of the system. The simple algorithm used to determine whether a block has been mistakenly replaced and should be sent to the discard cache is shown in Table 1. As is evident, non-master copies are always discarded, because only master copies are accessed in the cooperative cache.

Replacements are considered to be in error when the target client of a replacement decides that the block is too young to be replaced. A client chooses to replace a block on a particular target client because it believes that client contains the oldest block. The target client considers the replacement to be in error if it does not agree with this assessment. The target determines this by comparing the replaced block's age with the ages of the blocks on its oldest block list. If the block is younger than any of the blocks on the list, then the replacement is deemed an error and the block is forwarded to the discard cache. Otherwise, the block is discarded.

The blocks in the discard cache are replaced in global LRU order. Thus the discard cache serves as a buffer to hold potential replacement mistakes. This extends the lifetimes of the blocks and reduces the number of erroneous replacements that result in an expensive disk access.

Type of block      | Action
-------------------|-----------------------
Non-master copy    | Discard
Old master copy    | Discard
Young master copy  | Send to discard cache

Table 1: Discard Cache Policy. This table lists how the hint-based replacement policy decides which blocks to send to the discard cache. A master copy is old if it is older than all blocks in the oldest block list; otherwise it is considered young. The oldest block list is the per-client list that contains what the client believes to be the oldest block on each client, along with its age.
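The Table 1 policy can also be written as a small decision function. The block fields and the oldest-block-list structure below are the illustrative ones used in the earlier sketches, not the authors' actual code; ages are assumed to grow as blocks go unused, so a larger age means an older block.

```python
def replacement_action(block, oldest_block_list):
    """Decide what the target client does with a block it has been asked to replace."""
    if not block.is_master:
        return "discard"                           # non-master copies are never kept
    oldest_known_age = max(
        (age for (_, age) in oldest_block_list.entries.values()),
        default=float("-inf"),
    )
    if block.age > oldest_known_age:
        return "discard"                           # old master copy: the replacement looks correct
    return "send_to_discard_cache"                 # young master copy: likely a replacement mistake
```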

4.5 Cache Consistency

The use of hints for block lookup raises the issue of maintaining cache consistency. One solution is to use block-based consistency, but this would require contacting the manager on every local cache miss to locate an up-to-date copy, making it pointless to use hints for block lookup or replacement. For this reason, we propose the use of a file-based consistency mechanism. Clients must acquire a token from the manager prior to accessing a file. The manager controls the file tokens, revoking them as necessary to ensure consistency. The token includes version numbers for all the file's blocks, allowing copies of the blocks to be validated individually. Once a client has the file's token, it may access the file's blocks without involving the manager, enabling the use of hints to locate and replace blocks in the cooperative cache.
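A rough sketch of such a file token and per-block validation is given below; the FileToken structure and field names are assumptions for illustration only.

```python
from dataclasses import dataclass, field

@dataclass
class FileToken:
    """File-based consistency token handed out by the manager."""
    file_id: str
    block_versions: dict = field(default_factory=dict)   # block_id -> version number

def cached_copy_is_valid(token, block_id, cached_version):
    # A cached block copy is valid if its version matches the token's version,
    # so the manager need not be contacted on every block access.
    return token.block_versions.get(block_id) == cached_version
```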


6. Conclusion

Cooperative caching is a technique that allows clients to access blocks stored in the memory of other clients. This enables some local cache misses to be handled by other clients, offloading the server and improving the performance of the system. However, cooperative caching requires some level of coordination between the clients to maximize the overall system performance. Previous cooperative caching algorithms achieved this coordination by maintaining global information about the system state. This paper shows that allowing clients to make local decisions based on hints performs as well as the previous algorithms, while requiring less overhead. The hint-based algorithm's block access times are as good as those of the previous and ideal algorithms.

7. References

[1] C. Aggarwal, J.L. Wolf, and P.S. Yu, "Caching on the World Wide Web," IEEE Trans. Knowledge and Data Eng., vol. 11, no. 1, pp. 94-107, Jan./Feb. 1999.
[2] M.K. Denko and J. Tian, "Cross-Layer Design for Cooperative Caching in Mobile Ad Hoc Networks," Proc. IEEE Consumer Communications and Networking Conf., 2008.
[3] L. Yin and G. Cao, "Supporting Cooperative Caching in Ad Hoc Networks," IEEE Transactions on Mobile Computing, vol. 5, no. 1, pp. 77-89, 2006.
[4] N. Chand, R.C. Joshi, and M. Misra, "Efficient Cooperative Caching in Ad Hoc Networks," Proc. Communication System Software and Middleware, 2006.
[5] S. Lim, W.C. Lee, G. Cao, and C.R. Das, "A Novel Caching Scheme for Internet-Based Mobile Ad Hoc Networks," Proc. 12th Int. Conf. Computer Communications and Networks (ICCCN 2003), pp. 38-43, Oct. 2003.
[6] N. Chand, R.C. Joshi, and M. Misra, "Cooperative Caching Strategy in Mobile Ad Hoc Networks Based on Clusters," International Journal of Wireless Personal Communications, special issue on Cooperation in Wireless Networks, vol. 43, no. 1, pp. 41-63, Oct. 2007.
[7] W. Li, E. Chan, and D. Chen, "Energy-Efficient Cache Replacement Policies for Cooperative Caching in Mobile Ad Hoc Networks," Proc. IEEE WCNC, pp. 3349-3354, 2007.
[8] B. Zheng, J. Xu, and D.L. Lee, "Cache Invalidation and Replacement Strategies for Location-Dependent Data in Mobile Environments," IEEE Transactions on Computers, vol. 51, no. 10, pp. 1141-1153, Oct. 2002.
[9] F. Mary Magdalene Jane, Y. Nouh, and R. Nadarajan, "Network Distance Based Cache Replacement Policy for Location-Dependent Data in Mobile Environment," Proc. Ninth Int. Conf. Mobile Data Management Workshops, IEEE Computer Society, Washington, DC, USA, 2008.
[10] A. Kumar, A.K. Sarje, and M. Misra, "Prioritised Predicted Region Based Cache Replacement Policy for Location Dependent Data in Mobile Environment," Int. J. Ad Hoc and Ubiquitous Computing, vol. 5, no. 1, pp. 56-67, 2010.
[11] B. Zheng, J. Xu, and D.L. Lee, "Cache Invalidation and Replacement Strategies for Location-Dependent Data in Mobile Environments," IEEE Transactions on Computers, vol. 51, no. 10, pp. 1141-1153, 2002.
[12] Q. Ren and M. Dunham, "Using Semantic Caching to Manage Location Dependent Data in Mobile Computing," Proc. ACM/IEEE MobiCom, pp. 210-221, 2000.
