
Reputation Bootstrapping for Trust Establishment among Web Services

Zaki Malik
Department of Computer Science
Virginia Tech, VA 24061, USA
[email protected]

Athman Bouguettaya
Department of Computer Science
Virginia Tech, VA 24061, USA
and CSIRO-ICT Center, Canberra, Australia
[email protected]


Abstract

Reputation systems rely on past information to establish trust among unknown participants. Reputation bootstrapping, i.e., assessing the reputations of newly deployed Web services (newcomers), is a major issue in service-oriented environments, as no historical information may be present about newcomers. We present different techniques to bootstrap the reputation of newcomers in a service-oriented environment in a fair and accurate manner. We also present experiment results that evaluate the proposed techniques.

1 Introduction

With the introduction of Web services, the World Wide Web is shifting from being merely a repository of data to an environment (dubbed the "service Web") where applications can be automatically invoked by Web users or other applications. A Web service is defined as a self-describing software application that can be advertised, located, and used across the Web using a set of standards [2]. It is expected that enterprises in the new service Web would no longer represent single monolithic organizations, but rather be a loose coupling of smaller applications offered by autonomous providers as Web services. However, there is a growing consensus that the Web service 'revolution' will not eventuate until trust-related issues are resolved [10]. For instance, in many emerging applications, Web services will have to automatically (with minimal or no human intervention) determine the extent to which they may trust other services before they interact with them. This is somewhat similar to decisions made by the participants of social networks when they interact with previously unknown entities. In this respect, reputation is regarded as a predictor of future behavior. Social network participants use it to ascertain an entity's trustworthiness [11, 1]. It has been shown that in Web interactions, reliable reputation systems increase users' trust, consequently stimulating sales [12]. Examples include eBay, Amazon, etc.

Reputation is a subjective assessment of a characteristic or an attribute ascribed to one entity by another, based on observations or past experiences [13]. In social networks, experiences from more than one source are assimilated to derive the reputation. Similarly, in service-oriented environments, we refer to the aggregated perceptions that the community of service requesters has for a given Web service as service reputation. However, requesters' perceptions may not always be available. For example, when a service is initially registered for business, no service has interacted with it and there is no record of its past behavior. Consequently, its reputation cannot be assessed and the question about its trust is left unanswered, which may translate to overlooking the service for future transactions. Therefore, mechanisms need to be defined that assign reputation to newly deployed services even when no historical information about their behavior is present, so that they can compete with existing services for market share. This problem is referred to as the "reputation bootstrapping" problem.


In service-oriented environments where honest and malicious service providers co-exist, finding the exact balance between fairness and accuracy for reputation bootstrapping is non-trivial. For instance, a malicious service provider may attempt to clear its (negative) reputation history by discarding its original identity and entering the system with a new one. In contrast, a service provider may be entering the system for the first time without any malicious motives. In this paper we propose a reputation bootstrapping model that is accurate (i.e., the newcomer is assigned an initial reputation that it actually deserves) and fair to both the existing services and the newcomers (i.e., no participant is wrongfully disadvantaged).

2 Related Work

Reputation is a social concept, and its applicability in developing communities or maintaining relationships in social networks has been thoroughly studied [11, 14, 6]. The computer science literature extends and builds upon this study of reputation in theoretical areas and practical applications, as a means to establish trust [3]. However, most of these works have focused on solutions for reputation storage, collection, and aggregation, incentive-based schemes to encourage feedback reporting, etc. Little attention has been given to the bootstrapping problem, and the majority of the proposed solutions assume a "running system," where reputations already exist [1]. We focus on reputation bootstrapping, and assume that existing solutions for the other facets of reputation management mentioned above are adequate. Comprehensive literature reviews are available in [6, 13, 1]. In what follows, we give a brief overview of the existing solutions for reputation bootstrapping. Since social networks, agent-based communities, auction portals, and P2P systems employ similar and usually overlapping strategies, we provide a generalized discussion of the solutions.

The approaches that consider the bootstrapping problem often adopt solutions that may not be fair to all system participants. For instance, a popular approach is based on assigning neutral or default reputation values to newly deployed participants, or newcomers [6]. This approach has the disadvantage that it can either favor existing participants or favor newcomers. If the initial reputation is set high, existing participants are disadvantaged, as the newcomer would get preference over existing participants who may have worked hard to attain their reputation. This encourages malicious providers to deploy new identities periodically for "white-washing" their bad reputation record. Thus, [15] states that "punishing," i.e., assigning a low initial reputation, is the best alternative. However, existing participants are privileged if low initial values are assigned, as a newcomer may not be able to win a consumer's favor with its low reputation [7, 13].

To the best of our knowledge, only a couple of previous works have attempted to solve the bootstrapping problem without using a default value. For example, the endorsement principle proposed in [7] states that a participant with unknown reputation may acquire reputation through the endorsement of other trusted participants (ones with high credibility), and the endorsee's actions directly affect the endorser's credibility. However, this technique may prove to be problematic, as it would not be easy for a newcomer to get itself endorsed by an existing participant. The technique proposed in [4] aggregates all transaction information on first-time interactions with newcomers. The aggregate information enables a consumer to calculate the probability of being cheated by the next newcomer. This adapts well to the current rate of white-washing in the system. Our approach is inspired by this [4] technique. However, our approach differs from this work in that we do not assume that peer services can monitor each other's interactions. We believe that such a simplifying assumption is unrealistic for the service Web. The expanse of the service Web and privacy considerations are major impediments in this regard.

3 Reputation Bootstrapping Model

A Web service exposes an interface describing a collection of operations that are network-accessible through standardized XML messaging [10]. We propose to extend the traditional (publish-discover-access) Web service model, and introduce the concept of community to aid in the bootstrapping process. A community is a "container" that groups Web services related to a specific area of interest (e.g., auto makers, car dealers) together. Communities provide descriptions of desired services (e.g., providing interfaces for services) without referring to any actual service. Ontologies are used to serve as templates for describing communities and Web services. An ontology typically consists of a hierarchical description of important concepts in a domain, along with descriptions of their properties. The notion of concept in ontologies is similar to the notion of class in object-oriented programming. Each concept $c_i$ has a set of properties $P_i = \{p_{i1}, \ldots, p_{im}\}$ associated with it that describe the different features of the class. An ontology relates classes to each other through ontology relationships. Examples of relationships include "subclassof" and "superclassof". Communities are defined by community providers as instances of the community ontology (i.e., they assign values to the concepts of the ontology). Community providers are generally groups of government agencies, non-profit organizations, and businesses that share a common domain of interest. In our model, a community is itself a service that is created, advertised, discovered, and invoked as a regular Web service, so that it can be discovered by service providers. Service providers identify the community of interest and register their services with it. We use the Web Ontology Language (OWL) for describing the proposed ontology. However, other Web ontology standards could also be used. Further details on the use of ontologies for describing communities can be found in [2].
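To make the concept/property structure concrete, the following is a minimal sketch in Python (the paper itself describes communities in OWL); the class names `Concept` and `Community` and the `register` method are illustrative assumptions, not part of the proposed ontology.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional


@dataclass
class Concept:
    """A concept c_i with its property set P_i = {p_i1, ..., p_im}."""
    name: str
    properties: Dict[str, str] = field(default_factory=dict)
    parent: Optional["Concept"] = None  # "subclassof" relationship


@dataclass
class Community:
    """One instance of the community ontology for a domain of interest."""
    domain: str                                       # e.g., "car dealers"
    concepts: List[Concept] = field(default_factory=list)
    members: List[str] = field(default_factory=list)  # registered service ids

    def register(self, service_id: str) -> None:
        """A provider identifies the community of interest and registers its service."""
        self.members.append(service_id)
```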

In our model, Web services in a particular domain (registered with the same community) may aid each other in assessing the initial reputation of a newcomer. We propose two reputation bootstrapping approaches. The first approach relies on cooperation among services, and computes the reputation of newcomers in a P2P manner. The second approach functions under a "super-peer" topology, where the community provider is responsible for assigning the newcomer's reputation. Details of the two approaches follow.

Approach I: Adapting Initial Reputation to Majority Behavior

We propose a reputation bootstrapping technique that adapts according to the behavior of the majority of services. Our approach is inspired by the technique presented in [4] for P2P systems. However, unlike [4], we do not assume that peer services can monitor each other's interactions. Hence, our model does not support a "shared" service interaction history, and the sharing of interaction histories is left at the discretion of the participating services. Also, we do not assume reciprocative actions for Web services (they may engage in one-off transactions). Moreover, we provide options to bootstrap the reputation of newcomers in cases where no service is willing to share its interaction history.

In Figure 1, we provide the stepwise details of our proposed framework. When a newcomer registers with a community to offer its services, it has no reputation record available (Steps 1 to 3 in Figure 1). Under the proposed mechanism, the consumer can bootstrap the newcomer's reputation at this point according to the rate of maliciousness in the community. The rate of maliciousness (denoted $\Re$) is defined as the ratio of the number of transactions where the providers defect to the total number of transactions. Thus, $\Re$ lies in the range $[0, 1]$. A provider's "defection" is measured after each individual transaction by the service consumer (denoted rater). If the provider performs satisfactorily in the transaction, the rater can label the transaction as "acceptable." Otherwise, the transaction is labelled as "defective." Thus, defection (denoted $D$) can be represented as a binary variable. Since service raters can differ in their total number of transactions and in the number of defective transactions experienced, we can expect a variation in the value of $\Re$ across different service raters. In essence, the value of $\Re$ depends on each rater's personal experience, and the manner in which it estimates $D$ after each transaction.

The basic idea of the proposed scheme is for the consumer to assign a high initial reputation value when $\Re$ is low, and a low reputation value when $\Re$ is high. This allows the consumer to adapt to the current state of the system (i.e., defective vs. acceptable transactions). Formally, for each service consumer $i$, $\Re$ is defined as:

$$\Re_i = \frac{D_i}{T_i} \qquad (1)$$

where $D_i$ is the number of transactions where providers have defected for consumer $i$, and $T_i$ is the total number of transactions that consumer $i$ has undertaken. Note that in defining $\Re$, we use a rater's complete transaction record, instead of using only the transactions conducted with newcomers. Since dishonest behavior is a prerequisite of white-washing, estimating $\Re$ over all the rater's transactions ensures that $\Re$ increases with every defective transaction. This in turn brings the reputation-bootstrap value for a newcomer down. Moreover, since the severity of defective transactions varies, the service rater can assign relative weights to the transactions. For example, in a "high impact" transaction where the consumer suffers a huge loss as a consequence of the provider's defection, the consumer may count two (or more) defective transactions instead of one (while increasing the $T_i$ count by only one), to increase $\Re_i$. The assignment of such weights is left at the discretion of the service rater.
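As an illustration of this bookkeeping, the sketch below shows how a rater might maintain $D_i$ and $T_i$ and derive $\Re_i$; the class and method names are hypothetical, and clamping the ratio at 1 (for the case where weighted defections exceed the transaction count) is our assumption.

```python
class RaterRecord:
    """Per-rater bookkeeping for the rate of maliciousness (Eq. 1)."""

    def __init__(self) -> None:
        self.defections = 0.0   # D_i (may grow by more than 1 for high-impact defections)
        self.transactions = 0   # T_i

    def record(self, defective: bool, impact: float = 1.0) -> None:
        """Label one completed transaction; impact >= 1 weights severe defections."""
        self.transactions += 1          # T_i grows by exactly one
        if defective:
            self.defections += impact   # a high-impact defection may count as 2 or more

    def maliciousness_ratio(self) -> float:
        """R_i = D_i / T_i, the only value shared with consumers (not D_i and T_i)."""
        if self.transactions == 0:
            return 0.0
        return min(1.0, self.defections / self.transactions)
```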

Figure 1: Reputation Bootstrapping Using an Adaptive Approach

Obtaining $D_i$ and $T_i$ from the raters poses a privacy risk, as the total transaction volume, transaction identification (defective or not, and with which provider), etc., may reveal sensitive information about the raters. Thus, in our model only $\Re_i$ is shared between the different consumers, instead of $D_i$ and $T_i$. The consumer can then aggregate the different ratios collected from the raters. A simplistic solution is to compute a weighted average of all $\Re_i$'s (Step 4 in Figure 1). The weights are decided according to the credibility of the contributing rater, which may be calculated using techniques similar to the ones discussed in [6, 1]. In Step 5, all $\Re_i$'s are aggregated, and the consumer decides whether to interact with the newcomer in Step 6a. If sufficient ratings are not submitted in Step 4, the consumer may turn to the community provider for bootstrapping the newcomer's reputation (Step 6b in Figure 1). The community provider may provide the service for free, or charge a nominal fee (Step 7a in Figure 1) for assigning the initial reputation (Step 7b in Figure 1). The initial reputation value is then communicated to the consumer in Step 8, and the consumer can make the interaction decision (Step 9). Thus, even in the absence of sufficient $\Re_i$'s, the consumer is still able to bootstrap the newcomer's reputation. Details of Step 7b follow.
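A minimal sketch of Steps 4 to 6b from the consumer's side follows. The paper does not give a closed-form mapping from the aggregated ratio to an initial reputation, so taking the complement of the credibility-weighted average is our assumption, as are the function name and the `min_raters` threshold.

```python
from typing import Dict, Optional


def bootstrap_reputation(reported_ratios: Dict[str, float],
                         credibilities: Dict[str, float],
                         min_raters: int = 3) -> Optional[float]:
    """Aggregate the R_i ratios reported by raters (Steps 4 and 5).

    Returns an initial reputation in [0, 1], or None when too few ratings
    were submitted, in which case the consumer turns to the community
    provider instead (Step 6b).
    """
    raters = [r for r in reported_ratios if credibilities.get(r, 0.0) > 0.0]
    if len(raters) < min_raters:
        return None  # fall back to the community provider

    total_weight = sum(credibilities[r] for r in raters)
    weighted_ratio = sum(credibilities[r] * reported_ratios[r] for r in raters) / total_weight

    # High rate of maliciousness -> low initial reputation, and vice versa.
    return 1.0 - weighted_ratio
```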

Approach II: Assigned Initial Reputation

The community provider may employ one of two ways in bootstrapping the newcomer's reputation: assign a default value, or evaluate the newcomer (Cases I and II in Figure 2).

Approach II - Case I: Default Initial Reputation

We extend the model presented in Figure 1 to show how a default value may be assigned as the newcomer's reputation. Upon registration, the newcomer may present some credentials that enable it to buy initial reputation from the community provider. The newcomer may belong to the same service provider group as an existing reputable service. In this case, presenting the authenticated credentials of the existing service may guarantee an initial reputation equal to that of the existing service. This is shown in steps a, b, and c in Figure 2-Case I. Endorsement techniques [7] can also be incorporated in this strategy, where newcomers may present the credentials of any service that is willing to endorse them. Alternatively, an average of all providers' reputations may be assigned. In [4], it was shown that such an averaging technique provides the best results in terms of fairness and accuracy. Without delving into much detail, we also adopt an average model in this case.
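The default-assignment logic of Case I can be sketched as below; `verified_sibling_reputation` stands for the reputation of an authenticated service from the same provider group (or of an endorser), and both the function and parameter names are ours, not the paper's.

```python
from typing import Optional, Sequence


def default_initial_reputation(verified_sibling_reputation: Optional[float],
                               community_reputations: Sequence[float]) -> float:
    """Assign a default reputation (Figure 2, Case I).

    If the newcomer presents authenticated credentials of an existing
    reputable service (same provider group, or a willing endorser), it
    inherits that service's reputation; otherwise the average over all
    providers in the community is used.
    """
    if verified_sibling_reputation is not None:
        return verified_sibling_reputation
    if not community_reputations:
        return 0.0  # no basis for a default; left to the provider's policy
    return sum(community_reputations) / len(community_reputations)
```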

Figure 2: Reputation Bootstrapping through Assignment

Approach II - Case II: Initial Reputation Evaluation

In the second case, the community provider may be asked to evaluate the newcomer. The evaluation period is determined by the community provider, and the newcomer has no knowledge of the time and number of transactions conducted during the evaluation period. It may happen that normal transactions by other consumers (who may have ascertained the newcomer's reputation through Approach I) are also conducted during the evaluation period.

Services with high credibilities (known as elders) are asked to evaluate the newcomer. The feedback is weighted according to elder credibilities, which are assessed using separate techniques [6, 1]. We assume that some form of incentive (monetary or reputation-based) is present for services to act as evaluators (Steps 1 to 3 in Figure 2-Case II). In some situations, the evaluators may be required to pay for the services of the newcomers. "Credit agencies" may aid in such situations, so that an evaluator does not suffer a monetary loss by getting a service it did not need in the first place. The evaluators send their "original" account details to the community provider (Step 4 in Figure 2-Case II). The community provider informs the credit agency of the evaluation transaction, and the agency is expected to respond with a "disposable" account number that is expected to last for only one transaction (Step 5 in Figure 2-Case II). This is analogous to the service started by American Express, where disposable account numbers were assigned to users wishing to engage in e-commerce transactions without disclosing their actual account details [9]. The community provider communicates the generated account number (acc. no.) to the chosen evaluator, to cover the invocation cost (Step 6 in Figure 2-Case II). The evaluators collect the required data for the newcomer (Step 8 in Figure 2-Case II) through interactions. Upon transaction completion, the newcomer is informed by the credit agency about the "fake" charged account (Step 9a in Figure 2-Case II), and any tangible items delivered are returned by the evaluators (Steps 8 to 11 in Figure 2-Case II). In cases where the evaluator defaults by not sending the product back within the desired amount of time, the community provider can charge the evaluator's original account and pay the newcomer. Since the newcomer has no knowledge of the authenticity of the account prior to, or during, the transaction, it is expected to behave normally, i.e., the newcomer cannot "pretend" to act fairly to mislead the evaluation.
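The sketch below covers only the final aggregation step of Case II, i.e., combining elder feedback weighted by elder credibility into the assigned initial reputation; the disposable-account protocol and item returns are omitted, and all names and the [0, 1] score scale are illustrative assumptions.

```python
from typing import Dict


def evaluate_newcomer(elder_feedback: Dict[str, float],
                      elder_credibility: Dict[str, float]) -> float:
    """Weigh each elder's evaluation score by its credibility (Figure 2, Case II).

    elder_feedback maps elder id -> score in [0, 1] collected during the
    hidden evaluation period; elder_credibility maps elder id -> weight.
    """
    total = sum(elder_credibility[e] for e in elder_feedback)
    if total == 0:
        raise ValueError("no credible elders available for evaluation")
    return sum(elder_credibility[e] * elder_feedback[e] for e in elder_feedback) / total
```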

4 Experiments

We simulate interactions on the service Web using 100 Web services that interact with each other over a period of 100 transactions. The default value for $\Re$ in the community is 10%, with the number of credible raters (C) being greater than 60%.

In the first experiment scenario, we compare our proposed "Adaptive" technique with the existing "Punishing" approach (see Section 2), and with the case where each newcomer is unconditionally trusted (denoted "None"). We use the average bootstrap success rate of the community to measure each technique's effectiveness. The bootstrap success rate (BSR) for each individual service is defined as the difference between the initial assigned reputation (using the respective technique) and the actual reputation (calculated post-transaction completion). In this scenario, we vary the rate of maliciousness ($\Re$) in the community in steps of 10%, and assume that the honesty of newcomers is proportional to $\Re$. Note that when each newcomer is assigned a high reputation, services are likely to defect, and then whitewash their reputation without any penalties. The "None" plot in Figure 3 represents this behavior. As $\Re$ increases in the community, the BSR drops. When $\Re$ is greater than 50%, the BSR reaches its minimal value, since a service consumer is not likely to get the service it requires. In the case of "Punishing" each newcomer, there is a possibility of putting the legitimate newcomers (non-whitewashers) at a disadvantage. This is shown by the initial drop in the BSR values, where the majority of the newcomers are assigned low reputations when in fact they perform honestly. However, the punishing approach works well for increasing $\Re$. This is consistent with previous work [15]. In contrast, the proposed adaptive approach works well for all values of $\Re$. Note that when $\Re$ is neither too high nor too low, the adaptive technique suffers in terms of accuracy. However, even in this worst case, the adaptive technique works as effectively as (or better than) the other techniques.

[Figure 3 plot: x-axis Rate of Maliciousness (0.1 to 1), y-axis Bootstrap Success Rate; curves for None, Punishing, and Adaptive.]

Figure 3: Comparing Bootstrapping Approaches with Increasing $\Re$

In the second experiment scenario, we vary the credibilities of raters (in reporting $\Re_i$) to study their effect on the accuracy of the proposed technique. We observe the difference between the actual $\Re$ and the calculated $\Re$ (obtained through rater testimonies) as an accuracy measure. We increase $\Re$ in a linear manner, with the defection in the community increasing at each time instance. Moreover, all the dishonest raters report values that differ by at least three points from the observed values; e.g., on a scale of 1 to 10, if 2 is observed then a value of 5 or higher is reported, while for an actual value of 4 the reported value could be 1 or lower, or 7 or higher, etc.
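For concreteness, the dishonest-reporting rule used in this scenario can be sketched as follows, on the 1-to-10 scale of the example; the uniform random choice among admissible values and the helper name are our assumptions.

```python
import random


def dishonest_report(observed: int, low: int = 1, high: int = 10, gap: int = 3) -> int:
    """Report a value that differs from the observed one by at least `gap` points."""
    candidates = [v for v in range(low, high + 1) if abs(v - observed) >= gap]
    return random.choice(candidates)

# Example: dishonest_report(2) yields 5..10; dishonest_report(4) yields 1 or 7..10.
```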

Figure 4 shows the comparison between the actual community ratios and the adaptive ratios for different rater credibilities. We can see that for the first three cases, i.e., when 10%, 20%, and 40% of the raters are dishonest, the calculated $\Re$ is close to the actual maliciousness ratio, and the effects of dishonest reports are diluted. On the other hand, when 60% or more of the raters are dishonest, there is considerable deviation in terms of accuracy.

The percentage of dishonest raters cannot be exactly determined, as it is a highly dynamic phenomenon. Evidence from previous empirical studies of eBay's reputation system (one of the most widely used reputation systems) suggests that almost 99% of the raters stay honest in the system and provide positive ratings to the providers [12]. Although such a high percentage may be attributed to the nature of eBay's business model (auction) or its reputation model (both parties rate each other), we believe it provides a rough guideline of the number of credible raters in a ratings-based reputation community. Some authors have also proposed approaches for eliciting honest ratings [5, 8]. We conclude that the proposed adaptive technique for reputation bootstrapping can control some amount of dishonesty on the part of raters, but in situations where more than 40% of the raters are dishonest, an alternate reputation bootstrapping mechanism may be required.

[Figure 4 plots: five panels (10%, 20%, 40%, 60%, and 80% of raters dishonest), each showing Defective Ratio vs. Defective Transactions.]

Figure 4: Ratio Comparisons with Different Number of Dishonest Raters for the Adaptive Approach

Figure 5 shows the reputation values assigned by the community provider for one single service provider using different techniques. For clarity, only the first ten time instances are shown. We show the assigned values for two constant values (a high one of 7.5 and a low one of 2), the average reputation, and reputation through referrals, and show how they relate to the actual provider reputations collected through evaluation. Figure 5 illustrates the case of an option that is fair to the newcomer in one instance but that may prove not to be so in the next instance. For example, consider the third time instance, where the provider's reputation is evaluated to be 5. If the newcomer is assigned a high constant (7.5), then existing services have an incentive to white-wash their low reputation for a new high one. However, if a low constant (2) is assigned, then the new service will be disadvantaged, despite the white-wash incentive being removed. Similarly, if the average reputation were assigned, it would be 3.3 (well below the provider's actual behavior). Bootstrapping reputation through referrals provides the closest results, but in some situations it is also not accurate. Moreover, referrals are expected to be scarce (due to market competition) in service-oriented environments. Thus, we conclude that evaluation provides a suitable alternative for bootstrapping the newcomer's reputation through assignment.

5 Conclusion

We have provided two techniques for bootstrapping the reputation of newly deployed Web services (newcomers). The first technique proposes to use the dishonest-transactions ratio to guide the service consumer in initializing service reputations. The second technique proposes to obtain help from the community providers in assigning an initial reputation for the newcomers. Through experimental evidence we conclude that automated reputation bootstrapping can only be efficiently employed in cases where less than 40% of the raters are dishonest.


[Figure 5 plot: x-axis Time Instance (1 to 10), y-axis Reputation; curves for Average, Constant High, Constant Low, Referral, and Evaluation.]

Figure 5: Bootstrapping Reputation through Assignment

In situations where rater dishonesty is unavoidable, or where the consumers are not restricted by time, evaluating the newcomer's reputation proves to be highly accurate and is fair to all services involved.

Acknowledgement

This work is supported by the National Science Foundation grant number 0627469.


References

[1] D. Artz and Y. Gil. A survey of trust in computer science and the semantic web. Web Semantics, 5(2):58–71, 2007.

[2] B. Benatallah, M. Dumas, Q. Z. Sheng, and A. H. Ngu. Declarative composition and peer-to-peer provisioning of dynamic web services. In 18th International Conference on Data Engineering (ICDE), pages 297–308, 2002.

[3] C. Dellarocas. The Digitization of Word-of-Mouth: Promise and Challenges of Online Feedback Mechanisms. Management Science, October 2003.

[4] M. Feldman and J. Chuang. The evolution of cooperation under cheap pseudonyms. In Seventh IEEE International Conference on E-Commerce Technology (CEC 2005), pages 284–291, 2005.

[5] R. Jurca and B. Faltings. An Incentive Compatible Reputation Mechanism. In Proc. of the 2003 IEEE Intl. Conf. on E-Commerce, pages 285–292, June 2003.

[6] S. Marti and H. Garcia-Molina. Taxonomy of trust: categorizing P2P reputation systems. Computer Networks, 50(4):472–484, 2006.

[7] E. Michael Maximilien and Munindar P. Singh. Reputation and Endorsement for Web Services. SIGecom Exchanges, 3(1):24–31, December 2002.

[8] N. Miller, P. Resnick, and R. Zeckhauser. Eliciting honest feedback: The peer prediction method. Management Science, September 2005.

[9] CNet News. AmEx unveils disposable credit card numbers. http://news.com.com/2100-1017-245428.html, 2006.

[10] M. P. Papazoglou and D. Georgakopoulos. Service-Oriented Computing. Communications of the ACM, 46(10):25–65, 2003.

[11] J. M. Pujol, R. Sanguesa, and J. Delgado. Extracting Reputation in Multi-agent Systems by Means of Social Network Topology. In Proc. of the 1st International Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS), pages 467–474, Bologna, Italy, 2002.

[12] P. Resnick and R. Zeckhauser. Trust Among Strangers in Internet Transactions: Empirical Analysis of eBay's Reputation System. The Economics of the Internet and E-Commerce, Ed. M. Baye, Advances in Applied Microeconomics, 11, 2002.

[13] Y. Wang and J. Vassileva. Toward Trust and Reputation Based Web Service Selection: A Survey. Multi-agent and Grid Systems Journal (MAGS), special issue on "New tendencies on Web Services and Multi-agent Systems", to appear, 2007.

[14] B. Yu and M. P. Singh. Social Networks and Trust: Detecting Deception in Reputation Management. In ACM AAMAS'03, July 14–18, 2003.

[15] G. Zacharia, A. Moukas, and P. Maes. Collaborative Reputation Mechanisms in Electronic Marketplaces. In Proceedings of the 32nd Annual Hawaii International Conference on System Sciences, 1999.
