
Refining Key Establishment

Christoph Sprenger
Dept. of Computer Science
ETH Zurich, Switzerland
[email protected]

David Basin
Dept. of Computer Science
ETH Zurich, Switzerland
[email protected]

ABSTRACT

We use refinement to systematically develop a family of key establishment protocols using a theorem prover. Our development spans four levels of abstraction: abstract security properties, messageless guard protocols, protocols communicating over channels with security properties, and protocols secure with respect to a Dolev-Yao intruder. The protocols we develop are Needham-Schroeder Shared Key, the core of Kerberos 4 and 5, and Denning-Sacco, and include realistic features such as key confirmation, replay caches, and encrypted tickets. Our development highlights that messageless guard protocols provide a fundamental abstraction for bridging the gap between security properties and message-based protocol descriptions. It also shows that the refinement approach presented in [33] can be applied, with minor adaptation, to families of key establishment protocols and that it scales to protocols of nontrivial size and complexity.

1. INTRODUCTION

The fact that the development of even simple security protocols is error-prone motivates the use of formal methods to ensure their security. The past decade has witnessed significant progress in post-hoc verification methods for protocol security based on model checking and theorem proving, such as [3, 5, 17, 18]. However, methods for developing security protocols lag behind, and protocol design remains more an art than a science.

In our view, a development method should be systematic and hierarchical, meaning that the development is decomposed into a number of smaller steps that are easy to understand. These steps should span well-defined abstraction levels, leading the developer from the requirements down to cryptographic protocols. The resulting protocols should be secure in well-established attacker models, and such claims should ideally be supported by machine-checked formal proofs. Stepwise refinement provides such a hierarchical development method. However, most existing refinement-based approaches to developing security protocols [10, 11, 14, 16, 19, 24] fall short of one or more of these desiderata.

We have recently proposed a method for developing security protocols by stepwise refinement so that they are correct by construction [33]. The method consists of a four-level refinement strategy (summarized in Table 1), which allows the developer to build models that incrementally incorporate the system requirements and environment assumptions. Each model may be considered as an idealized functionality for the subsequent refinements. Safety properties, once proved for a model, are preserved by further refinements. These include (reachability-based) secrecy and authentication. We embedded this method in the theorem prover Isabelle/HOL and previously used it to derive some basic unilateral entity authentication protocols and a simplified Otway-Rees key transport protocol without key confirmation.

Technical Report 736, Department of Computer Science, ETH Zurich, September 2011

       name                  features
  L0   security properties   global, protocol-independent
  L1   guard protocols       roles, local store, no messages
  L2   channel protocols     security channels, intruder
  L3   crypto protocols      crypto, Dolev-Yao intruder

Table 1: Refinement levels

In this paper, we show how to systematically develop an entire family of key transport protocols. This family consists of the Needham-Schroeder Shared-Key protocol (NSSK), the core of Kerberos 4 and Kerberos 5, and the Denning-Sacco protocol. Compared to the protocols discussed in [33], these protocols are significantly more complex in size and message structure and exhibit a number of additional features and security properties, such as the use of timestamps, replay protection caches, encrypted tickets (double encryption), dynamically created communication channels, key confirmation, key freshness, and key recentness.

A central and novel feature of our approach is the use of guard protocols (L1) as an intermediate abstraction linking security properties (L0) and message-based protocols (L2-L3). Guard protocols enable the straightforward abstract realization of security goals by adding security guards as necessary conditions for the execution of certain protocol steps. Different kinds of security guards ensure the preservation of different properties, such as secrecy, authentication, and recentness. For example, key secrecy means that only authorized agents may know a key. Accordingly, steps of guard protocols where an agent A learns a key K contain a guard requiring that A is authorized to know K. For authentication, there are guards ensuring that the local state of an agent (partially) agrees with the state of another agent.

The security guards for secrecy and authentication communicate with other agents by accessing their local stores. This abstraction simplifies proofs, but is not directly implementable. Hence, we implement these guards at Level 2 by receiving messages on channels with intrinsic security properties. The associated refinement proof naturally gives rise to invariants stating that the receipt of channel messages implies the security guards they implement. These invariants precisely state the security properties guaranteed by the messages. For example, a message containing a key K received on a confidential channel to agent A may implement a guard authorizing A to learn K. The corresponding invariant guarantees that A is authorized to learn K from this message.

Our contributions are threefold. First, we show how to develop an entire family of key transport protocols from requirements down to protocols that are secure against a standard Dolev-Yao intruder. We model realistic features that are often abstracted away, such as replay prevention caches for timestamped messages, to achieve strong properties like injective authentication. All models and proofs in this paper are formalized in the Isabelle/HOL theorem prover.

Second, our development provides evidence that guard protocols constitute a fundamental abstraction that bridges the gap between security properties and standard message-based protocol descriptions. In other approaches, the guarantees about protocol messages given by the invariants mentioned above and the associated reasoning are usually stated informally (if at all). By formalizing them, our approach fosters clear protocol designs and abstract, simple security proofs. Moreover, in message-based protocol descriptions, secrecy and authentication are often not clearly separated (e.g., when using a secure channel providing both properties) or are interdependent (e.g., due to a layered use of cryptographic keys and operations). This appears to be a major source of complexity and errors and makes security protocols hard to design and understand. In contrast, guard protocols realize secrecy and authentication properties abstractly, independently, and in a straightforward way. These features of guard protocols facilitate the formal development of secure protocols and underscore the central role they can play in property-driven design approaches.

Third, our development shows that our method scales to protocols of realistic complexity. We only need one extension: dynamically created channels at abstraction level 2. The protocols in our development share both structure and properties, as the refinement graph of our development indicates (see Figure 1). Property preservation through refinements avoids proof duplication.

2. PRELIMINARIES

2.1 Isabelle/HOL and notation

Isabelle is a generic, tactic-based theorem prover. We have used Isabelle's implementation of higher-order logic, Isabelle/HOL [30], for our developments. To enhance readability, we will use standard mathematical notation where possible and blur the distinction between types and sets.

We use ≡ for definitional equality of terms and ≜ for types. We define partial functions by A ⇀ B ≜ A → B⊥, where B⊥ ≜ ⊥ | Some(B). We usually omit the constructor Some. The term f(x ↦ y) denotes the function that behaves like f, except that it maps x to y. The term f[X] is the image of a set X under a function or binary relation f. The inductive type of lists is defined by [A] ≜ [] | A#[A], where [] is the empty list and a#l is the list formed by prefixing the element a ∈ A to the list l ∈ [A]. We write [1, 2, 3] for 1#2#3#[]. We define multisets over A by multiset(A) ≜ A → ℕ. For m ∈ multiset(A), the term m(e) indicates the multiplicity of e in m. Record types may be defined, such as point ≜ (| x ∈ ℕ, y ∈ ℕ |), with elements like r = (| x = 1, y = 2 |) and projections r.x and r.y. The term r(| x := 3 |) denotes r with x updated to 3, i.e., (| x = 3, y = 2 |). The type cpoint ≜ point + (| c ∈ color |) extends point with a color field. For record types T and U including fields F, we define ΠF ≡ {(r, s) ∈ T × U | ⋀x∈F r.x = s.x}.
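To make this notation concrete, here is an illustrative Python analogue of partial-function update, image, multisets, and record update; all names are hypothetical and this is not part of the paper's Isabelle/HOL development.

```python
from collections import Counter
from dataclasses import dataclass, replace

# Partial function A ⇀ B as a dict; f(x ↦ y) is a non-destructive update.
f = {1: "a", 2: "b"}
f_upd = {**f, 1: "c"}          # f(1 ↦ "c")

# Image f[X] of a set X under a binary relation given as a set of pairs.
def image(rel, X):
    return {b for (a, b) in rel if a in X}

assert image({(1, "a"), (2, "b")}, {1}) == {"a"}

# Multisets m ∈ multiset(A): m(e) is the multiplicity of e.
m = Counter(["sig", "sig", "other"])
assert m["sig"] == 2

# Records with projections and update, like r(| x := 3 |).
@dataclass(frozen=True)
class Point:
    x: int
    y: int

r = Point(x=1, y=2)
r3 = replace(r, x=3)           # (| x = 3, y = 2 |)
assert (r3.x, r3.y) == (3, 2)
```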

2.2 Specifications and refinement

A development by refinement starts from a set of system requirements and environment assumptions. We then incrementally construct a series of models, resulting in a system that fulfills the requirements provided it runs in an environment satisfying the assumptions. The crucial point is that requirements and assumptions, once established, are preserved by subsequent refinements (Proposition 1). We now summarize our theory of refinement, developed in Isabelle/HOL, which borrows elements from [1, 2].

Our models are (structured) transition systems of the form T = (Σ, Σ0, →). The state space Σ is a record type, i.e., a set of tuples of state variables, Σ0 ⊆ Σ are the initial states, and the transition relation → is a finite union of parametrized relations, called events. Here is a prototypical event:

evt(x̄) ≡ {(s, s′) | G(x̄, s) ∧ s′.v̄ := f̄(x̄, s)},

where ·̄ denotes vectors. Events consist of a conjunction of guards G(x̄, s) and an action s′.v̄ := f̄(x̄, s) with update functions f̄. The guards depend on the parameters x̄ and the current state s and determine when the event is enabled. The action is syntactic sugar denoting the relation s′ = s(| v̄ := f̄(x̄, s) |), i.e., the simultaneous assignment of the values f̄(x̄, s) to the variables v̄ in state s, yielding state s′.
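The guard/action scheme above can be sketched in Python over dict-valued states; the event and variable names below are hypothetical, and the paper's actual models are Isabelle/HOL relations.

```python
# A guarded event: enabled iff the guard holds in the current state;
# the action is a simultaneous assignment to the listed variables.
def evt(guard, action):
    def step(s, *params):
        if not guard(s, *params):
            return None                     # event not enabled
        return {**s, **action(s, *params)}  # s' = s(| v := f(x, s) |)
    return step

# Toy example: transfer(x) moves x units when at least x are available.
transfer = evt(
    guard=lambda s, x: s["src"] >= x,
    action=lambda s, x: {"src": s["src"] - x, "dst": s["dst"] + x},
)

s = {"src": 2, "dst": 0}
assert transfer(s, 3) is None                   # guard fails
assert transfer(s, 2) == {"src": 0, "dst": 2}   # simultaneous update
```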

Our notion of refinement is based on standard simulation [26]. Proving a refinement requires the definition of a simulation relation R ⊆ Σa × Σc between the abstract and concrete state spaces Σa and Σc. We must show that each concrete initial state is related to some abstract initial state, i.e., Σc,0 ⊆ R[Σa,0]. Furthermore, for each concrete event evtc(x̄) (with guards Gc, state variables v̄, and update functions f̄c) we identify an abstract event evta(z̄) (with guards Ga, state variables ū, and update functions f̄a) simulating it, i.e., R; evtc(x̄) ⊆ evta(w̄(x̄)); R, where ';' is relational composition and w̄(x̄) are the witnesses that construct parameters for evta from those of evtc. This condition decomposes into two proof obligations, called guard strengthening and action refinement, both under the premises (s, t) ∈ R and Gc(x̄, t).

1. Ga(w̄(x̄), s)   (GRD)
2. (s(| ū := f̄a(w̄(x̄), s) |), t(| v̄ := f̄c(x̄, t) |)) ∈ R   (ACT)

We assume that all models include a special event skip (the identity relation), used to simulate new concrete events.

To relate models with different state spaces, we equip them with an observation function obs : Σ → O, mapping states in Σ to observations in O. The resulting specifications have the form S = (T, obs), where T = (Σ, Σ0, →) is a transition system. We include observations in refinement as follows. We say Sc = (Tc, obsc) refines Sa = (Ta, obsa) using the simulation relation R ⊆ Σa × Σc and the mediator function π : Oc → Oa, written Sc ⊑R,π Sa, if Tc simulates Ta as defined above and R respects observations mediated by π, i.e., obsa(s) = π(obsc(t)) for all (s, t) ∈ R. Sc refines Sa using π, written Sc ⊑π Sa, if there is an R such that Sc ⊑R,π Sa.

We consider two types of invariants: internal ones are supersets of reach(S), the set of states reachable from initial states, and external ones are supersets of oreach(S) ≡ obs[reach(S)]. We use internal invariants to strengthen simulation relations in refinement proofs. However, in general, only external invariants are preserved by refinement.
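For a finite system, reach(S) and oreach(S) can be computed directly; the following Python sketch uses a hypothetical toy counter model (the paper of course reasons about these sets deductively in Isabelle/HOL).

```python
# Brute-force reach(S) and oreach(S) for a finite transition system.
def reach(init, events):
    seen, frontier = set(init), list(init)
    while frontier:
        s = frontier.pop()
        for ev in events:
            for s2 in ev(s):
                if s2 not in seen:
                    seen.add(s2)
                    frontier.append(s2)
    return seen

def oreach(init, events, obs):
    return {obs(s) for s in reach(init, events)}

# Toy counter that never exceeds 3; the observation hides all but parity.
inc = lambda s: {s + 1} if s < 3 else set()

R = reach({0}, [inc])
assert R == {0, 1, 2, 3}
# An internal invariant is any superset of reach(S):
assert R <= set(range(10))
# An external invariant is any superset of oreach(S) = obs[reach(S)]:
assert oreach({0}, [inc], obs=lambda s: s % 2) <= {0, 1}
```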

channel type          dot notation     channel message
insecure              A → B : M        Insec(M)
secure (static)       A •→• B : M      Secure(A, B, M)
authentic (dynamic)   A •K→ B : M      dAuth(K, M)

Table 2: Channel notation and messages

Proposition 1. If Sc ⊑π Sa and oreach(Sa) ⊆ J, then π[oreach(Sc)] ⊆ J.

2.3 Security protocol refinement strategy

We summarize our four-level refinement strategy for security protocols [33] (cf. Table 1). We define a type of agents, agent. We assume a non-empty subset bad of dishonest agents, with complement good, and a server S ∈ good.

Level 0 – Security properties. At the highest abstraction level, we construct abstract, protocol-independent models of security properties such as secrecy and authentication. We prove their desired properties as external invariants. Refinements of these models therefore inherit these properties.

Level 1 – Guard protocols. We introduce protocol roles and their local states. We realize security properties using security guards. Some security guards are non-local in that they read relevant values (such as keys and nonces) directly from their peer's local state. There are neither communication channels nor messages, and there is no active intruder.

Level 2 – Channel protocols. We model protocols using communication channels with associated security properties. For informal use, we adopt the notation of [25] (Table 2). We write A → B for an insecure channel from agent A to agent B. The ": M" indicates that the message M is sent on the channel. A static secure channel A •→• B provides confidentiality to A (only B can read messages) and authenticity to B (only A can send messages). On a dynamic authentic channel A •K→ B : M, A can authentically transmit messages to B provided they both know the key K. Such channels are created by dynamically generated (session) keys. We omit other channel types that are not needed here.

We model channel messages by a type chmsg, with one constructor for each channel type. The last argument M of each constructor is a non-cryptographic payload message. For example, Secure(A, B, M) denotes a secure message from A for B. We also say that M is sent on a secure channel.

The intruder can eavesdrop messages on insecure and authentic channels and on static secure channels with a dishonest receiver. The set of these messages defines the intruder knowledge, from which he can fake messages on insecure channels, on dynamic authentic channels whose key he knows, and on secure channels with a dishonest endpoint.
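The eavesdropping and faking rules just described can be sketched as follows; this is an illustrative Python rendering with tagged tuples and made-up agent names, not the Isabelle chmsg model.

```python
bad = {"Eve"}                            # dishonest agents (hypothetical)

def eavesdrop(chmsgs):
    """Messages the intruder can observe, per the rules above."""
    ik = set()
    for m in chmsgs:
        tag = m[0]
        if tag in ("Insec", "dAuth"):            # insecure / authentic: readable
            ik.add(m)
        elif tag == "Secure" and m[2] in bad:    # static secure, dishonest receiver
            ik.add(m)
    return ik

def can_fake(m, known_keys):
    """Can the intruder inject message m?"""
    tag = m[0]
    if tag == "Insec":
        return True
    if tag == "dAuth":                   # needs the channel's session key
        return m[1] in known_keys
    if tag == "Secure":                  # needs a dishonest endpoint
        return m[1] in bad or m[2] in bad
    return False

ik = eavesdrop({("Insec", "M1"), ("Secure", "A", "B", "M2"),
                ("Secure", "A", "Eve", "M3"), ("dAuth", "K", "M4")})
assert ("Secure", "A", "B", "M2") not in ik      # honest secure channel is opaque
assert ("Secure", "A", "Eve", "M3") in ik        # dishonest receiver leaks
assert not can_fake(("dAuth", "K", "M5"), known_keys=set())
```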

Level 3 – Cryptographic protocols. We model concrete protocols and the Dolev-Yao intruder using a standard theory of cryptographic messages due to Paulson [31]. The type of messages, msg, is defined inductively from agents A ∈ agent, nonces N ∈ nonce, keys K ∈ key, pairs {|M1, M2|}, and encryptions {|M|}K. The term shr(A) denotes the symmetric key that A shares with the server S. To formalize protocol properties and the intruder, we use the standard closure operators parts, analz, and synth on sets of messages. The term parts(H) closes H under submessages, analz(H) closes H under submessages accessible using the keys in H, and synth(H) closes H under message composition.
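The difference between parts and analz can be made concrete with a small fixed-point computation; this Python sketch mirrors the closure rules stated above (the Isabelle theory defines them inductively, and the encoding here is an assumption of this sketch).

```python
# Messages as tagged tuples: ("Agent", A), ("Nonce", N), ("Key", K),
# ("Pair", M1, M2), ("Crypt", K, M).

def close(H, submessages):
    """Least fixed point of a submessage rule over a message set H."""
    out = set(H)
    changed = True
    while changed:
        changed = False
        for m in list(out):
            for s in submessages(m, out):
                if s not in out:
                    out.add(s)
                    changed = True
    return out

def parts(H):
    """Close H under submessages, looking through all encryptions."""
    return close(H, lambda m, _:
                 m[1:] if m[0] == "Pair" else ([m[2]] if m[0] == "Crypt" else []))

def analz(H):
    """Close H under submessages reachable with the keys in H."""
    def rule(m, out):
        if m[0] == "Pair":
            return m[1:]
        if m[0] == "Crypt" and ("Key", m[1]) in out:
            return [m[2]]
        return []
    return close(H, rule)

secret = ("Nonce", "Na")
H = {("Crypt", "K", secret)}
assert secret in parts(H)                      # parts sees through encryption
assert secret not in analz(H)                  # analz needs the key
assert secret in analz(H | {("Key", "K")})     # ... which unlocks the ciphertext
```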

3. REQUIREMENTS AND ASSUMPTIONS

Our requirements and assumptions for (server-based) key transport protocols are as follows. The first three requirements are mandatory and must be satisfied by all protocols we consider. The last three requirements are optional.

Requirement R1 (Key distribution). The server generates and distributes a fresh session key to an initiator and a responder.

Requirement R2 (Key secrecy). Only authorized agents may learn a session key, unless one of them is dishonest, in which case other dishonest agents may also learn it.

The next two requirements cover authentication properties. These are formalized in the next section as injective or non-injective agreements [23].

Requirement R3 (Server authentication). The initiator and the responder each authenticate the server on the session key and possibly on additional data.

Requirement R4 (Key confirmation). The initiator and the responder authenticate each other on the session key and possibly on additional data, thereby confirming their key knowledge to each other.

Two additional (and independent) requirements concern the freshness and recentness of the session key. A key is fresh if it is only used in a single session, and recent if its lifetime does not exceed a specified limit.

Requirement R5 (Key freshness). The initiator and responder obtain assurance that the session key is fresh.

Requirement R6 (Key recentness). The initiator and responder obtain assurance that the session key is recent.

We assume a standard Dolev-Yao intruder, which we identify, as usual, with the communication network. Moreover, we make two assumptions about agent corruption and the cryptographic setup.

Assumption A1 (Dolev-Yao intruder). The intruder controls the network. He receives all messages sent, and he can build (synth) and send messages composed from parts obtained by decomposing (analz) received messages using the cryptographic keys he knows.

Assumption A2 (Static corruption). An arbitrary fixed subset of agents is corrupted, whereby their long-term keys are exposed to the intruder.

Assumption A3 (Cryptographic setup). Requisite cryptographic keys are distributed prior to protocol execution.

4. DEVELOPMENT OVERVIEW

We concretize our refinement strategy for deriving different server-based key establishment protocols: the Needham-Schroeder Shared-Key (NSSK) protocol [28], the Denning-Sacco protocol [21], and core versions of Kerberos 4 [34] and Kerberos 5 [29]. Figure 1 summarizes our development: each node represents a model and each arc m → m′ represents a refinement m ⊑π m′ for a given mediator function π (not shown). The superscripts refer to the requirements established, where i and r denote the initiator and responder.

At Level 0, we have abstract models of secrecy (s0) and authentication (a0i, a0n). At Level 1, our first guard protocol, kt1, abstractly models server-based secret key transport (R1, R2). It refines the secrecy model s0 and is an ancestor of all key transport protocols that we have derived. The model kt1 provides no guarantee of the session key's authenticity, freshness, or recentness. Hence, we refine kt1 into further guard protocols that establish authentication properties (R3, R4) and use freshness identifiers [22], namely nonces or timestamps, to prevent replays and guarantee key freshness and recentness (R5, R6). We do this in two stages.

[Figure 1: Refinement graph. L0: s0, a0i, a0n; L1: kt1, kt1in, kt1nn, nssk1, krb1, ds1; L2: nssk2, krb2, ds2; L3: nssk3d, nssk3, krb3d, krb3iv, krb3v, ds3d, ds3. Superscripts on nodes indicate the requirements each model establishes.]

In the first stage, we refine kt1 into two different models that realize server authentication (R3). In the first model, kt1in, the initiator injectively agrees with the server on the session key, while the responder non-injectively agrees with the server. The injective agreement and secrecy entail key freshness (R5) for the initiator. In the second model, kt1nn, both initiator and responder achieve non-injective agreements with the server. The "additional data" agreed upon is a parameter in these models. We establish each authentication property by refining an L0 model, a0n or a0i, using a different mediator function. This explains the existence of multiple paths between some pairs of models in Figure 1.

In the second stage, we refine the model kt1in into nssk1 and krb1 and establish key confirmation (R4). We achieve this by adding protocol steps and proving mutual agreement between the initiator and the responder on the key and other data. In nssk1, these agreements are injective due to the use of nonces. In krb1, we use timestamps to ensure key recentness (R6). A replay prevention cache allows the responder to obtain an injective agreement with the initiator. Key freshness (R5) for the responder relies on both authentication and secrecy properties. Finally, we also refine the model kt1nn into ds1 using timestamps to obtain key recentness. At this point, all requirements are established. The remaining two levels realize the environment assumptions (A1)-(A3), making the protocol fit for a hostile distributed environment.

At Level 2, we construct the three channel-based models nssk2, krb2, and ds2, where the roles exchange channel messages instead of reading each other's memory. The server distributes the session key on static secure channels to the initiator and the responder, and key confirmation is realized in nssk2 and krb2 using dynamic authentic channels protected by the session key.

At Level 3, we replace the channel messages by cryptographic messages on an insecure channel. We implement the static secure channels by symmetric encryption with long-term keys and the dynamic authentic channels by encryption with the session key. The models at this level differ in their handling of the ciphertext containing the responder's session key (called a ticket). In nssk3d, ds3d, and krb3d, the server sends the ticket directly to the responder. In the other models, the communication topology changes: the server sends the ticket to the initiator, who forwards it to the responder. While in krb3v the ticket is sent alongside the ciphertext containing the initiator's session key, it appears inside that ciphertext in the models nssk3, ds3, and krb3iv.

protocol      model    R1  R2  R3   R4   R5  R6
NSSK          nssk3    ✓   ✓   i/n  i/i  ✓   -
Kerberos 4    krb3iv   ✓   ✓   i/n  n/i  ✓   ✓
Kerberos 5    krb3v    ✓   ✓   i/n  n/i  ✓   ✓
Denning-S.    ds3      ✓   ✓   n/n  -    -   ✓

Table 3: L3 models and their properties

Table 3 summarizes the requirements achieved by the final protocols. In the columns for the authentication requirements R3 and R4, 'i' means injective and 'n' means non-injective agreement. The slash separates initiator and responder guarantees. The models with names ending in 'd' achieve the same properties as their listed siblings.

Each of the following sections presents a general methodology for modeling and reasoning at the respective abstraction level and develops the concrete models at that level. We focus on the models typeset in boldface in Figure 1, leading to the core versions of Kerberos.

5. SECURITY PROPERTIES (L0)

We present our abstract models of secrecy and entity authentication. These models are protocol-independent and constitute Level 0 of our refinement strategy. Each protocol development starts with the formalization of its security requirements. This is achieved by appropriately instantiating the Level 0 models. We will later show that our guard protocol models at Level 1 refine these instantiated models, thus establishing the respective requirements (by Proposition 1).

5.1 Secrecy

We introduce two state variables, kn and az, both relations between data (of polymorphic type δ) and agents, where (d, A) ∈ s.kn means that agent A knows data d in state s, and (d, A) ∈ s.az means that agent A is authorized to know data d in state s. The entire state is observable.

Σs0(δ) ≜ (| kn ∈ P(δ × agent), az ∈ P(δ × agent) |)

Secrecy can be expressed as the following property, which states that all knowledge is authorized.

secrecy ≡ {s | s.kn ⊆ s.az}

The model s0 has one event for secret generation and one for secret learning. The former is parametrized by the data d, an agent A, and the intended group G of agents sharing d.

gens0(d, A, G) ≡ {(s, s′) |
  d ∉ dom(s.kn) ∧ A ∈ G ∧
  s′.kn := s.kn ∪ {(d, A)} ∧
  s′.az := s.az ∪ {(d, B) | B ∈ G ∨ G ∩ bad ≠ ∅}}

The guards require that d is fresh and that A belongs to the group G. The first action adds the pair (d, A) to the knowledge in s′.kn. The second action updates the authorization relation with {d} × G if all agents in G are honest and with {d} × agent otherwise. That is, if the group G contains a dishonest member, there is no point in restricting access to d.

In the secret-learning event, an agent B learns the secret d, provided that B is authorized to learn d.

learns0(d, B) ≡ {(s, s′) | (d, B) ∈ s.az ∧ s′.kn := s.kn ∪ {(d, B)}}

From a secrecy perspective, it is irrelevant from whom B learns d. Authentication aspects will be covered separately. The model s0 as defined clearly preserves secrecy.

Proposition 2. oreach(s0) ⊆ secrecy.
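The two events of s0 and the secrecy property can be sketched executably; the following Python rendering uses hypothetical agent names and pair-of-sets states, and merely illustrates (not proves) the invariant of Proposition 2.

```python
bad = {"Eve"}                           # dishonest agents (hypothetical)
AGENTS = {"A", "B", "S", "Eve"}

# State s = (kn, az), both sets of (data, agent) pairs.
def gen(s, d, A, G):
    kn, az = s
    if d in {x for (x, _) in kn} or A not in G:       # guards: d fresh, A ∈ G
        return None
    authorized = G if not (G & bad) else AGENTS       # bad member => no restriction
    return (kn | {(d, A)}, az | {(d, B) for B in authorized})

def learn(s, d, B):
    kn, az = s
    if (d, B) not in az:                              # guard: B authorized
        return None
    return (kn | {(d, B)}, az)

def secrecy(s):
    kn, az = s
    return kn <= az                                   # all knowledge is authorized

s0 = (set(), set())
s1 = gen(s0, "Kab", "S", {"S", "A", "B"})
assert secrecy(s1)
assert learn(s1, "Kab", "Eve") is None                # Eve is not authorized
s2 = learn(s1, "Kab", "A")
assert secrecy(s2)
```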

The instantiation of the polymorphic type of data of the model s0 to keys provides an abstract model of key distribution and key secrecy. Refining this model will establish requirements R1 and R2.

5.2 Authentication

We formulate two models, a0n and a0i, that represent a minimal, extensional variant of Lowe's injective and non-injective agreement [23] using signals, which indicate particular stages of each role's progress (e.g., termination). The state record has as its single field a multiset of signals, sigs.

signal(δ) ≜ Running(list(agent) × δ) | Commit(list(agent) × δ)

Σa0(δ) ≜ (| sigs ∈ multiset(signal(δ)) |)

There are two signals, Running(h, d) and Commit(h, d), where h is a list of agents and d is some data of polymorphic type δ, instantiated later. The agreement on the data d will hold if the agents in h are honest. By convention, the agreement is between the first two agents in h.

Non-injective agreement states that if the agents in h are honest and there is a Commit(h, d) signal, then there is a matching Running(h, d) signal.

niagreea0n ≡ {s | ∀h, d. h ⊆ good ∧ s.sigs(Commit(h, d)) > 0 → s.sigs(Running(h, d)) > 0}

Injective agreement strengthens this by requiring that the number of Commit(h, d) signals is not greater than the number of matching Running(h, d) signals.

iagreea0i ≡ {s | ∀h, d. h ⊆ good → s.sigs(Commit(h, d)) ≤ s.sigs(Running(h, d))}

The models a0n and a0i have two events, running(h, d) and commit(h, d), which add the corresponding signal to the multiset s.sigs. A guard in the commit(h, d) event ensures that adding a Commit(h, d) signal to s.sigs preserves the respective property above. The guard in commita0n(h, d) requires the existence of a running signal if the agents in h are honest. In commita0i(h, d), this is strengthened as follows.

h ⊆ good → s.sigs(Commit(h, d)) < s.sigs(Running(h, d))

    It is easy to see that a0i refines a0n.

Proposition 3. We have (i) oreach(a0n) ⊆ niagreea0n, (ii) oreach(a0i) ⊆ iagreea0i, and (iii) a0i ⊑id,id a0n.
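The agreement models can likewise be sketched as transitions over a multiset of signals. The following hypothetical Python sketch uses collections.Counter for sigs; the guard in commit_i mirrors the strengthened guard shown above, and GOOD is an assumed set of honest agents.

```python
# Illustrative sketch of the agreement models a0n/a0i: the state is a
# multiset of signals (a Counter); h is a tuple of agent names.

from collections import Counter

GOOD = {"A", "B", "S"}

def running(sigs, h, d):
    """running(h, d): unconditionally add a Running signal."""
    sigs = sigs.copy()
    sigs[("Running", h, d)] += 1
    return sigs

def commit_ni(sigs, h, d):
    """a0n commit: guard requires a matching Running signal when h is honest."""
    if set(h) <= GOOD and sigs[("Running", h, d)] == 0:
        return sigs                                    # guard fails
    sigs = sigs.copy()
    sigs[("Commit", h, d)] += 1
    return sigs

def commit_i(sigs, h, d):
    """a0i commit: strengthened guard, yielding injective agreement."""
    if set(h) <= GOOD and not sigs[("Commit", h, d)] < sigs[("Running", h, d)]:
        return sigs                                    # guard fails
    sigs = sigs.copy()
    sigs[("Commit", h, d)] += 1
    return sigs

def iagree(sigs):
    """iagree: no more Commit than matching Running signals (honest h)."""
    return all(sigs[("Commit", h, d)] <= sigs[("Running", h, d)]
               for (kind, h, d) in sigs if kind == "Commit" and set(h) <= GOOD)
```

A second commit on the same (h, d) is blocked by the guard, which is exactly what makes the agreement injective.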

We formalize requirements R3 and R4. For this purpose, we must specify the data to be agreed upon. We use authentication graphs to represent this information visually. Figure 2 displays the authentication graph of the Kerberos protocols.

[Figure 2 shows three role nodes, A: Na, Ta; B: Rb; and S: Kab, Ts, connected by arrows labeled (Kab, B, Na, Ts), (Kab, A, Ts), and (Kab, Ts, Ta), annotated with requirements R3 and R4.]

Figure 2: Authentication graph for Kerberos

In these graphs, there is one node for each protocol role. Each node is labeled by an agent name (in the given role), possibly followed by a list of identifiers generated during the role's execution. For example, the server S generates the session key Kab and a timestamp Ts, and the initiator generates a nonce Na and a timestamp Ta. Each arrow specifies one agreement property by defining the parameters h and d for the running and commit events of models a0n or a0i. The arrow endpoints define the agents h, whose honesty is assumed, and the tuples labeling the arrows specify the data d to be agreed upon between these agents. An arrow tail indicates an injective agreement. The boldface labels indicate the requirements that are established for the agent near the arrowhead. For example, the arrow from S to B labeled by (Kab, A, Ts) means that the responder B non-injectively agrees with the server on (Kab, A, Ts), assuming the honesty of S and B (R3). The arrow from A to B labeled by (Kab, Ts, Ta) means that B injectively agrees with A on Kab, Ts, and Ta, assuming the honesty of A and B (R4).

To prove that an L1 model establishes an agreement of role R with role S, we identify an event of role R that refines commit and an event of role S that refines running. All other events must refine skip. Each agreement requires a different mapping of the protocol events to the running and commit events and therefore requires a separate refinement of the model a0i or a0n (cf. Figure 1).

6. GUARD PROTOCOLS (L1)

At Level 1, we introduce guard protocols. L1 models have a single state variable, runs, which models threads executing protocol roles. This variable maps run identifiers to a run's local store, consisting of a pair of protocol participants and a frame recording role-specific information.

runsT ≜ rid ⇀ agent × agent × frame

Σkt1 ≜ (| runs ∈ runsT |)

In our case of server-based key transport protocols, there are three roles: a key-generating server and key-receiving initiators and responders. Accordingly, the type frame is defined with three constructors.

frame ≜ Init(key⊥, [N]) | Resp(key⊥, [N]) | Serv([N])

The initiator and the responder record the session key. Each role may store a list of freshness identifiers (nonces or timestamps). We exploit the uniqueness of run identifiers to model fresh nonces and keys. Here, we will use server run identifiers as session keys. In later models, we also use client run identifiers as nonces. The entire state is observable.

Each event executes a protocol step of a run by an agent in a particular role. The sequencing of events within a role is determined by local guards reading a run's local store. Moreover, agents communicate by reading their peers' memories. This is achieved by non-local guards that refer to another run's local store. These guards compare local and remote values and read new remote values. We have two types of such guards: authorization guards for secrecy preservation and authentication guards for agreements. Besides these non-local security guards, there are also local ones, e.g., for checking the validity of timestamps to achieve recentness. The sequencing imposed by the local and non-local guards determines a temporal (partial) order on the agreements specified in the authentication graph (cf. Figure 2).

[Figure 3 shows roles A: Na, B: Rb, and S: Kab, with arrows from S to A and from S to B, both labeled Kab.]

Figure 3: Basic secret key distribution (kt1)

The secrecy and authentication properties are established by refining the secrecy and authentication models from Section 5. In each case, we establish a refinement by reconstructing the abstract state (i.e., knowledge and authorization relations or signals) from the concrete one (i.e., the runs) and by identifying a pair of concrete events that refine the abstract ones (i.e., secret generation/learning for secrecy and running/commit for authentication).

6.1 Secret key distribution

A sequence chart of our first abstract key transport protocol model, kt1, appears in Figure 3. The three main events of this model establish R1 and R2. These are (1) the generation of the session key Kab by the server S, represented by the role label S: Kab, and the secret acquisition of this key by (2) the initiator A and (3) the responder B, represented by arrows from S to A and B labeled by Kab. These arrows do not represent communication of messages, since there are no messages or channels at this stage.

We establish session key secrecy by refining the model s0. We define relations knC(r) and azC(r), which reconstruct the knowledge and authorization relations, kn and az, of s0 from the runs r ∈ runsT. The simulation relation R01 is π01⁻¹, where π01 is the mediator function defined as follows.

    π01(t) ≡ (| kn = knC(t.runs), az = azC(t.runs) |)

This is a general pattern for establishing secrecy properties. Only the definitions of knC(r) and azC(r) depend on the protocol-specific definition of frames.

Here, we define the relation knC(r) by four rules. We give two examples describing the initiator's and the server's session key knowledge. There is a similar rule for the responder.

r(N) = (A, B, Init(K, ns))
--------------------------
(K, A) ∈ knC(r)

r(K) = (A, B, Serv(ns))
-----------------------
(K, S) ∈ knC(r)

An additional rule states that knC0 ⊆ knC(r), where the relation knC0 describes an initial key setup. For now knC0 is arbitrary, but we will later define it concretely. We call the keys in knC0's domain static or long-term keys. Corrupted keys are static keys known by some bad agent.

staticKey ≡ dom(knC0)        corrKey ≡ knC0⁻¹[bad]

The following rule defines who is authorized to learn a key K that the server S generated for A and B, namely A, B, and S if A and B are honest and everyone otherwise.

r(K) = (A, B, Serv(ns))    C ∈ {A, B, S} ∨ A ∈ bad ∨ B ∈ bad
------------------------------------------------------------  (1)
(K, C) ∈ azC(r)

Two additional rules state that knC0 ⊆ azC(r) and that anyone is authorized to learn corrupted keys.

The specification kt1 has five events, each modeling a step of the protocol. The first event creates a new run (and nonce) Na of initiator A with responder B by updating runs with (Na ↦ (A, B, Init(⊥, []))). The second event creates a responder run analogously. These two events refine skip. In the third event, the server generates a fresh session key Kab and records it together with the agent names A and B in runs. This event refines gens0(Kab, S, [S, A, B]).

step3kt1(A, B, Kab) ≡ {(s, s′) |              -- by S, refines gens0
  Kab ∉ dom(s.runs) ∪ staticKey ∧             -- fresh key
  s′.runs := s.runs(Kab ↦ (A, B, Serv([]))) }

A session key Kab is fresh if it is neither a run identifier (and thus potentially an old session key) nor a static key.

The final two events model the confidential acquisition of the session key by the initiator and the responder. They both refine the event learns0. In Step 5, the responder B acquires the session key Kab. The initiator's step4kt1 is analogous.

step5kt1(A, B, Rb, Kab) ≡ {(s, s′) |          -- by B, refines learns0
  s.runs(Rb) = (A, B, Resp(⊥, [])) ∧          -- B's run
  (Kab, B) ∈ azC(s.runs) ∧                    -- check authorization
  s′.runs := s.runs(Rb ↦ (A, B, Resp(Kab, []))) }

The first guard requires that Rb identifies a run of responder B with initiator A, where B has not yet received a key. The action updates the responder run with the session key Kab.

The second guard is an authorization guard requiring that B is authorized to learn Kab. There are two cases according to the definition of azC. The first case, described by rule (1), corresponds to reading a (session) key Kab from the server, who determined the authorization to access the key. Note that there is no guarantee that the key was generated for B. In the second case, Kab is a static key, which may be corrupted. For now, there are no further constraints on Kab. The authorization guard is sufficient to preserve the secrecy of Kab. In Section 6.2, we will establish authentication properties to ensure that honest agents only accept session keys generated for them and shared with the intended partner.
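The interplay of the freshness and authorization guards can be illustrated with a small executable sketch. This is not the Isabelle model: the encoding of frames as tagged tuples, the agent names, and the restriction of azC to the session-key case of rule (1) are simplifying assumptions of ours.

```python
# Illustrative sketch of the L1 events step3 and step5: runs maps run
# identifiers to (A, B, frame); frames are tagged tuples.

BAD = {"I"}   # dishonest agents (assumption)

def azC(runs, k, c):
    """Session-key case of rule (1): who may learn key k?"""
    if k in runs and runs[k][2][0] == "Serv":
        a, b, _ = runs[k]
        return c in {a, b, "S"} or a in BAD or b in BAD
    return False

def step3(runs, a, b, kab, static_keys):
    """Server S generates a fresh session key kab for a and b."""
    if kab in runs or kab in static_keys:            # freshness guard
        return runs
    return {**runs, kab: (a, b, ("Serv", []))}       # run id doubles as key

def step5(runs, a, b, rb, kab):
    """Responder run rb acquires kab, if authorized."""
    if runs.get(rb) != (a, b, ("Resp", None, [])):   # B's run, no key yet
        return runs
    if not azC(runs, kab, b):                        # authorization guard
        return runs
    return {**runs, rb: (a, b, ("Resp", kab, []))}
```

Once the responder run holds a key, the first guard of step5 blocks any further update of that run.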

Using the previously defined simulation relation R01, we show that the model kt1 defined above refines the secrecy model s0. The guard strengthening in the refinement of the event gens0 by step3kt1 requires an invariant, keykt1, stating that a fresh key Kab is neither in the domain of knC(s.runs) nor azC(s.runs). We can now prove:

Proposition 4. Let R′01 = R01 ∩ (Σs0 × keykt1). Then reach(kt1) ⊆ keykt1 and kt1 ⊑R′01,π01 s0.

Since the abstract variables kn and az are observable and reconstructable from the concrete state, the secrecy invariant for s0 (Proposition 2) is inherited by kt1 (Proposition 1), which thus realizes secret key distribution (R1, R2).

6.2 Server authentication

In the next model, kt1in, we establish agreements of the initiator and the responder with the server on the session key and additional data. The latter data is a parameter of kt1in. However, for the sake of the presentation, we will focus our subsequent discussion on the instantiation of kt1in for the Kerberos development. Figure 4 shows a sequence chart for this model. The models kt1 and kt1in have identical state spaces, which are entirely observable.

[Figure 4 shows roles A: Na, B: Rb, and S: Kab, Ts, with an arrow from S to A labeled Kab, B, Na, Ts and an arrow from S to B labeled Kab, A, Ts, each delivering the key Kab.]

Figure 4: Adding server authentication (kt1in)

The labels below the arrows denote agreements and an arrow tail indicates an injective agreement as specified in Figure 2: The initiator A achieves (R3, R5) by an injective agreement with the server on (Kab, B, Na, Ts), and the responder B establishes (R3) by a non-injective agreement with the server on (Kab, A, Ts). Here, Na is the (unique) identifier of A's run, which we use as a nonce, and Ts is a freshness identifier that we will later refine into a timestamp (i.e., a clock reading). This model refines kt1, a0i, and a0n (cf. Figure 1).

We obtain the model kt1in by modifying kt1 in two ways. First, we introduce new parameters and corresponding state updates to reflect that both partners of the agreement know the data being agreed upon. In the server's Step 3, we add a nonce Na and the freshness identifier Ts to the parameters and record these in the state, i.e., the runs are updated with Kab ↦ (A, B, Serv([Na, Ts])). Neither Na nor Ts is constrained by any guards, which reflects their unreliable origin. In the responder's Step 5, we add the parameter Ts and update the runs with Rb ↦ (A, B, Resp(Kab, [Ts])), and similarly in the initiator's Step 4.

Second, we realize the agreements described above by adding authentication guards to the key-receiving Steps 4 and 5. These guards may either be added directly to the respective events or discovered during the refinement proof of the model a0n or a0i. Here, we describe guard discovery and therefore first introduce the simulation relation. We focus on the refinement of a0n. The refinement of a0i is similar.

Similar to Section 6.1, the general idea for the refinements of a0i and a0n is to reconstruct the abstract state from the concrete one. Here, this means reconstructing a signal multiset from the runs. The simulation relation is Rrs01 ≡ (πrs01)⁻¹, where the mediator function πrs01 is defined as follows.

    πrs01(t) ≡ (| sigs = sigsCrs(t.runs) |)

In general, for an agreement of agent A in role R with B in role S on data d, the multiset sigsC(t.runs) defines the multiplicity of the Commit([A, B], d) and Running([A, B], d) signals as the number of runs of A and B in the respective roles that have reached a state where the data d is known. This pattern is common to all authentication proofs; only the definition of sigsC depends on the protocol.

In our case, the multiset sigsCrs(r) = mr contains a Commit signal for each completed responder run. This is reflected in the following definition.

mr(Commit([B, S], (Kab, A, Ts))) = |RR|
  where RR = {Rb | ∃nl. r(Rb) = (A, B, Resp(Kab, Ts#nl))}

Similarly, completed server runs give rise to Running signals. Since server run identifiers are keys Kab included in the agreement, a simpler definition suffices.

mr(Running([B, S], (Kab, A, Ts))) =
  if ∃Na, nl. r(Kab) = (A, B, Serv(Na#Ts#nl)) then 1 else 0

The existential quantifications on nl account for extensions of the list of identifiers in later refinements. Finally, we set mr(x) = 0 at all other points x.
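The reconstruction of the signal multiset from the runs can be sketched executably as follows. This is an illustration (covering only the responder/server agreement), with frames encoded as the tagged tuples used in our earlier sketches, not the Isabelle definition.

```python
# Illustrative reconstruction of the abstract signal multiset from the
# concrete runs; frames are tagged tuples ("Resp", key_or_None, ids)
# and ("Serv", ids), with ids = [Na, Ts, ...] for server runs.

from collections import Counter

def sigsC_rs(runs):
    """Mediator: one Commit per completed responder run, one Running
    per server run (its run identifier doubles as the key Kab)."""
    m = Counter()
    for rid, (a, b, frame) in runs.items():
        if frame[0] == "Resp" and frame[1] is not None and frame[2]:
            kab, ts = frame[1], frame[2][0]
            m[("Commit", (b, "S"), (kab, a, ts))] += 1
        if frame[0] == "Serv" and len(frame[1]) >= 2:
            ts = frame[1][1]                          # ids = Na # Ts # nl
            m[("Running", (b, "S"), (rid, a, ts))] += 1
    return m
```

Because server run identifiers are unique, each key Kab contributes at most one Running signal, matching the if-then-else definition above.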

[Figure 5 shows roles A: Na, Ta; B: Rb; and S: Kab, Ts, extending Figure 4 with mutual key-confirmation arrows between A and B, both labeled Kab, Ts, Ta.]

Figure 5: Adding key confirmation (krb1)

Next, we prove that the server's event step3kt1in refines the abstract event runninga0n and that the responder's event step5kt1in(A, B, Rb, Kab, Ts) refines the abstract a0n event commita0n([B, S], (Kab, A, Ts)). The remaining events refine skip. In the proof of guard refinement (GRD) for step5kt1in, we get stuck in a proof state that directly suggests the following authentication guard for this event.

B ∉ bad → ∃Na, nl. s.runs(Kab) = (A, B, Serv(Na#Ts#nl))    (2)

This guard guarantees to an honest responder B that the server has reached a state corresponding to the matching Running([B, S], (Kab, A, Ts)) signal. After adding this guard, the proof succeeds.

For the refinement of a0i, we similarly discover the authentication guard for step4kt1in(A, B, Na, Kab, Ts) in the proof that this event refines commita0i([A, S], (Kab, B, Na, Ts)).

A ∉ bad → ∃nl. s.runs(Kab) = (A, B, Serv(Na#Ts#nl))    (3)

Compared to (2), the absence of the existential quantification on Na reflects that this agreement includes Na.

Proposition 5. We have kt1in ⊑πis01 a0i for the initiator and kt1in ⊑πrs01 a0n for the responder.

Finally, it is easy to see that kt1in refines kt1. The mediator function π1in1 maps the lists of freshness identifiers in the run frames of kt1in to the empty list.

Proposition 6. kt1in ⊑π1in1⁻¹,π1in1 kt1.

6.3 Key confirmation

Next, we extend the model kt1in to an abstract model of Kerberos (Figure 5), which achieves key confirmation (R4), key freshness for the responder (R5), and key recentness (R6). For key recentness, the initiator and responder check the validity of a timestamp Ts that the server associates with the session key Kab. For key confirmation, the initiator and responder mutually agree on the session key Kab, its associated timestamp Ts, and an initiator timestamp Ta (cf. Figure 2). The responder caches keys Kab and timestamps Ta to obtain an injective agreement with the initiator. Note that the sequence chart in Figure 5 contains all agreements specified in Figure 2 and (partially) orders them causally.

    We extend the state of kt1 with two additional variables.

Σkrb1 ≜ Σkt1 + (| clk ∈ time, cch ∈ P(agent × key × time) |)

The variable clk models a discrete-time clock. We introduce a tick(T) event, which increments the clock by T time units. All other events are assumed to take no time and hence do not modify the clock. The variable cch represents a cache storing triples (B, Kab, Ta) consisting of an agent name B, a session key Kab, and an initiator timestamp Ta. For replay protection, a responder B checks the cache before accepting a key Kab with timestamp Ta. A new purge(B) event removes from B's cache the entries whose timestamps have expired.

The events for Steps 1 to 5 of the model krb1 are derived from the corresponding kt1in events, possibly adding guards and actions. In Step 3, we turn Ts into a timestamp by adding the guard Ts = s.clk.

    In the initiator’s Step 4, we add the initiator timestampTa as a parameter and record it in the state, and we add twotime-related guards. We assume arbitrary fixed lifetimes Lsand La for server and initiator timestamps.

step4krb1(A, B, Na, Kab, Ts, Ta) ≡ {(s, s′) |   -- by A
  ...                                           -- guards of step4kt1in (omitted)
  Ta = s.clk ∧                                  -- get timestamp
  s.clk < Ts + Ls ∧                             -- check validity of Ts
  s′.runs := s.runs(Na ↦ (A, B, Init(Kab, [Ts, Ta]))) }

The first guard generates a timestamp Ta. The second guard ensures the validity of the server timestamp Ts.

    In the responder’s Step 5, we also add Ta as a parameterand record it in the state. Furthermore, we introduce fournew guards and a new action.

step5krb1(A, B, Rb, Kab, Ts, Ta) ≡ {(s, s′) |   -- by B
  ...                                           -- guards of step5kt1in (omitted)
  (A ∉ bad ∧ B ∉ bad →                          -- agree with A
    ∃Na, nl. s.runs(Na) = (A, B, Init(Kab, Ts#Ta#nl))) ∧
  (B, Kab, Ta) ∉ s.cch ∧                        -- replay protection
  s.clk < Ta + La ∧                             -- check validity of Ta
  s.clk < Ts + Ls ∧                             -- check validity of Ts
  s′.cch := s.cch ∪ {(B, Kab, Ta)} ∧            -- cache update
  s′.runs := s.runs(Rb ↦ (A, B, Resp(Kab, [Ts, Ta]))) }

The first guard ensures agreement with the initiator on the data (Kab, Ts, Ta). The second guard achieves injectivity for the responder by checking that B has not previously seen Kab with timestamp Ta. The last two guards ensure recentness by checking the validity of Ta and Ts. The new action records (B, Kab, Ta) in the cache to avoid future replays.
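The time-related guards and the replay cache can be illustrated by the following sketch. The lifetimes LS and LA are arbitrary fixed constants (as in the text), and the cache is a set of (agent, key, timestamp) triples following the definition of cch; the concrete values are our own.

```python
# Sketch of the timestamp and replay-cache guards of step5 in krb1.

LS, LA = 5, 3    # lifetimes of server (Ts) and initiator (Ta) timestamps

def step5_guards(clk, cch, b, kab, ts, ta):
    """May responder b accept (kab, ts, ta) at clock time clk?"""
    return ((b, kab, ta) not in cch      # replay protection
            and clk < ta + LA            # Ta still valid
            and clk < ts + LS)           # Ts still valid

def purge(clk, cch, b):
    """purge(B): drop b's cache entries whose timestamp Ta has expired."""
    return {(c, k, ta) for (c, k, ta) in cch if c != b or clk < ta + LA}
```

Note that purge only removes entries whose Ta has expired, so a replayed message is always caught: either its entry is still cached or its timestamp is no longer valid.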

Finally, we add a new Step 6 to the initiator, which uses an authentication guard to achieve agreement with a responder run Rb on Kab, Ts, and Ta. We add an arbitrary value, here 0, to the frame to mark the initiator run's termination.

step6krb1(A, B, Na, Kab, Ts, Ta) ≡ {(s, s′) |   -- by A
  s.runs(Na) = (A, B, Init(Kab, [Ts, Ta])) ∧
  (A ∉ bad ∧ B ∉ bad →                          -- agree with B on (Kab, Ts, Ta)
    ∃Rb. s.runs(Rb) = (A, B, Resp(Kab, [Ts, Ta]))) ∧
  s′.runs := s.runs(Na ↦ (A, B, Init(Kab, [Ts, Ta, 0]))) }

The mediator function π11 in the refinement of kt1in by krb1 drops the timestamps Ta and the termination marker 0 from initiator and responder frames.

Proposition 7. krb1 ⊑π11⁻¹,π11 kt1in.

The mediators πir01 and πri01 and associated simulation relations for refining a0i and a0n are defined analogously to Section 6.2. The authentication guards can be defined or discovered as described in that section. The replay cache guarantees injective agreement with the initiator to the responder, while the initiator obtains only a non-injective agreement with the responder. The proof of injectivity in the refinement of commita0i by step5krb1 requires an invariant stating that a responder B knowing a key Kab and a timestamp Ta has an entry (B, Kab, Ta) in the replay cache during Ta's validity. Appendix A provides some proof details.

[Figure 6 shows roles A: Na, B: Rb, and S: Kab exchanging the messages M1. A, B, Na from A to S; M2a. Kab, B, Ts, Na from S to A and M2b. Kab, A, Ts from S to B on secure channels; and M3. Kab: A, Ta from A to B and M4. Kab: Ta from B to A on dynamic authentic channels.]

Figure 6: Channel-based Kerberos protocol (krb2)

Proposition 8. We have (i) krb1 ⊑πir01 a0n for the initiator and (ii) krb1 ⊑πri01 a0i for the responder.

Finally, in Appendix B, we formalize key freshness (R5) for the responder and give a brief outline of the proof that it is an invariant of krb1.

7. CHANNEL PROTOCOLS (L2)

At Level 2, the roles exchange channel messages instead of reading information from each other's memory. We therefore extend the state with a field chan for the set of channel messages of type chmsg. All fields except chan are observable.

Σkrb2 ≜ Σkrb1 + (| chan ∈ P(chmsg) |)

For the refinement of krb1, we use the simulation relation R12 ≡ Πruns,clk,cch with the identity mediator function. In the protocol events, we add message-receiving guards, which replace the authorization and authentication guards from Level 1 (if any), and actions for sending channel messages. Local guards remain the same. Moreover, we introduce an active intruder, who extracts nonces and keys from channel messages and uses them to fake new channel messages. Accordingly, we define three functions ink(H), ikk(H), and fakeable(H) returning the sets of nonces, keys, and fakeable messages derivable from a set of channel messages H. The intruder event closes chan under fakeable messages.

    fakekrb2 ≡ {(s, s′) | s′.chan := fakeable(s.chan) }

The new fake event refines skip, as it only modifies chan, while all other events refine their counterpart in krb1. All Level 2 protocols follow this general setup; only the definitions of the type chmsg and the intruder functions ikk, ink, and fakeable are protocol-dependent.

The channel-based refinement krb2 of krb1 is shown in Figure 6. The protocol is now started by the initiator sending A, B together with his nonce Na to the server. The server uses static secure channels to send the session key Kab, the name B, the timestamp Ts, and the nonce Na to the initiator A and Kab, A, Ts to the responder B. The responder B obtains key confirmation from A by receiving A, Ta on a dynamic authentic channel protected by the session key Kab (and similarly for A's guarantee in the other direction). No confidentiality is required in the key confirmation phase.

    For krb2 , we define the type of channel messages as follows.

chmsg ≜ Insec(imsg) | Secure(agent × smsg) | dAuth(key × amsg)

We drop the first agent name from secure messages, since it is always the honest server S. The data types imsg, smsg, and amsg model the payload for each channel type. For example, we define the secure messages as follows.

smsg ≜ M2a(key × agent × time × nonce) | M2b(key × agent × time)

As example events, we describe the changes in Steps 3 and 5. In the server's Step 3, we add a new guard for receiving message M1, which does not replace any previous guards, and an action for sending messages M2a and M2b.

Insec(M1(A, B, Na)) ∈ s.chan                       -- receive M1
s′.chan := s.chan ∪ {Secure(A, M2a(Kab, B, Ts, Na)),
                     Secure(B, M2b(Kab, A, Ts))}   -- send M2a, M2b

In Step 5, the responder B receives message M2b from the server and M3 from the initiator A and sends message M4. Here, the message-receiving guards replace the previous authorization and authentication guards.

Secure(B, M2b(Kab, A, Ts)) ∈ s.chan                -- receive M2b
dAuth(Kab, M3(A, Ta)) ∈ s.chan                     -- receive M3
s′.chan := s.chan ∪ {dAuth(Kab, M4(Ta))}           -- send M4

We fix the intruder's capabilities by the functions ikk, ink, and fakeable. Here is an example of part of the definition of ikk(H) (intruder key knowledge) stating that the intruder learns the keys in secure messages sent to dishonest agents.

Secure(C, M) ∈ H    K ∈ keysOf(M)    C ∈ bad
--------------------------------------------
K ∈ ikk(H)

As another example, the following rule states that the intruder can fake message M2a on a static secure channel to a dishonest C, if he knows the key K and the nonce N.

C ∈ bad    K ∈ ikk(H)    N ∈ ink(H)
----------------------------------------
Secure(C, M2a(K, B, T, N)) ∈ fakeable(H)
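A hypothetical executable reading of these intruder rules, with channel messages encoded as tagged tuples; the agent sets, the restriction of ink to M2a payloads, and the single representative timestamp in faked messages (the rule allows any T) are simplifications of ours for finiteness.

```python
# Illustrative channel-level intruder: messages are tagged tuples such
# as ("Secure", C, ("M2a", K, B, T, N)) and ("Secure", C, ("M2b", K, A, T)).

BAD = {"I"}
AGENTS = {"A", "B", "S", "I"}

def ikk(H):
    """Intruder key knowledge: keys of secure messages sent to bad agents."""
    return {m[1] for (ch, c, m) in H if ch == "Secure" and c in BAD}

def ink(H):
    """Intruder nonce knowledge: nonces of M2a messages sent to bad agents."""
    return {m[4] for (ch, c, m) in H
            if ch == "Secure" and c in BAD and m[0] == "M2a"}

def fakeable(H):
    """Close H under the M2a faking rule shown above (one representative T)."""
    fakes = {("Secure", c, ("M2a", k, b, 0, n))
             for c in BAD for k in ikk(H) for n in ink(H) for b in AGENTS}
    return set(H) | fakes
```

Messages sent only to honest agents contribute nothing to ikk or ink, which is what the secure channel guarantees.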

The refinement proof requires several invariants. Many of these are directly suggested by the guard strengthening proof obligations stating that the message-receiving guards (L2) imply the security guards (L1). This is one of the main benefits of using guard protocols as a link between the properties and the message-based protocols. Appendix C contains a proof sketch of guard strengthening for Step 5.

Proposition 9. Let Ikrb2 be the intersection of the invariants of krb2 and let R′12 = R12 ∩ (Σkrb1 × Ikrb2). Then reach(krb2) ⊆ Ikrb2 and krb2 ⊑R′12,id krb1.

8. CRYPTOGRAPHIC PROTOCOLS (L3)

At Level 3, we replace the set of channel messages, chan, by a set of cryptographic messages, IK (for intruder knowledge). All fields except IK are observable.

Σkrb3v ≜ Σkrb1 + (| IK ∈ P(msg) |)

The refinement is based on a message abstraction function absMsg ∈ P(msg) → P(chmsg), which abstracts sets of cryptographic messages to sets of channel messages. The simulation relation R23 with krb2 is defined as the intersection of the following three relations with Πruns,clk,cch, where nonces(H) and keys(H) denote the nonces and keys known to the intruder, i.e., those contained in analz(H).

Rmsgs23 ≡ {(s, t) | absMsg(parts(t.IK)) ⊆ s.chan}
Rkeys23 ≡ {(s, t) | keys(t.IK) ⊆ ikk(s.chan)}
Rnon23 ≡ {(s, t) | nonces(t.IK) ⊆ ink(s.chan)}

We refine the channel-based intruder (event fakekrb2) into a protocol-independent standard Dolev-Yao intruder (A1), who closes IK under derivable messages (cf. [31]).

    fakekrb3v ≡ {(s, s′) | s′.IK := synth(analz(s.IK )) }

[Figure 7 shows roles A: Na, B: Rb, and S: Kab exchanging the messages M1. A, B, Na from A to S; M2 from S to A; M3. {|A, Ta|}Kab, {|Kab, A, Ts|}shr(B) from A to B; and M4. {|Ta|}Kab from B to A, where
krb3v: M2 = {|Kab, B, Ts, Na|}shr(A), {|Kab, A, Ts|}shr(B)
krb3iv: M2 = {|Kab, B, Ts, Na, {|Kab, A, Ts|}shr(B)|}shr(A).]

Figure 7: Cryptographic core Kerberos protocols

This setup covers all L3 protocols we have derived so far. At this level, only the definition of absMsg is protocol-dependent.
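The closure synth(analz(·)) can be approximated by a small fixed-point computation. The following sketch covers only the decomposition half, analz, for pairs and symmetric encryptions, with messages encoded as tagged tuples; it is an illustration, not the standard Isabelle definition of [31].

```python
# Sketch of the decomposition half (analz) of the Dolev-Yao closure;
# messages are atoms, pairs ("pair", x, y), or symmetric encryptions
# ("enc", key, body).

def analz(H):
    """Close H under splitting pairs and decrypting with derivable keys."""
    H = set(H)
    changed = True
    while changed:
        changed = False
        for m in list(H):
            if not isinstance(m, tuple):
                continue
            if m[0] == "pair":
                parts = {m[1], m[2]}
            elif m[0] == "enc" and m[1] in H:     # key already derivable
                parts = {m[2]}
            else:
                continue
            if not parts <= H:
                H |= parts
                changed = True
    return H
```

The fixed-point iteration matters: a key extracted from one message may unlock a ciphertext seen earlier, as in the test below where the key arrives inside a pair.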

In the protocol events, we replace the channel messages by cryptographic ones. In general, there are a number of alternative realizations using different cryptographic operations.

Figure 7 shows the core of Kerberos 4 [34] and Kerberos 5 [29]. Each agent A shares a long-term symmetric key shr(A) with the key server S. Hence, we now concretize the previously arbitrary initial key setup relation knC0 by defining knC0 ≡ {(shr(A), C) | C ∈ {A, S}} (A3). We implement the static secure channels by encryption with the long-term keys and we refine the dynamic authentic channels into encryptions with session keys. (In the Dolev-Yao model, symmetric encryption also provides authenticity.)

We also modify the communication topology of the channel-based model: the initiator now relays the responder's ticket from the server. While in Kerberos 5 the ticket is sent alongside the ciphertext containing the initiator's session key, it appears inside that ciphertext in Kerberos 4. We now present our models of Kerberos 5 and Kerberos 4, krb3v and krb3iv.

8.1 Kerberos 5 protocol (L3)

We focus on the refinement of Steps 3–5, which also reflect the modified communication topology. In Step 3, the server sends the message M2 by adding it to IK. This message consists of a pair of ciphertexts and replaces the messages M2a and M2b in krb2. In Step 4, the initiator A receives M2 and forwards its second component along with the authenticator {|A, Ta|}Kab, which proves A's knowledge of Kab.

{| {|Kab, B, Ts, Na|}shr(A), X |} ∈ s.IK            -- receive M2
s′.IK := s.IK ∪ { {| {|A, Ta|}Kab, X |} }           -- send M3

The responder receives the two-component message M3 in Step 5 and sends back the confirmation message M4.

{| {|A, Ta|}Kab, {|Kab, A, Ts|}shr(B) |} ∈ s.IK     -- receive M3
s′.IK := s.IK ∪ { {|Ta|}Kab }                       -- send M4

The definition of the message abstraction function, absMsg, is straightforward. It abstracts the components of messages M2 and M3 separately. For instance, here are the rules defining the abstraction of tickets and authenticators.

{|K, A, T|}shr(B) ∈ H
-----------------------------------
Secure(B, M2b(K, A, T)) ∈ absMsg(H)

{|A, T|}K ∈ H
------------------------------
dAuth(K, M3(A, T)) ∈ absMsg(H)

The refinement proof requires only four simple additional invariants: two concerning key definedness and two stating that the long-term keys the intruder knows are exactly those of the dishonest agents. We have already established all other relevant properties on higher levels of abstraction.

Proposition 10. Let Ikrb3v be the intersection of the invariants of krb3v and let Rv23 = R23 ∩ (Σkrb2 × Ikrb3v). Then we have reach(krb3v) ⊆ Ikrb3v and krb3v ⊑Rv23,id krb2.

8.2 The Kerberos 4 protocol (L3)

In the core Kerberos 4 protocol, the responder's ticket is encrypted inside the initiator's message from the server. Hence, message M2 is modified as follows.

M2′. S → A : {|Kab, B, Ts, Na, {|Kab, A, Ts|}shr(B)|}shr(A)

The changes in the model are straightforward. Moreover, the simulation relation, R′23, is the same as R23 above except for a minor change in the message abstraction function: the cryptographic form of message M2a, {|K, A, T, N, X|}shr(A), now includes a message variable X for the ticket.

The refinement proof for krb3iv requires additional invariants. One of these describes the ticket's shape and the encrypted key K in message M2 sent to an honest initiator A.

ticketkrb3iv ≡ {s | ∀A, B, T, N, K, X.
  {|K, B, T, N, X|}shr(A) ∈ parts(s.IK) ∧ A ∉ bad →
  X = {|K, A, T|}shr(B) ∧ K ∉ ran(shr) }

Furthermore, we need to know that the addition of session keys to the intruder's knowledge does not reveal other keys.

sessKkrb3iv ≡ {s | ∀KS, K. KS ⊆ key \ ran(shr) →
  (K ∈ analz(KS ∪ s.IK) ↔ K ∈ KS ∨ K ∈ analz(s.IK))}

A similar invariant states that the intruder cannot derive new nonces by learning new session keys. We then prove:

Proposition 11. Let Ikrb3iv be the intersection of the invariants of krb3iv and let Riv23 = R′23 ∩ (Σkrb2 × Ikrb3iv). Then reach(krb3iv) ⊆ Ikrb3iv and krb3iv ⊑Riv23,id krb2.

9. RELATED WORK AND CONCLUSIONS

There are other proposals for developing security protocols by refinement using different formalisms such as the B method [11], its combination with CSP [14], I/O automata [24], and abstract state machines (ASMs) [10]. None of these continue their refinements to the level of a full Dolev-Yao intruder: either they only consider an intruder that is passive [24], defined ad hoc [10, 14], or similar to our Level 2 intruder [11]. This makes a comparison of their results with standard protocol models difficult. Moreover, none provides a uniform and systematic development method such as our four-level refinement strategy, and most of them develop individual protocols rather than entire families.

Mödersheim and Viganò [27] propose a compositional approach, where protocol specifications may combine channel messages and cryptographic messages. Channel messages can later be replaced by protocols realizing their properties. Their objective is to identify the weakest condition that allows protocols to securely implement channels. In contrast, we refine channel protocols into concrete cryptographic protocols. Moreover, we can realize a secure channel from the server to the responder in our L2 models by piggybacked (e.g., Kerberos 5) or encrypted (e.g., Kerberos 4) ticket forwarding via the initiator, which is impossible in their setting.

Datta et al. [19] use protocol templates with messages containing function variables to specify and prove properties of protocol classes. Refinement here means instantiating function variables and discharging the associated assumptions. Pavlovic et al. [16, 32] similarly refine protocols by transforming messages and propose specialized formalisms for establishing secrecy and authentication properties. In [16], they also derive core Kerberos 4 and 5 and NSSK. These refinements do not involve a fundamental change of the abstraction level, since one instantiates abstract operations on messages.

Several other researchers have analyzed Kerberos. Bella and Riccobene [10] develop Kerberos 4 in three refinements using ASMs. They use a non-standard attacker model and prove mostly liveness properties (e.g., all runs reach a specific state) instead of secrecy and authentication properties. Bella and Paulson model BAN Kerberos [9], Kerberos 4 [8], and Kerberos 5 [7], including session key compromise, using the inductive approach [31]. However, they do not model a replay cache and prove only non-injective agreements.

Butler et al. [12, 13] constructed detailed models of Kerberos 5 using multiset rewriting. These models include cross-realm authentication and other realistic features such as options, flags, and error handling. They manually construct their models and proofs in several "refinements" to keep them manageable. However, their notion of refinement is informal.

    Simulation-based security [4, 15] is a paradigm for specifying idealized functionalities and implementing them using a notion of secure emulation. Delaune et al. [20] have recently proposed an interesting symbolic version of this paradigm, including a joint state result for asymmetric encryption, and applied it to derive the Needham-Schroeder-Lowe protocol. In future work, we would like to investigate the relative scopes and potential synergies of our respective approaches.

    Our developments provide strong evidence that refinement supports the systematic understanding and development of families of protocols. They also show that the development strategy and tools we previously built can be extended to scale to realistic protocols with new features and challenges.

    A central part of this work has been the development and exploitation of guard protocols, which form the bridge between security properties and channel protocols, i.e., from the “what” to the “how”. Security guards realize properties abstractly. They also substantially simplify proof construction: they give rise to invariants in a canonical way during refinement, thereby simplifying invariant discovery. Moreover, the simulation relations used are either fixed (L1-L2 is a superposition refinement [2]) or systematically derived (e.g., the abstraction of runs to signals in the L0-L1 refinement and of cryptographic to channel messages in the L2-L3 refinement).

    We ultimately envision tool-based development where engineers can choose standard properties and follow high-level recipes for building guard, channel, and crypto protocols, with tools checking their steps along the way. To achieve this, we will work in two directions. First, we want to extend the range of protocols that can be modeled and reasoned about. For example, we plan to add support for Diffie-Hellman key agreement, compromising adversaries, and more complex properties such as perfect forward secrecy, possibly along the lines of [6]. Second, we would like to automate development based on our strategy. It should be possible to derive protocol models directly from high-level protocol descriptions such as the authentication graphs of Figure 2 and the sequence charts of Figures 3, 4, and 5. Moreover, with suitable infrastructure it should be feasible to automatically generate and (as far as possible) prove invariants and simulation relations, given their strong regularity.

    Acknowledgements

    We would like to thank Ivano Somaini for his work on the Isabelle/HOL sources of the Kerberos protocols. We are also grateful to Vincent Jugé, Son Thai Hoang, Eugen Zalinescu, Binh Thanh Nguyen, and Ognjen Maric for their careful proofreading.

    10. REFERENCES

    [1] M. Abadi and L. Lamport. The existence of refinement mappings. Theor. Comput. Sci., 82(2):253–284, 1991.

    [2] J.-R. Abrial. Modeling in Event-B: System and Software Engineering. Cambridge University Press, 2010.

    [3] A. Armando et al. The AVISPA Tool for the Automated Validation of Internet Security Protocols and Applications. In Proceedings of CAV 2005, volume 3576 of Lecture Notes in Computer Science, pages 281–285. Springer, 2005.

    [4] M. Backes, B. Pfitzmann, and M. Waidner. The reactive simulatability (RSIM) framework for asynchronous systems. Inf. Comput., 205(12):1685–1720, 2007.

    [5] D. Basin, S. Mödersheim, and L. Viganò. OFMC: A symbolic model checker for security protocols. International Journal of Information Security, 4(3):181–208, June 2005.

    [6] D. A. Basin and C. J. F. Cremers. Modeling and analyzing security in the presence of compromising adversaries. In D. Gritzalis, B. Preneel, and M. Theoharidou, editors, ESORICS, volume 6345 of Lecture Notes in Computer Science, pages 340–356. Springer, 2010.

    [7] G. Bella. Formal Correctness of Security Protocols. Information Security and Cryptography. Springer, 2007.

    [8] G. Bella and L. C. Paulson. Kerberos version 4: Inductive analysis of the secrecy goals. In Proc. 5th European Symposium on Research in Computer Security (ESORICS), pages 361–375, 1998.

    [9] G. Bella and L. C. Paulson. Mechanising BAN Kerberos by the inductive method. In A. J. Hu and M. Y. Vardi, editors, CAV, volume 1427 of Lecture Notes in Computer Science, pages 416–427. Springer, 1998.

    [10] G. Bella and E. Riccobene. Formal analysis of the Kerberos authentication system. Journal of Universal Computer Science, 3(12):1337–1381, 1997.

    [11] P. Bieber and N. Boulahia-Cuppens. Formal development of authentication protocols. In Sixth BCS-FACS Refinement Workshop, 1994.

    [12] F. Butler, I. Cervesato, A. D. Jaggard, and A. Scedrov. A formal analysis of some properties of Kerberos 5 using MSR. In Proc. 15th IEEE Computer Security Foundations Workshop (CSFW), pages 175–. IEEE Computer Society, 2002.

    [13] F. Butler, I. Cervesato, A. D. Jaggard, A. Scedrov, and C. Walstad. Formal analysis of Kerberos 5. Theor. Comput. Sci., 367:57–87, November 2006.

    [14] M. J. Butler. On the use of data refinement in the development of secure communications systems. Formal Aspects of Computing, 14(1):2–34, 2002.

    [15] R. Canetti. Universally composable security: A new paradigm for cryptographic protocols. In FOCS, pages 136–145, 2001.

    [16] I. Cervesato, C. Meadows, and D. Pavlovic. An encapsulated authentication logic for reasoning about key distribution protocols. In CSFW ’05: Proceedings of the 18th IEEE Workshop on Computer Security Foundations, pages 48–61, Washington, DC, USA, 2005.

    [17] E. Cohen. First-order verification of cryptographic protocols. Journal of Computer Security, 11(2):189–216, 2003.

    [18] C. J. F. Cremers. The Scyther tool: Verification, falsification, and analysis of security protocols. In A. Gupta and S. Malik, editors, CAV, volume 5123 of Lecture Notes in Computer Science, pages 414–418. Springer, 2008.

    [19] A. Datta, A. Derek, J. C. Mitchell, and D. Pavlovic. A derivation system and compositional logic for security protocols. Journal of Computer Security, 13:423–482, 2005.

    [20] S. Delaune, S. Kremer, and O. Pereira. Simulation based security in the applied pi calculus. In R. Kannan and K. N. Kumar, editors, FSTTCS, volume 4 of LIPIcs, pages 169–180. Schloss Dagstuhl - Leibniz-Zentrum fuer Informatik, 2009.

    [21] D. E. Denning and G. M. Sacco. Timestamps in key distribution protocols. Communications of the ACM, 24(8):533–536, 1981.

    [22] L. Gong. Variations on the themes of message freshness and replay - or the difficulty in devising formal methods to analyze cryptographic protocols. In Proceedings of the Computer Security Foundations Workshop VI (CSFW), pages 131–136, 1993.

    [23] G. Lowe. A hierarchy of authentication specifications. In IEEE Computer Security Foundations Workshop, pages 31–43, Los Alamitos, CA, USA, 1997. IEEE Computer Society.

    [24] N. A. Lynch. I/O automaton models and proofs for shared-key communication systems. In Proc. 12th IEEE Computer Security Foundations Workshop (CSFW), pages 14–29, 1999.

    [25] U. M. Maurer and P. E. Schmid. A calculus for secure channel establishment in open networks. In Proc. 9th European Symposium on Research in Computer Security (ESORICS), pages 175–192, 1994.

    [26] R. Milner. An algebraic definition of simulation between programs. In IJCAI, pages 481–489, 1971.

    [27] S. Mödersheim and L. Viganò. Secure pseudonymous channels. In M. Backes and P. Ning, editors, Proc. 14th European Symposium on Research in Computer Security (ESORICS), volume 5789 of Lecture Notes in Computer Science, pages 337–354. Springer, 2009.

    [28] R. Needham and M. D. Schroeder. Using encryption for authentication in large networks of computers. Communications of the ACM, 21(12):993–999, 1978.

    [29] B. C. Neuman and T. Ts’o. Kerberos: An authentication service for computer networks. IEEE Communications Magazine, 32(9):33–38, 1994.

    [30] T. Nipkow, L. C. Paulson, and M. Wenzel. Isabelle/HOL – A Proof Assistant for Higher-Order Logic, volume 2283 of Lecture Notes in Computer Science. Springer, 2002.

    [31] L. Paulson. The inductive approach to verifying cryptographic protocols. J. Computer Security, 6:85–128, 1998.

    [32] D. Pavlovic and C. Meadows. Deriving secrecy in key establishment protocols. In Proc. 11th European Symposium on Research in Computer Security (ESORICS), pages 384–403, 2006.

    [33] C. Sprenger and D. Basin. Developing security protocols by refinement. In Proc. 17th ACM Conference on Computer and Communications Security (CCS), pages 361–374, Chicago, IL, USA, October 2010.

    [34] J. G. Steiner, B. C. Neuman, and J. I. Schiller. Kerberos: An authentication service for open network systems. In Winter 1988 Usenix Conference, February 1988.

    APPENDIX

    A. STEP 5 IN PROPOSITION 8(II) (L1)

    We sketch the proof of guard strengthening in the refinement of the abstract event commit_a0i([B,A], (Kab, Ts, Ta)) by the concrete step5_krb1(A, B, Rb, Kab, Ts, Ta), which is part of the proof that establishes the responder’s injective agreement with the initiator. We define the function sigC(r) = mr that reconstructs the abstract signals from the runs r, where mr is defined as follows.

    mr(Running([B,A], (Kab, Ts, Ta))) = cI
    mr(Commit([B,A], (Kab, Ts, Ta))) = cR

    where cI and cR are the following cardinalities:

    cI = |{Na | ∃nl. r(Na) = (A, B, Init(Kab, Ts#Ta#nl))}|
    cR = |{Rb | ∃nl. r(Rb) = (A, B, Resp(Kab, Ts#Ta#nl))}|

    For the remaining cases, we set mr(x) = 0. The guard strengthening proof obligation requires that we prove

    cR < cI (4)

    assuming that A and B are honest and that the guards of step5_krb1 are satisfied. Since we use a cache to prevent a responder B from accepting the same key Kab and timestamp Ta several times, there cannot be any prior execution of Step 5 with these parameters. Therefore, we strengthen the subgoal (4) by decomposing it as follows.

    cI > 0 (5)

    cR = 0 (6)

    Subgoal (5) suggests the authentication guard for step5_krb1. Once the guard has been added, the subgoal is proved immediately. Subgoal (6) requires that there is no responder run Rb′ and no list nl corresponding to a commit signal, i.e., with

    s.runs(Rb′) = (A, B, Resp(Kab, Ts#Ta#nl)). (7)

    This is guaranteed by an invariant stating that condition (7) and the validity of Ta (i.e., s.clk < Ta + La) entail the existence of a cache entry (B, Kab, Ta) ∈ s.cch for such runs. This invariant leads to a contradiction with the guard (B, Kab, Ta) ∉ s.cch of Step 5 that checks for replays. This concludes the proof that the guards of the step5_krb1 event imply the guard of commit_a0i.
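The role of the replay cache in this argument can be illustrated with a small executable sketch. The state fields clk and cch, the lifetime La, and the two guards follow the paper's notation; the Python encoding itself is ours and is not part of the Isabelle development.

```python
# Sketch of the responder's Step 5 guards in krb1: a commit on
# (Kab, Ts, Ta) requires that Ta is still valid and that no cache
# entry (B, Kab, Ta) exists; a successful commit adds that entry.

La = 10  # acceptance window for the authenticator timestamp Ta

class State:
    def __init__(self):
        self.clk = 0       # current clock value s.clk
        self.cch = set()   # replay cache s.cch: entries (B, Kab, Ta)

def step5(s, A, B, Kab, Ts, Ta):
    """Responder B's Step 5: commit at most once per (Kab, Ta)."""
    if not s.clk < Ta + La:       # recentness guard on Ta
        return False
    if (B, Kab, Ta) in s.cch:     # replay guard: (B, Kab, Ta) not cached
        return False
    s.cch.add((B, Kab, Ta))       # cache the commit
    return True

s = State()
first  = step5(s, "A", "B", "Kab", 1, 2)   # fresh parameters: accepted
replay = step5(s, "A", "B", "Kab", 1, 2)   # identical replay: rejected
```

The second call fails on the replay guard, which is exactly what forces cR = 0 in subgoal (6).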

    B. RESPONDER’S KEY FRESHNESS (L1)

    Key freshness for the responder in krb1 expresses that a session key K appearing in a run of a responder B with an initiator A uniquely identifies that run, provided that A and B are honest.

    rfresh_krb1 ≡ {s | ∀R, R′, A, A′, B, B′, K, Ts, Ts′, Ta, Ta′.
        s.runs(R) = (A, B, Resp(K, [Ts, Ta])) ∧
        s.runs(R′) = (A′, B′, Resp(K, [Ts′, Ta′])) ∧
        B ∉ bad ∧ A ∉ bad
        → R = R′ }

    We have proved that this is an external invariant of krb1.

    Proposition 12. oreach(krb1) ⊆ rfresh_krb1.

    All cases except for the critical Step 5 by the responder are proved automatically. In the latter case, the proof relies on most other invariants proved for krb1 or inherited from its ancestors. Secrecy and the responder’s agreement with the server are used to exclude the cases where the initiator A′ or the responder B′ in the run R′ is dishonest. Otherwise, we use the agreement with the initiator to reduce the proof to the initiator’s key freshness (proved for kt1in).
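Stated over a finite run map, the freshness property can be rendered as an executable check (an illustration in Python under our own encoding of runs and roles, not the Isabelle formalization):

```python
# rfresh as a check over a finite run map: if two responder runs of
# honest agents carry the same session key K, they must be the same
# run. The run map and the set bad mirror the paper's notation.

def rfresh(runs, bad):
    """Return True iff equal responder keys imply equal runs."""
    for R, (A, B, role, K) in runs.items():
        for R2, (_, _, role2, K2) in runs.items():
            if (role == role2 == "Resp" and K == K2
                    and A not in bad and B not in bad
                    and R != R2):
                return False   # same key in two distinct honest runs
    return True

fresh = rfresh({"Rb1": ("A", "B", "Resp", "K1"),
                "Rb2": ("A", "B", "Resp", "K2")}, bad=set())
stale = rfresh({"Rb1": ("A", "B", "Resp", "K1"),
                "Rb2": ("A", "B", "Resp", "K1")}, bad=set())
```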

    C. STEP 5 IN PROPOSITION 9 (L2)

    The guard strengthening proof obligation for Step 5 requires that the guards for receiving M2b and M3 in the responder B’s step5_krb2 imply the following three non-local security guards of step5_krb1: the authorization guard

    (Kab, B) ∈ azC(s.runs), (8)

    the server authentication guard

    B ∉ bad →
    ∃Na, nl. s.runs(Kab) = (A, B, Serv(Na#Ts#nl)), (9)

    and the initiator authentication guard

    B ∉ bad ∧ A ∉ bad →
    ∃Na, nl. s.runs(Na) = (A, B, Init(Kab, Ts#Ta#nl)). (10)

    The following invariant arises directly from the proof obligation expressing the strengthening of the server authentication guard (9). It expresses the guarantee that an honest B gets from receiving message M2b.

    M2b+_krb2 ≡ {s | ∀Kab, A, B, Ts.
        Secure(B, M2b(Kab, A, Ts)) ∈ s.chan ∧ B ∉ bad
        → ∃Na. s.runs(Kab) = (A, B, Serv([Na, Ts])) }

    A related invariant, M2b−_krb2, describes the case where B is dishonest. In this case, Kab is either a corrupted key or a session key generated by the server for some dishonest agent. These two invariants suffice to derive the strengthening of the authorization guard (8).

    The strengthening of the initiator authentication guard (10) corresponds to the following proof obligation. It describes the authentication guarantee that the responder B gets from receiving messages M2b and M3: A knows Kab, Ts, and Ta in an initiator run Na, provided that A and B are honest.

    Secure(B, M2b(Kab, A, Ts)) ∈ s.chan ∧
    dAuth(Kab, M3(A, Ta)) ∈ s.chan ∧
    A ∉ bad ∧ B ∉ bad
    → ∃Na, nl. s.runs(Na) = (A, B, Init(Kab, Ts#Ta#nl)) (11)

    When trying to prove that this is an invariant, we got stuck in a proof state that suggested replacing the honesty of A and B by the secrecy of Kab. This enabled the successful completion of the proof.

    M3_krb2 ≡ {s | ∀Kab, A, B, Ts, Ta.
        Secure(B, M2b(Kab, A, Ts)) ∈ s.chan ∧
        dAuth(Kab, M3(A, Ta)) ∈ s.chan ∧
        Kab ∉ ikk(s.chan)
        → ∃Na, nl. s.runs(Na) = (A, B, Init(Kab, Ts#Ta#nl)) }

    To establish the required guard strengthening (11) using the invariant M3_krb2, it is sufficient to show that its first premise (i.e., Secure(B, M2b(Kab, A, Ts)) ∈ s.chan) and third premise (i.e., the honesty of A and B) imply Kab ∉ ikk(s.chan). This in turn follows from the invariant M2b+_krb2 above and the following invariant, which guarantees to the server that the intruder never learns a session key generated for honest agents.

    ikkSv_krb2 ≡ {s | ∀Kab, A, B, nl.
        s.runs(Kab) = (A, B, Serv(nl)) ∧
        A ∉ bad ∧ B ∉ bad
        → Kab ∉ ikk(s.chan) }
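The chain of implications in this derivation can be traced with a small executable illustration. The state encoding below (the channel as a set of events, runs as a map, ikk as the intruder's key knowledge) is our own; only the names follow the paper.

```python
# Illustration of deriving Kab ∉ ikk(s.chan): by M2b+, an honest B
# holding a Secure M2b message has a matching server run for Kab; by
# ikkSv, server keys generated for honest A and B stay out of ikk.

s = {
    "chan": {("Secure", "B", ("M2b", "Kab", "A", "Ts"))},  # B received M2b
    "runs": {"Kab": ("A", "B", "Serv")},                   # server run for Kab
    "bad": set(),                                          # A and B honest
}

def ikk(s):
    """Intruder key knowledge under ikkSv: only keys of server runs
    involving a dishonest agent may leak."""
    return {K for K, (A, B, role) in s["runs"].items()
            if role == "Serv" and (A in s["bad"] or B in s["bad"])}

# Premises of (11) used in the derivation:
received_m2b = ("Secure", "B", ("M2b", "Kab", "A", "Ts")) in s["chan"]
honest = "A" not in s["bad"] and "B" not in s["bad"]

# Conclusion needed to discharge the third premise of M3_krb2:
secret = "Kab" not in ikk(s)
```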

    This completes our sketch of the refinement proof of step5_krb1 by step5_krb2.