
Page 1: On 0,1-laws and super-simplicitycdhill.faculty.wesleyan.edu/files/2018/04/Almost... · struction of useful asymptotic probability measures using nothing more than super-simplicity

On 0,1-laws and super-simplicity

Cameron Donnay Hill

Abstract

In Hill [13], it is shown (as a special case) that for an algebraically trivial Fraïssé class K, if the generic theory TK is an almost-sure theory by way of an asymptotic probability measure with independent sampling, then TK is super-simple of SU-rank 1. Proving the converse requires a method for producing asymptotic probability measures with the appropriate properties, and this turns out to be rather difficult.

Viewing super-simplicity as just one level of an infinite hierarchy of higher amalgamation properties, Kruckman [20] uses all of these properties to produce asymptotic probability measures that yield the 0,1-law. The class of theories that meet the requirements of that construction is certainly smaller than the super-simple theories, likely much smaller. Here, building on Ackerman-Freer-Patel [2], Lovász [22], and related work, we give a much more subtle construction of useful asymptotic probability measures using nothing more than super-simplicity cast as a single higher amalgamation property. With this construction, we show that if TK is super-simple of SU-rank 1, then it is an almost-sure theory. This resolves a conjecture of Hill-Kruckman [14], an open question on the generic tetrahedron-free 3-hypergraph, and an (apparently) open question of Cherlin [6].

1 Introduction

1.1 Background

The first notice of the phenomenon of 0,1-laws for first-order logic over classes of finite structures lies in [9] and [10], where the 0,1-law for finite graphs is demonstrated, meaning that the almost-sure theory is complete. Naturally, there have been a number of developments in the line of demonstrating 0,1-laws for other classes of finite structures. This includes [19], for example, demonstrating the 0,1-law for the class of Kn-free graphs (fixed but arbitrary n ≥ 3) relative to uniform probability measures. A second example is in work on “sparse” classes of structures, as in [5, 21, 27]. The project of this paper is slightly different: We wish to answer the question, “What are almost-sure theories like?” To address this question, we restrict our attention to a smaller part of the 0,1-law phenomenon:

• The asymptotic probability measures in question must be compatible with the invariant-measures framework of [2]. The work of [9] and [10] falls under this rubric, but the uniform measures on triangle-free graphs and the measures associated with classes of sparse structures (e.g. edge-probability n−α for irrational α > 0) are excluded.

• We interest ourselves only in 0,1-laws for Fraïssé classes K such that the almost-sure theory is equal to the generic theory TK of K (the theory of the Fraïssé limit). This excludes [19] again (since the almost-sure theory is the theory of the generic bipartite graph), but [9] and [10] remain.


A consequence of these restrictions, proved in [12], is that if TK is an almost-sure theory under these restrictions, then it must be algebraically trivial. Even with these restrictions, though, there seems to be a very rich theory to investigate, both for its own sake and for its relevance to the more general phenomenon of pseudo-finiteness of ℵ0-categorical theories. A much deeper result than algebraic triviality was obtained in [13], showing that if the asymptotic probability measure in question is “reasonable” in that it has good conditional independence properties, then the generic theory – identical to the almost-sure theory – is super-simple of rank 1; even more, the Fraïssé class is “almost” a 1-dimensional asymptotic class in the sense of [23], and its generic theory is measurable:

Theorem 1.1.1 ([13]). Let K be a combinatorial class.¹ Suppose there is an asymptotic probability measure m for K with independent sampling such that Tm = TK. Then TK is MS-measurable over the counting/algebraic dimension, and from this, it follows that TK is super-simple of SU-rank 1.

1.2 Contributions

A bit of reflection on Theorem 1.1.1 reveals that a proof of its converse would yield a complete characterization of “strongly” almost-sure combinatorial theories without any reference to probabilities at all. Moreover, such a proof would, presumably, require a general method for producing asymptotic probability measures with appropriate properties for any super-simple combinatorial theory. Our main contribution in this paper is precisely a proof of the converse of Theorem 1.1.1 (as conjectured in [14]), and indeed, we present a general method of generating asymptotic probability measures of the right kind. That is, we prove:

Theorem 4.2.1. Let K be a combinatorial class. The following are equivalent:

1. TK is super-simple of SU-rank 1.

2. There is an asymptotic probability measure m for K with independent sampling whose almost-sure theory Tm is equal to TK.

As an immediate consequence of Theorem 4.2.1, we find that whenever TK is super-simple of SU-rank 1, we obtain (modulo the asymptotic probability measure) tight estimates of cardinalities of definable sets in members of K. Moreover, for combinatorial theories, super-simplicity and MS-measurability are actually identical.

Corollary 1.2.1. Let K be a combinatorial class. If TK is super-simple of SU-rank 1, then it is MS-measurable over the counting/algebraic dimension, and K itself is “almost” a 1-dimensional asymptotic class.

We also resolve an open question about concrete combinatorial classes. To the best of our knowledge, it was not known previously whether the theory of the generic tetrahedron-free 3-hypergraph is even pseudo-finite. As a corollary of Theorem 4.2.1, we find that this theory is not just pseudo-finite but that it is an almost-sure theory.²

¹Combinatorial class means “algebraically trivial Fraïssé class in a countable relational language with finitely many isomorphism types of each finite cardinality.” A combinatorial theory, then, is just the generic theory of a combinatorial class.

²The construction from [2] of invariant measures for arbitrary algebraically trivial countable structures is, perhaps, complicated enough to justify our previous ignorance of whether TG3:4 can be recovered as an almost-sure theory.


Corollary 1.2.2. Let L be the language with one 3-ary relation symbol, and let G3:4 be the Fraïssé class of all finite tetrahedron-free 3-hypergraphs. Then G3:4 is a combinatorial class, and its generic theory TG3:4 is super-simple of SU-rank 1. Thus, there is an asymptotic probability measure m for G3:4 with independent sampling such that Tm = TG3:4.

This result generalizes to a very large family of combinatorial classes in finite relational languages. For a finite relational language L, let S = S(L) be the class of finite L-structures A such that for every n-ary relation symbol R, (i) RA(a) ⇒ RA(aσ(0), ..., aσ(n−1)) for each σ ∈ Sym(n), and (ii) RA(a) ⇒ ai ≠ aj whenever i < j < n. For each k, a structure A ∈ S is k-irreducible if for any a0, ..., ak−1 ∈ A, there are R ∈ sig(L) and b ∈ RA such that ai ∈ b for each i < k. For F ⊂ S, we write SF to denote the class of structures B ∈ S such that for no A ∈ F is there an injective homomorphism A → B. It is well-known that SF is a combinatorial class provided that every A ∈ F is 2-irreducible. In unpublished work, G. Conant has shown that for F ⊂ S such that SF is a combinatorial class, TSF is super-simple of SU-rank 1 if and only if every A ∈ F is 3-irreducible. This fact together with Theorem 4.2.1 gives us a complete characterization of almost-sure theories of classes of “societies” (in the language of [25]) obtained by excluding/forbidding irreducible structures – again, eliminating any reference to probabilities. By choosing L in the right way, we then use Corollary 1.2.3 to resolve a question of Cherlin [6].

Corollary 1.2.3. Let L be a finite relational language, and let F ⊂ S be such that every A ∈ F is 2-irreducible. The following are equivalent:

1. Every A ∈ F is 3-irreducible.

2. TSF is an almost-sure theory by way of an asymptotic probability measure with independent sampling.

1.3 Outline of the paper

In Section 2, we formally define combinatorial classes and their generic models. We also introduce a family of higher amalgamation properties (appearing also in [20] with slightly different names), and we give the characterization of super-simplicity in terms of one of these properties. Finally, we translate types over the generic model into certain kinds of expansions of the generic model, and we show that super-simplicity is sufficient for the existence of a generic type (in a Baire category sense).

In Section 3, we recall three different sorts of measures that are relevant to model theory – Keisler measures, invariant measures, and asymptotic probability measures – and we expose how these three notions are related to each other. We recall a relatively innocuous theorem of [14] which draws an equivalence between asymptotic probability measures with independent sampling and certain kinds of Keisler measures. This result is used later in the proof of Theorem 4.2.1.

In work spread across Sections ?? and 4, we show that a “balanced” invariant measure concentrated on the generic type engenders the sort of Keisler measures that, in turn, yield asymptotic probability measures with independent sampling. That is, we show that the existence of such a balanced invariant measure is enough to prove Theorem 4.2.1, and then (in Section 4), we show how to construct balanced invariant measures from certain a priori weaker invariant measures.

In Section 5, we collect some further consequences of Theorem 4.2.1. We prove a generalization of a theorem of [20] that allows one to prove pseudo-finiteness of a combinatorial theory by showing that


it is approximable by simple theories in a certain sense. We also make some conjectures connecting our work in this paper to a program for resolving Cherlin’s Question: “Is the Henson graph pseudo-finite?” Finally, we use Corollary 1.2.3 to address another question of Cherlin (Problem A of [6]): “Are there uncountably many pseudo-finite countable ℵ0-categorical structures?”


2 Combinatorial classes, higher amalgamation, and expansions of generic models

In this section, we introduce the definitions and conventions that a priori do not pertain directly to pseudo-finiteness or 0,1-laws. The most useful new result here is Theorem 2.3.4, which asserts that super-simplicity with SU-rank 1 is sufficient for the existence of a generic type.³ Later, concentrating measure on the isomorphism type of a structure associated with the generic type is a key step in the proof of Theorem 4.2.1.

2.1 Combinatorial classes

Throughout this paper, the signature sig(L) of any language L under discussion will consist of countably many relation symbols and no constant or function symbols. We often write “R(n) ∈ sig(L)” to mean that R is an n-ary relation symbol in the signature underlying L.

For infinite structures, our notation follows that of [24]. So, for example, we use calligraphic upper-case letters like M to denote infinite structures with universe M, and in general, our notation for such structures is more or less standard. For finite structures, we use simple upper-case letters like A, B, C, and we identify finite structures with their universes unless some other set is explicitly mentioned. For a subset A of M, where M is an infinite structure, M↾A is the induced substructure of M with universe A; when no confusion is likely to arise, however, we will often write A instead of M↾A. If B is a finite structure and X ⊆ B, then B↾X is likewise the induced substructure of B with universe X. A structure M is algebraically trivial if aclM(A) = A for all A ⊆ M. When M is ℵ0-categorical (by convention, in a countable language), we often tacitly identify a type p(x) over a finite set C ⊂fin M with a formula that isolates it; in any case, the solution set p(M) is a definable set. We write S∗1(M) to denote the set of complete types p(x) over M such that p(x) ⊢ x ≠ a for each a ∈ M.

We coin the term “combinatorial class” mainly as a convenience – it allows us to avoid saying“algebraically trivial Fraïssé class” over and over again.

Definition 2.1.1 (Combinatorial classes). Let K be a class of finite L-structures. Then K is a combinatorial class if it has the following properties:

• K is closed under isomorphism, and for every n, the set {A ∈ K : |A| = n}/≅ is finite.

• (Heredity property - HP) For any B ∈ K and any induced substructure A ≤ B, A ∈ K as well.

• (Disjoint joint-embedding property - disjoint-JEP) For any A1, A2 ∈ K, there are B ∈ K and embeddings fi : Ai → B (i = 1, 2) such that f1A1 ∩ f2A2 = ∅.

• (Disjoint amalgamation property - disjoint-AP) For all A, B1, B2 ∈ K and embeddings fi : A → Bi (i = 1, 2), there are C ∈ K and embeddings gi : Bi → C such that g1 ◦ f1 = g2 ◦ f2 and g1B1 ∩ g2B2 = g1f1A = g2f2A.

Thus, a combinatorial class is just a slightly special sort of Fraïssé class.

³The terminology “generic type” is appropriate here because it is generic in the sense of Baire category. It’s just a little unfortunate that “generic type” is also used heavily in the model theory of groups.


The “fundamental” theorem of combinatorial classes is the following (whose proof can be found in [15] and several other places):

Theorem 2.1.2. If K is a combinatorial class, then there is a countably infinite structure M, a generic model of K, with the following properties:

• (K-closedness) For every X ⊂fin M, M↾X ∈ K.

• (K-universality) For every A ∈ K, there is an embedding A → M.

• (Ultrahomogeneity) For any A, B ∈ K with A ≤ B and any embedding f0 : A → M, there is an embedding f : B → M such that f0 ⊆ f.

• M is algebraically trivial.

Any countable structure with the first three of these properties is isomorphic to M, so M is usually called the generic model of K (also called the Fraïssé limit of K), and the generic theory TK = Th(M) is well-defined in terms of K. This theory TK is ℵ0-categorical, eliminates quantifiers, and every one of its models is algebraically trivial.
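As a concrete illustration (our example, not part of the paper's development): for K the class of all finite graphs, the generic model M is the Rado graph, which can be presented on the natural numbers by the BIT predicate. The one-point extension property checked below is exactly what drives K-universality and ultrahomogeneity; the helper `witness` is a name of ours.

```python
from itertools import combinations

def edge(i, j):
    """Rado graph via the BIT predicate: for i < j, edge iff bit i of j is 1."""
    i, j = min(i, j), max(i, j)
    return (j >> i) & 1 == 1

def witness(U, V):
    """Return z adjacent to every u in U and to no v in V (U, V disjoint sets).
    Set exactly the bits indexed by U, plus one high bit m to push z above
    everything in U and V; then edge(z, u) just reads bit u of z."""
    m = max(U | V, default=0) + 1
    return sum(1 << u for u in U) + (1 << m)

# One-point extension property, checked over all disjoint U, V ⊆ {0,...,4}:
pts = range(5)
subsets = [set(c) for r in range(6) for c in combinations(pts, r)]
for U in subsets:
    for V in subsets:
        if U & V:
            continue
        z = witness(U, V)
        assert z not in U | V
        assert all(edge(z, u) for u in U)
        assert not any(edge(z, v) for v in V)
```

Iterating this one-point extension back and forth is the usual route to ultrahomogeneity and to uniqueness of the generic model.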

Remark 2.1.3. Suppose T is an algebraically trivial ℵ0-categorical theory in a countable language. Then there is a combinatorial class K such that T and TK are inter-definable – take K to be the age of the expansion of the countable model M ⊨ T obtained by adding relation symbols to name every 0-definable set (i.e. by Morleyizing T). Thus, if we wish to study pseudo-finiteness of algebraically trivial ℵ0-categorical theories, we lose nothing in restricting our attention to combinatorial classes and their generic theories.

From now on...

Throughout the remainder of this paper, K is a combinatorial class with language L, and M is (a fixed copy of) its generic model, with universe M.

2.2 Higher amalgamation

We introduce a hierarchy of higher amalgamation properties for classes of finite structures. With a slightly different notation, these appeared in [20], and very similar, if not identical, notions appear in [28] and in the literature on abstract elementary classes.

Definition 2.2.1. Let 2 ≤ n < ω. As usual, it seems, we write P−(n) for the set of proper subsets of n = {0, 1, ..., n − 1}. An n-disjoint amalgamation problem in K (n-DAP problem) is a family (As)s∈P−(n) of structures in K such that for all s, t ⊊ n:

• s ⊆ t ⇒ As ≤ At;

• As ∩ At = As∩t as sets.

A solution of (As)s∈P−(n) in K is a structure B ∈ K such that As ≤ B for each s ⊊ n. We allow A∅ = ∅, but this is not required. We say that K has n-DAP if every n-DAP problem in K has a solution in K.

We note that any class can have n-DAP or not for any n. In fact, “K is a combinatorial class” is equivalent to “K has HP and 2-DAP.”


It is not difficult to verify that the class of all finite graphs has n-DAP for every n, and in general, S(L) has n-DAP for every n, whenever L is a finite relational language. On the other hand, the class H of finite triangle-free graphs does not have 3-DAP, and the class G3:4 has 3-DAP but does not have 4-DAP. “n-DAP for every n” makes it easy to produce asymptotic probability measures that engender 0,1-laws (see Definition 3.1.1), but it is much less obvious that 3-DAP alone is sufficient. However, 3-DAP does have an immediate equivalent in model-theoretic terminology, as expressed in Proposition 2.2.3 below.
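The failure of 3-DAP for H can be seen concretely (the encoding below is ours): take A∅ = ∅, singletons A{i} = {vi}, and let each A{i,j} be the edge {vi, vj}. Any solution contains all three edges, hence a triangle, so no solution is triangle-free:

```python
from itertools import combinations

def has_triangle(edges):
    """Does the graph (given as a set of frozenset edges) contain a triangle?"""
    verts = sorted({v for e in edges for v in e})
    return any(
        {frozenset((a, b)), frozenset((a, c)), frozenset((b, c))} <= edges
        for a, b, c in combinations(verts, 3)
    )

# The 3-DAP problem: singletons A_{i} = {i} and edges A_{i,j} = {i, j}.
# Any solution B contains each A_{i,j} as an induced substructure, so B's
# edge set includes all three edges below -- a triangle.
forced = {frozenset(p) for p in combinations(range(3), 2)}
assert has_triangle(forced)  # hence no triangle-free solution exists
```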

Observation 2.2.2. Suppose TK is super-simple of SU-rank 1. Then:

1. TK has trivial forking dependence (in the real sorts): A ⫝f_C B ⇔ A ∩ B ⊆ C.

2. Every finite subset C ⊂fin M is an amalgamation base for non-forking independence (in real sorts).

Proposition 2.2.3. The following are equivalent:

1. TK is super-simple of SU-rank 1.

2. K has 3-DAP.

Proof. For 1⇒2: Let (As)s∈P−(3) be a 3-DAP problem in K. We may rename the structures involved as follows:

• A∅ = C;

• A{1} = B1, A{2} = B2, and A{0} = Ce, where e ∩ B1B2 = ∅;

• A{0,1} = B1e, A{0,2} = B2e, and A{1,2} = B1B2.

Since K has 2-DAP, we may assume that C, B1, B2 are substructures of M and that there are a1, a2 in M such that B1a1 ≅B1 B1e and B2a2 ≅B2 B2e. By item 1 (since ai ∩ Bi = ∅ ⊆ C), we see that B1 ∩ B2 = C, a1 ≡C a2, B1 ⫝f_C B2, a1 ⫝f_C B1, and a2 ⫝f_C B2. Also by item 1, there is an a in M such that a ≡B1 a1, a ≡B2 a2, and a ⫝f_C B1B2. Setting D = B1B2a, we have a solution of (As)s in K.

For 2⇒1 (from [20]): One easily verifies that ⫝= has all the properties required of an independence relation except possibly independent-amalgamation (where A ⫝=_C B means A ∩ B ⊆ C). We will show that if K has 3-DAP, then ⫝= has independent-amalgamation – in which case, it must be non-forking independence in a simple theory, yielding item 1. So, let C, B1, B2 ⊂fin M and a1, a2 ∈ Mn be such that B1 ∩ B2 = C (so B1 ⫝=_C B2), a1 ≡C a2, a1 ⫝=_C B1, and a2 ⫝=_C B2. We pick a new tuple of elements e and make structures in K as follows:

• A∅ = C;

• A{1} = B1, A{2} = B2, and A{0} = Ce ≅C Ca1 ≅C Ca2;

• A{0,1} = B1e ≅B1 B1a1, A{0,2} = B2e ≅B2 B2a2, and A{1,2} = B1B2.

We have constructed a 3-DAP problem (As)s∈P−(3) in K, so since K has 3-DAP, there is a solution for it, D ∈ K. It follows that qftp(a1/B1) ∪ qftp(a2/B2) ∪ “x ∩ B1B2 = ∅” is consistent, and any realization of this type suffices.


2.3 Special expansions

In this subsection, we will see that 3-DAP is sufficient for the existence of a generic type over M. To prove existence and to use generic types later on, it is natural to recast types q(x) ∈ S∗1(M) as expansions Mq of M. For the proof of Theorem 4.2.1, this point of view also provides us with a useful isomorphism type on which to concentrate measure in the sense of [2]. The existence of generic types given 3-DAP is proved in Theorem 2.3.4 below, and that is the one important fact verified in this subsection.

Definition 2.3.1. We fix a type p(x0, ..., xk−1) ∈ S(TK) such that p(x) ⊢ ∧i<j xi ≠ xj. We write S∗p(M) for the set of complete types q(x) over M extending p such that a ⊨ q ⇒ a ∩ M = ∅.

Let τp0 be the signature consisting of an n-ary relation symbol Rp for each type p(x, y) ∈ Sn+k(TK) such that p(x, y) ⊢ p(x) ∧ “x ∩ y = ∅” ∧ ∧i<j yi ≠ yj (n > 0). Then let L p be the language with sig(L p) = sig(L) ∪ τp0. For q(x) ∈ S∗p(M), let Mq be the L p-expansion of M such that

Mq ⊨ Rp(a) ⇔ p(x, a) ⊂ q(x)

whenever a ∈ Mn and p(x, y) ∈ Sn+k(TK) is such that p(x, y) ⊢ p(x) ∧ “x ∩ y = ∅” ∧ ∧i<j yi ≠ yj (n > 0).

We define Kp to be the isomorphism-closure of {Mq↾C : C ⊂fin M, q ∈ S∗p(M)}, which has HP but need not have AP or JEP in general.

Let’s say that a type q(x) ∈ S∗p(M) is p-generic if Kp is a combinatorial class and Mq is (a copy of) its generic model.

Observation 2.3.2. Fix p(x) ∈ Sk(TK) such that p(x) ⊢ ∧i<j xi ≠ xj. Let A ∈ Kp – say |A| = n – and let a be an enumeration of A. Let p(x, y) ∈ Sn+k(TK) be such that A ⊨ Rp(a), and let A− = A↾L. Then, the pair (A−, p(x, a)) completely determines A in the following sense: Suppose B− ∈ K and f : A− → B− is an isomorphism of L-structures; if B ∈ Kp is an expansion of B− such that B ⊨ Rp(fa), then f is also an L p-isomorphism A → B. We also note that p(x, y) need not be a complete type – a quantifier-free-complete type would do.

This observation suggests a convenient notation for passing from structures A ∈ K to expansions A′ ∈ Kp. Namely, suppose A ∈ K, a is an enumeration of A, and p = p(x, a) is a quantifier-free-complete type over A such that p(x, a) ⊢ p(x) ∧ “x ∩ A = ∅”. Then, A+p denotes the unique structure A′ ∈ Kp described in the previous paragraph.

Lemma 2.3.3. If K has 3-DAP, then Kp is a combinatorial class for every p.

Proof. Let C, B1, B2 ∈ Kp with C ≤ B1, C ≤ B2, and B1 ∩ B2 = C. We must show that there is some D ∈ Kp such that B1 ≤ D and B2 ≤ D. Let C− = C↾L, B−1 = B1↾L, and B−2 = B2↾L. We allow C to be the empty structure, and without loss of generality, we assume that B−1, B−2 < M.

Let c, b1, b2 be enumerations of C, B1\C, and B2\C, respectively. Then there are quantifier-free-complete types q(x, y), q1(x, y, z1), q2(x, y, z2) such that C ⊨ Rq(c), B1 ⊨ Rq1(c, b1), B2 ⊨ Rq2(c, b2), and these completely determine C, B1, B2 over C−, B−1, B−2 in the sense of Observation 2.3.2. That is, C = C−+q(x, c), and so on. Clearly, qi(x, y, zi) ⊢ q(x, y) for both i = 1, 2. Now, we define a 3-DAP problem in K:

• A∅ = C−;


• A{1} = B−1 and A{2} = B−2 ;

• A{0} = eC−, where e is a k-tuple of distinct new elements outside of B1B2, and we require that A{0} ⊨ q(e, c);

• A{1,2} = M↾(B1B2);

• for both i = 1, 2, A{0,i} is the L-structure with universe eB−i extending B−i and such that A{0,i} ⊨ qi(e, c, bi).

Since K has 3-DAP, we recover a solution E− of (As)s∈P−(3); we may assume that As ≤ E− < M for each s ⊊ 3. Let p(x, y, z1, z2) be the quantifier-free-complete type such that M ⊨ p(e, c, b1b2). Finally, we define D to be the L p-structure such that D↾L = M↾(B1B2) and D ⊨ Rp(c, b1, b2). (As above, these data completely determine a unique L p-structure expanding M↾(B1B2).) Since p(x, y, z1, z2) ⊢ qi(x, y, zi) for both i = 1, 2, we have B1 ≤ D and B2 ≤ D ∈ Kp, and this completes the proof.

Theorem 2.3.4. K has 3-DAP if and only if there is a p-generic type over M for every p ∈ S∗(TK).

Proof. For “only if,” let p(x0, ..., xk−1) be such that p(x) ⊢ ∧i<j xi ≠ xj. Since Kp is a combinatorial class (by Lemma 2.3.3), it has a generic model Mp, which we may assume is an expansion of M – i.e. Mp↾L = M. For each s ⊂fin M, we fix an enumeration as of s and a type ps(x, ys) ∈ S|s|+k(TK) such that Mp ⊨ Rps(as) and so that the data (M↾s, ps(x, as)) determines Mp↾s in the sense of Observation 2.3.2. We observe that pt(x, at) ⊢ ps(x, as) whenever s ⊆ t ⊂fin M, and from this it follows that π(x) = ⋃{ps(x, as) : s ⊂fin M} is consistent. Since π↾C isolates a complete type over C for every C ⊂fin M, it’s also clear that π extends uniquely to a complete type q over M. Since for each s ⊂fin M, (M↾s, ps(x, as)) determines Mp↾s, we have Mp↾s < Mq for each s ⊂fin M; hence Mq = Mp.

For “if,” let (As)s∈P−(3) be a 3-DAP problem in K. Then, let e = (e0, ..., ek−1) enumerate A{0} \ A∅, and set p(x) = qftpA{0}(e). Moreover:

• Let C = A∅ and q(x, c) = qftpA{0}(e/C).

• Let B1 = A{1} and B2 = A{2}, and for i = 1, 2, set qi(x, bi, c) = qftpA{0,i}(e/Bi), where bi enumerates Bi \ C.

We may assume that C < M. Consequently, there are embeddings fi : Bi → M with fi↾C = idC (i = 1, 2) such that f1B1 ∩ f2B2 = C. Now, let C+ = C+q, B+1 = B1+q1, and B+2 = B2+q2. Since Kp is a combinatorial class (has 2-DAP) and arguing as in the previous paragraph to convert a model M′ ⊨ TKp into a type, there is a type q ∈ S∗k(M) extending p such that B+1 < Mq and B+2 < Mq. Since B1B2 is finite, there is a realization e′ of q↾B1B2 in M. Up to exchanging e for e′, the structure M↾B1B2e′ solves the original 3-DAP problem (As)s∈P−(3).

Observation 2.3.5. Let p(x0, ..., xk−1) ∈ Sk(TK) and p0(x0, ..., xm−1) ∈ Sm(TK), where m ≤ k and p ⊢ p0. Also, let σ ∈ Sym(k), and let pσ be the type such that pσ(xσ(0), ..., xσ(k−1)) = p(x).

1. Mp↾L p0 ≅ Mp0 via an automorphism of M.

2. After applying an automorphism of M, Mp and Mpσ are inter-definable.


3 Definitions and some background: Keisler measures, invariant measures, and asymptotic probability measures

There are several different sorts of probability measures that have some relevance to model theory. Here, we make use of three of these, and we introduce their definitions. The connections between them were investigated in some detail in [14], and we will make use of some of those results here.

3.1 Asymptotic probability measures

Definition 3.1.1. For each s ⊂fin M, K[s] is the set of structures A ∈ K with universe exactly s. An asymptotic probability measure for K (APM for K) is a family m = (ms)s⊂finM where:

• for each s ⊂fin M, ms : K[s] → [0, 1] is a probability mass function;

• for all s ⊆ t ⊂fin M and A ∈ K[s], ms(A) = Σ_{A ≤ B ∈ K[t]} mt(B);

• for all s ⊂fin M, σ ∈ Sym(M), and A ∈ K[s], mσs(σA) = ms(A).

For a sentence ϕ of L and s ⊂fin M, we define Pms[ϕ] = Σ{ms(A) : A ⊨ ϕ}. The almost-sure theory of m is Tm = {ϕ ∈ Sent(L) : lim|s|→∞ Pms[ϕ] = 1}.

An APM m = (ms)s for K is said to have independent sampling if for all s0 ⊂ s, C ∈ K[s0], pairwise distinct i0, ..., in−1 ∈ s \ s0, and At ∈ K[s0 ∪ {it}] extending C (t < n), we have

Pms[A0 ∧ · · · ∧ An−1 | C] = Π_{t<n} Pms[At | C]

where A0 ∧ · · · ∧ An−1 stands for the event {B ∈ K[s] : ∧t<n At ≤ B}, and so forth.

Theorem 3.1.2 (Hill-Kruckman [14]). Suppose that there is an APM m for K with independent sampling such that ms(A) > 0 for all s ⊂fin M and A ∈ K[s]. Then Tm = TK, and TK is MS-measurable and super-simple of SU-rank 1.

Theorem 3.1.3 (Kruckman [20]). Suppose K has n-DAP for all n. Then there is an APM m for K with independent sampling such that Tm = TK.

3.2 Probabilistic MS-measures

Here, we begin by introducing the original formulation of MS-measurability, which is appropriate for arbitrary infinite structures.

Definition 3.2.1. Let N be an infinite structure. Consider a pair of functions δ : Def(N) → N and µ : Def(N) → [0,∞); we say that N is MS-measurable via (δ, µ) if the following requirements are met:

• If X is finite, then δ(X) = 0 and µ(X) = |X|.

• If X is infinite, then δ(X) > 0 and µ(X) > 0.


• For each formula ϕ(x, y) in the language of N, the set

Hϕ = {(δ(ϕ(N, b)), µ(ϕ(N, b))) : b ∈ N|y|}

is finite, and for each pair (d0, m0) ∈ Hϕ, the set

{b ∈ N|y| : (δ(ϕ(N, b)), µ(ϕ(N, b))) = (d0, m0)}

is 0-definable.

• (Fubini) Let X, Y be definable sets, and let f : X → Y be a surjective definable function. Suppose that for some (d0, m0) ∈ N × [0,∞), δ(f−1(b)) = d0 and µ(f−1(b)) = m0 for all b ∈ Y. Then

δ(X) = d0 + δ(Y),
µ(X) = m0 · µ(Y).

When N is MS-measurable via (δ, µ), the function h : Def(N) → N × [0,∞) : X ↦ (δ(X), µ(X)) is called the measuring function, while δ alone is called a dimension function.

Theorem 3.2.2 (Corollary 3.6 of [8]). Suppose N is MS-measurable via (δ, µ). Then for every X ∈ Def(N), D(X) ≤ δ(X). It follows that Th(N) is super-simple of finite SU-rank.

In our setting, the definable functions are severely restricted by the algebraic-triviality assumption, and the same assumption provides an especially natural candidate for the dimension function δ – the “algebraic” dimension d of Definition 3.2.3. Subsequently, we may restrict our attention just to functions m (MS-measures) that form measuring functions when combined with this d, and we can exploit the dearth of definable functions to formulate a version of MS-measurability that is particularly easy to verify, as in Definition 3.2.4.

Definition 3.2.3. For C ⊂fin M and a ∈ M^n (0 < n < ω), we define

I(a/C) = {i < n : ai ∉ C},

E(a/C) = {(i, j) ∈ I(a/C)^2 : ai = aj}, an equivalence relation,

d(a/C) = |I(a/C)/E(a/C)|.

We observe that

d(ab/C) = d(a/C) + d(b/Ca)

whenever C ⊂fin M and a, b ∈ M<ω. For a definable set X ⊆ M^n, we define

d(X) = max {d(a/C) : a ∈ X, X is over C}.
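The dimension d(a/C) is purely combinatorial: it counts the distinct entries of a that lie outside C. A minimal sketch in Python (the function and variable names are ours, chosen only for illustration) also checks the additivity identity above on a small instance:

```python
def d(a, C):
    """Algebraic dimension of the tuple a over the finite set C:
    the number of E(a/C)-classes, i.e. distinct entries of a outside C."""
    I = [i for i in range(len(a)) if a[i] not in C]   # I(a/C)
    return len({a[i] for i in I})                    # |I(a/C)/E(a/C)|

# Additivity: d(ab/C) = d(a/C) + d(b/Ca)
C = {"c"}
a = ("c", "x", "x", "y")
b = ("y", "z")
assert d(a, C) == 2                                # classes {x}, {y}
assert d(a + b, C) == d(a, C) + d(b, C | set(a))   # 3 == 2 + 1
```

The second assertion is exactly the displayed identity, with Ca realized as C ∪ {entries of a}.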

Definition 3.2.4. We write S[M] for the set of all types tp(a/C), where C ⊂fin M and a ∈ M<ω is a non-empty tuple. An MS-measure on M is a map m : S[M] → [0,∞) satisfying the following:

MS-1. m is Aut(M)-invariant.

MS-2. If a ⊆ C, then m(a/C) = 1.


MS-3. If a ⊈ C, then m(a/C) > 0.

MS-4. For C0 ⊆ C ⊂fin M and a ∈ M<ω,

m(a/C0) = ∑i<k m(ai/C)

where a0, ..., ak−1 is any set of ≡C-representatives of {a′ : a′ ≡C0 a ∧ d(a′/C) = d(a/C0)}.

MS-5. Fubini:

(a) Let C ⊂fin M, 0 < n < ω, and a ∈ M^n, and let σ ∈ Sym(n). Then m(a/C) = m(aσ/C), where aσ = (aσ(0), ..., aσ(n−1)).

(b) Let C ⊂fin M and a, b ∈ M<ω; then m(ab/C) = m(a/C) · m(b/Ca).

An MS-measure m : S[M] → [0,∞) is called probabilistic if the following holds:

Let C ⊂fin M and 0 < n < ω, and let π(x) be a quantifier-free-complete n-type over C in the language of equality such that π ⊢ “x ⊈ C”. Then

∑p∈Sπ(C) m(p) = 1

where Sπ(C) = {p ∈ Sn(C) : π ⊆ p}.

We write MeasMS(M) for the set of all probabilistic MS-measures on M.

We have suggested strongly that the presence of MS-measures amounts to MS-measurability, so of course, we need to verify this claim. We remark that there is no need to restrict to probabilistic MS-measures at this point in the discussion, but it is not altogether clear that there are any non-probabilistic MS-measures.

Proposition 3.2.5. If m : S[M] → [0,∞) is an MS-measure on M, then M is MS-measurable via (d, m∗), where for a definable set X, we define SX(C) = {tp(a/C) : a ∈ X, X is over C} and

m∗(X) = sup { ∑p∈SX(C) m(p) : X is over C }.

Proposition 3.2.6 (Hill-Kruckman [14]). APMs with independent sampling and probabilistic MS-measures correspond in the following way.

1. Let a be a positive APM for K with independent sampling. Define ma : S∗[M] → [0, 1] by setting

ma(a/C) = a(M↾Ca) / a(M↾C)

for all C ⊂fin M and non-repeating a ∈ M<ω disjoint from C. Then ma extends uniquely to a probabilistic MS-measure.

2. Let m : S[M] → [0, 1] be a probabilistic MS-measure. Define am by setting

am(M↾A) = m(a0) · m(a1/a0) · · · m(an/a<n)

for A = {a0, ..., an} ⊂fin M, and closing to enforce invariance. Then this definition is well-defined on finite structures (i.e. the ordering of a0, ..., an does not matter), and am is a positive APM with independent sampling.
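The order-independence claimed in part 2 can be checked numerically in the simplest case. As an illustration only (the choice of class, the edge-probability measure, and all helper names below are ours): take K = finite graphs and let m be the MS-measure coming from independent edge-sampling with probability p, so that m(a/C) for a single new vertex a is a product of p and (1 − p) over the pairs {a, c}, c ∈ C. The telescoping product then gives p^{e(A)}(1 − p)^{C(n,2)−e(A)} in every ordering:

```python
import itertools

p = 0.3
edges = {frozenset(e) for e in [("u", "v"), ("v", "w")]}  # a path u-v-w
verts = ["u", "v", "w"]

def m_step(a, C):
    """m(a/C) for one new vertex a over C, under independent
    edge-sampling with probability p (our illustrative measure)."""
    out = 1.0
    for c in C:
        out *= p if frozenset((a, c)) in edges else (1 - p)
    return out

def a_m(order):
    """a^m(M|A) = m(a0) m(a1/a0) ... m(an/a<n) along the given ordering."""
    prod, C = 1.0, []
    for v in order:
        prod *= m_step(v, C)
        C.append(v)
    return prod

vals = {round(a_m(list(o)), 12) for o in itertools.permutations(verts)}
assert len(vals) == 1                              # ordering does not matter
assert abs(a_m(verts) - p**2 * (1 - p)) < 1e-12    # p^{e(A)}(1-p)^{C(3,2)-e(A)}
```

Of course, this only illustrates the easiest instance; the proposition asserts well-definedness for every probabilistic MS-measure.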


3.3 Invariant measures

One of the results that animates our proof of Theorem 4.2.1 is the following. We apply Theorem 3.3.1 only for the special case of ℵ0-categorical theories, so we will not explicate all of its moving parts in general.

Theorem 3.3.1 ([2]). For a countable relational language L, let StrL be the space of all L-structures N with universe ω. For N0 ∈ StrL, the following are equivalent:

1. N0 has trivial group-theoretic algebraic closure.

2. There is a Sym(ω)-invariant Borel probability measure µ on StrL concentrated on the isomorphism type of N0 (i.e. µ(N0/≅) = 1).

We now express the technology of Theorem 3.3.1 for the special case that interests us. (In particular, since we are working with ℵ0-categorical theories, logical and group-theoretic algebraic closure operators are identical, and there is no need to concern ourselves with Lω1,ω.)

Definition 3.3.2. X is the space of all L-structures M′ with universe M such that age(M′) ⊆ K, and K[M] is the set of all A ∈ K whose universes are subsets of M. As usual, if A ∈ K[M], then [A] = {M′ ∈ X : A < M′}, and X is the Stone space with {[A] : A ∈ K[M]} as a sub-base of clopen sets. (B is the boolean algebra of subsets of X so that X = S(B) up to homeomorphism.) Obviously, Sym(M) acts on X by homeomorphisms and compatibly on B by automorphisms.

An invariant measure on X is a Borel probability measure µ : Bor(X) → [0, 1] such that µ(σX) = µ(X) whenever X ∈ Bor(X) and σ ∈ Sym(M). For M′ ∈ X, we identify M′/≅, the isomorphism type of M′, with its Sym(M)-orbit, and we say that µ is concentrated on M′ if µ(M′/≅) = 1.

Remark 3.3.3. By the Hahn-Kolmogorov Theorem again, invariant measures and APMs are essentially the same thing. Clearly, if µ : Bor(X) → [0, 1] is an invariant measure, then m = (ms)s is an APM for K, where ms(A) = µ([A]) for all s ⊂fin M and A ∈ Ks. Conversely, suppose m is an APM for K. Then we define a “pre-measure” µ0 : B → [0, 1] by setting µ0([A]) = ms(A) whenever s ⊂fin M and A ∈ Ks, and extending to enforce finite-additivity in the natural way.4 Then we recover a σ-additive measure µ : Bor(X) → [0, 1] extending µ0.

In the proof of Theorem 4.2.1, we will need to address invariant measures on some other spaces besides X. We introduce those spaces (and the meaning of “invariance” for each) in the next two definitions.

Definition 3.3.4. Fix p(x) ∈ Sk(TK) such that p(x) ⊢ ∧i<j xi ≠ xj. Xp is the set of all Lp-structures N with universe M such that Age(N) ⊆ Kp. Kp[M] is the set of all A ∈ Kp whose universes are subsets of M. As usual, if A ∈ Kp[M], then [A] = {N ∈ Xp : A < N}, and Xp is the Stone space with {[A] : A ∈ Kp[M]} as a sub-base of clopen sets. Bp is the boolean algebra associated with Xp – i.e. the boolean algebra generated by {[A] : A ∈ Kp[M]}. Invariant measures on Xp are defined as expected.

We note that for C ∈ K[M], the set {N ∈ Xp : C ≤ N↾L} is also a clopen set; we denote this set by [C] and just hope that this will not cause any confusion.

4 “µ0 is a pre-measure” means that if X0, ..., Xn, ... are pairwise disjoint sets in B such that ⋃n Xn is also in B, then µ0(⋃n Xn) = ∑n µ0(Xn).


Not all invariant measures are created equal. In particular, the ergodic invariant measures have a particularly close relationship with the asymptotics of finite structures, and we will make essential use of this fact later on. For now, we just introduce the definition of “ergodic” in this context and some of its equivalents. The theorem expressing the equivalence is given here, but with the reader's indulgence, we defer discussion of convergent sequences of finite structures to Subsection 4.3, when we actually use this formulation.

We remark that “dissociated” invariant measures constitute a weaker variant of APMs with independent sampling. In general, this appears to be a strict weakening, but we will see (Theorem ??) that given 3-DAP, all dissociated invariant measures concentrated on TK have independent sampling.

Definition 3.3.5. Let µ : Bor(X)→ [0, 1] be an invariant measure.

• We say that µ is ergodic if for every Borel set X ⊆ X such that µ(X △ gX) = 0 for all g ∈ Sym(M), either µ(X) = 0 or µ(X) = 1.

• We say that µ is dissociated if for any finite structures A, B ∈ K[M] with A ∩ B = ∅ as sets, µ([A] ∩ [B]) = µ([A]) · µ([B]).

Theorem 3.3.6 (see [18], [1] and/or [22]). Let µ : Bor(X) → [0, 1] be an invariant measure. The following are equivalent:

1. µ is ergodic.

2. µ is dissociated.

3. µ is an extreme point of C, where C is the compact convex set of all invariant measures living inside the Banach space B of finite signed measures Bor(X) → R under the total variation norm.

4. µ is induced by a convergent sequence of finite structures.

3.4 Measuring systems

Definition 3.4.1. Consider a family µ = (µp : p ∈ S∗(TK)) as follows:

MSys-1. For each p, µp : Bor(Xp)→ [0, 1] is an invariant measure concentrated on TKp .

MSys-2. Let p(x0, ..., xk−1) ∈ S∗(TK), and let σ ∈ Sym(k); also, let pσ be the type such that pσ(xσ(0), ..., xσ(k−1)) = p(x). Let C ⊂fin M, q(x) ∈ S∗k(C) an extension of p, and qσ(x) such that qσ(xσ(0), ..., xσ(k−1)) = q(x). Then

µp([C+q]) = µpσ([C+qσ]).

MSys-3. For all p, all A ∈ Kp with A− = A↾L, and all A− ≤ B ≤ C,

µp([A] | [B]) = µp([A] | [C]).

MSys-4. Let C ⊂fin M, and let a, b ∈ M<ω be non-repeating and both disjoint from C and from each other. Let

p = tpM(a, b), pa = tpM(a), pb = tpM(b),

q(x, y) = tpM(a, b/C), qb(y) = tpM(b/C), qa|b(x) = tpM(a/bC).

Then, identifying C and Cb with M↾C and M↾Cb, respectively,

µp([C+q] | [C]) = µpb([C+qb] | [C]) · µpa([Cb+qa|b] | [Cb]).

Then µ is called a measuring system for K.

Proposition 3.4.2. Suppose µ is a measuring system for K. Define mµ : S∗[M] → [0, 1] by setting

mµ(a/C) = µp([C+q] | [C])

whenever C ⊂fin M, a ∈ M<ω is non-repeating and disjoint from C, p(x) = tpM(a), and q(x) = tpM(a/C). Then mµ extends uniquely to a probabilistic MS-measure.

Proof. We must first extend mµ to a map S[M] → [0, 1]. If a ⊆ C, then we set mµ(a/C) = 1 by fiat. If a ⊈ C, then we choose a complete set J = {j0 < · · · < jk−1} ⊆ I(a/C) of E(a/C)-representatives, set a′ = (aj0, ..., ajk−1), and define mµ(a/C) = mµ(a′/C).

Then MS-1, MS-2, and MS-3 for mµ are clear. Further, MS-4 follows immediately from the fact that each µp is a probability measure. MS-5, part (a), follows from MSys-2, and MS-5, part (b), follows from MSys-3 and MSys-4. The fact that mµ is probabilistic also follows from the fact that each µp is a probability measure.


4 Construction of measuring systems

4.1 An example

Let G denote the combinatorial class of all finite simple graphs in the language with one binary relation symbol R; thus a graph is just a model of the sentence

∀x∀y (R(x, y) → (R(y, x) ∧ x ≠ y)).

If A ∈ G, then e(A) = |RA|/2 is the number of undirected edges in A. In this context, both the ergodic invariant measures and the independent APMs are well understood, as we'll see below.

We fix a bit of notation. We write E for the set of extremal/ergodic invariant measures on X, and we write E0 for the set of invariant measures on X that correspond to APMs with independent sampling. Finally, we write ∆ for the set of Borel probability measures on [0, 1]. We note that E0 ⊆ E because every APM with independent sampling obviously induces a dissociated invariant measure, but there are additional maps between the sets [0, 1], ∆, E0 and E:

• [0, 1] ≅ E0 : p ↦ βp, where for p ∈ [0, 1], we define βp : Bor(X) → [0, 1] on the clopen base by setting

βp([A]) = p^{e(A)} (1 − p)^{\binom{n}{2} − e(A)}

with n = |A|.

• ∆ ≅ E : ν ↦ µν, where µν : Bor(X) → [0, 1] is defined by µν(W) = ∫[0,1] βp(W) dν(p).

• ∆ → [0, 1] : ν ↦ qν = ∫[0,1] p dν(p).

The first two of these, β• and µ•, are classifying maps, which allow us to think about E0 and E in more concrete terms than their natural setting allows. (The fact that β• is a classifying map is due to Albert [3] via Hill-Kruckman [14], and the fact that µ• is classifying seems to be due to Petrov-Vershik [26] and examination of Aldous-Hoover-Kallenberg presentations [4, 16, 17] of ergodic invariant measures.) The value of the last map q• is that it allows us to define a mapping from E to E0, as expressed in the following diagram.

∆ ──µ•──→ E          µν
│         ┆           ┆
q•        ┆           ↓
↓         ↓
[0,1] ──β•──→ E0     βqν

Thus, while not every ergodic invariant measure on XG corresponds to an APM with independent sampling, it is still true that every ergodic invariant measure induces an APM with independent sampling. (It is also not very hard to verify that if µν([A]) > 0 for all A ∈ G[M], then βqν([A]) > 0 for all A ∈ G[M].)

In this example, however, we were able to exploit the known classifying maps in our construction of the desired mapping E → E0. In our proof of Theorem 4.2.2, we use measuring systems and convergent sequences of finite structures to circumvent the need for these classifications (which we don't have anyway), but the goal, and the accomplishment, is still to define a map E → E0 from the ergodic measures to those corresponding to APMs with independent sampling.
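A quick numerical sanity check on the measures βp is possible (a sketch; the enumeration and helper names are ours, and we identify a graph on n labeled vertices with its edge set): for fixed n, the values βp([A]) over all graphs A on a fixed n-element set sum to 1, which is exactly the coherence needed for βp to be a probability measure on the clopen base.

```python
import itertools

def beta_p(p, edge_set, n):
    """beta_p([A]) = p^{e(A)} (1-p)^{C(n,2)-e(A)} for a graph A on n labeled vertices."""
    e = len(edge_set)
    pairs = n * (n - 1) // 2
    return p**e * (1 - p)**(pairs - e)

n, p = 4, 0.3
pairs = list(itertools.combinations(range(n), 2))
total = 0.0
for k in range(len(pairs) + 1):
    for E in itertools.combinations(pairs, k):   # all graphs on {0,...,n-1}
        total += beta_p(p, E, n)
assert abs(total - 1.0) < 1e-12   # the events [A] on a fixed vertex set partition X
```

The same computation with n replaced by n + 1 illustrates the APM coherence condition: the mass of [A] is redistributed exactly over its one-vertex extensions.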


4.2 Proof of the main theorem (4.2.1)

In this and the following two subsections, we complete the proof of the main theorem of this paper, which we now restate for the reader's convenience:

Theorem 4.2.1. Let K be a combinatorial class. The following are equivalent:

1. TK is super-simple of SU -rank 1.

2. There is an asymptotic probability measure a for K with independent sampling whose almost-sure theory T a is equal to TK.

As shown in Proposition 3.4.2, the existence of a measuring system µ for K would yield a probabilistic MS-measure, which in turn would yield 1⇒2 of Theorem 4.2.1. Thus, we need to devise a mechanism to produce such measuring systems, and the next theorem does just this.

Theorem 4.2.2. Assume K has 3-DAP. Suppose that µ : Bor(X) → [0, 1] is some invariant measure concentrated on TK. If µ is ergodic, then it induces a measuring system for K.

We now present the proof of Theorem 4.2.1 using Theorem 4.2.2. The proof of the latter requires a bit more understanding of ergodic invariant measures, which occupies us in the following two subsections.

Proof of Theorem 4.2.1 (using Theorem 4.2.2). 2⇒1 is precisely Theorem 1.1.1, so we just need to prove 1⇒2. So, assume TK is super-simple of SU-rank 1. To apply Theorem 4.2.2, we really just need to verify that there are ergodic invariant measures concentrated on TK.

Claim. There is an ergodic invariant measure µ : Bor(X) → [0, 1] concentrated on TK (i.e. on M/≅).

Proof of claim. We use the notations B and C of Theorem 3.3.6. Further, let C1 denote the set of all invariant measures that are concentrated on M/≅. Once again, C1 is a compact convex subset of B, so it is equal to the closure of the set of convex combinations of its own extreme points. Moreover, C1 is non-empty because TK is algebraically trivial (using Theorem 3.3.1, which comes from [2]). Let µ be an extreme point of C1; we claim that µ is also an extreme point of C, hence ergodic by Theorem 3.3.6.

Let Z be the complement of M/≅ in X. Towards a contradiction, suppose there are invariant measures ν1, ν2 : Bor(X) → [0, 1] and a number 0 < α < 1 such that µ = αν1 + (1 − α)ν2. Since µ(Z) = 0, α > 0, and 1 − α > 0, we find that ν1(Z) = ν2(Z) = 0, so ν1(M/≅) = ν2(M/≅) = 1, and so ν1, ν2 ∈ C1. This shows that µ is not an extreme point of C1 – a contradiction.

Now, let µ : Bor(X) → [0, 1] be an ergodic invariant measure concentrated on TK, and let q ∈ S∗1(M) be generic. By Theorem 4.2.2, we recover a measuring system µ = (µp : p ∈ S∗(TK)) for K. By Proposition 3.4.2, we then recover from µ a probabilistic MS-measure m = mµ. Finally, by Proposition 3.2.6, we recover a = am, a positive APM with independent sampling, and by Theorem 3.1.2, T am = TK.


4.3 Ergodicity and convergent sequences of finite structures

In Subsection 3.3, we presented Theorem 3.3.6 characterizing ergodic invariant measures. We deferred discussion of convergent sequences of finite structures, but we remedy this omission now. As we shall see, the essential fact about ergodic invariant measures is that they are recoverable from a process of sampling from finite structures.

Definition 4.3.1 (Convergent sequences of structures). For finite sets s, s′, we define Inj(s, s′) to be the set of injections s → s′ and inj(s, s′) = |Inj(s, s′)|. If A ∈ Ks and B ∈ Ks′, we define t(A, B) := emb(A, B)/inj(s, s′), where emb(A, B) = |Emb(A, B)| and Emb(A, B) is the set of embeddings A → B.

A sequence of structures σ = (Bn)n is convergent if for any s ⊂fin M and every A ∈ Ks, the sequence (t(A, Bn))n converges. One easily verifies that if σ is convergent, then it induces an APM aσ by setting aσs(A) = limn→∞ t(A, Bn) for every s ⊂fin M and A ∈ Ks. The APM aσ then induces an invariant measure νσ : Bor(X) → [0, 1]. It is also easy to verify that νσ is dissociated, hence ergodic by Theorem 3.3.6.
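For a concrete instance of the densities t(A, Bn) (our example, not one used in the paper): take A to be a single undirected edge on a 2-element set and Bn the n-cycle. Then emb(A, Bn) = 2n (each of the n cycle edges, in two orientations) while inj(2, n) = n(n − 1), so t(A, Bn) = 2/(n − 1), which converges to 0.

```python
def t_edge_in_cycle(n):
    """t(A, C_n) where A is one undirected edge: embeddings / injections."""
    emb = 2 * n          # each of the n cycle edges, in 2 orientations
    inj = n * (n - 1)    # injections of a 2-element set into n vertices
    return emb / inj

vals = [t_edge_in_cycle(n) for n in (10, 100, 1000)]
assert vals == [2/9, 2/99, 2/999]        # t(A, C_n) = 2/(n-1)
assert vals[0] > vals[1] > vals[2] > 0   # decreasing toward the limit 0
```

Full convergence of a sequence requires this behavior for every finite A simultaneously; the edge density alone is only the first coordinate.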

Suppose s is a finite set, A is a finite structure, and u : s → A is an injection; we then define u−1A to be the unique structure with universe s such that u is an embedding u−1A → A. For a quantifier-free sentence ϕ with parameters from s, we sometimes write A ⊨u ϕ to mean that u−1A ⊨ ϕ.

At last, we are in position to prove Theorem 4.2.2, completing the proof of Theorem 4.2.1.

4.4 Proof of Theorem 4.2.2

Throughout this subsection, we assume that TK is super-simple of SU-rank 1. This fact is used to ensure that for each p ∈ S∗(TK), Kp is a combinatorial class, so it is actually possible to concentrate probability measures on the models of TKp.

Let µ : Bor(X) → [0, 1] be an invariant measure concentrated on TK. We assume that µ is ergodic, so it is induced by a convergent sequence of structures σ = (Bn)n – i.e. µ = νσ. We also set a = aσ.

For p(x0, ..., xk−1) ∈ S∗(TK), we fix distinct new elements α0, ..., αk−1 outside of M, and for any s ⊂fin M, we define sp = s ∪ {α0, ..., αk−1}. If A ∈ Ks and p(x0, ..., xk−1) ∈ S∗(A) is an extension of p, then A∗p is the unique structure with universe sp such that (A∗p)↾s = A and A∗p ⊨ p(α0, ..., αk−1).

Definition 4.4.1. Let p(x0, ..., xk−1) ∈ S∗(TK). Let s0 ⊆ s ⊆ s′ ⊂fin M, A ∈ Ks, and A0 = A↾s0, and let p ∈ S∗(A0) be an extension of p. Let C ∈ K. We then define

ξa(p, A; C; s′, u) = |{e ∈ (s′ \ s)^k : C ⊨u p(e)}| / |s′|^k

for each u ∈ Inj(s′, C) such that u↾s ∈ Emb(A, C).

Fact 4.4.2 ((Weak) Law of Large Numbers). For Bernoulli sequences, the classical statement is the following: Let Z0, Z1, ..., Zk, ... be independent and identically distributed (i.i.d.) Bernoulli random variables (so {0, 1}-valued) with P(Z0 = 1) = p. Then for every ε > 0,

limk→∞ P( |(1/k) ∑i<k Zi − p| ≥ ε ) = 0.


It is easy to generalize the classical statement slightly, as follows: Let k0 < k1 < · · · < kn < · · · be a sequence of positive integers, and for each n, suppose Zn:0, ..., Zn:kn−1 is an i.i.d. sequence of Bernoulli random variables. Suppose limn→∞ P(Zn:0 = 1) = p. Then for every ε > 0,

limn→∞ P( |(1/kn) ∑i<kn Zn:i − p| ≥ ε ) = 0.
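The generalized statement can be illustrated by simulation (a sketch only; the row sizes, the drifting success probabilities, and the helper name are our choices): triangular rows of i.i.d. Bernoulli variables whose success probabilities tend to p, with empirical deviation probabilities tending to 0 as the rows grow.

```python
import random

random.seed(0)

def deviation_prob(k, p_n, p, eps, trials=200):
    """Estimate P(|mean of k i.i.d. Bernoulli(p_n) draws - p| >= eps)."""
    bad = 0
    for _ in range(trials):
        mean = sum(random.random() < p_n for _ in range(k)) / k
        if abs(mean - p) >= eps:
            bad += 1
    return bad / trials

p = 0.3
# Row n has k_n = 50 * 2^n variables with success probability p_n -> p.
probs = [deviation_prob(50 * 2**n, p + 0.1 / (n + 1), p, eps=0.15)
         for n in range(4)]
assert probs[-1] <= 0.05        # large rows: deviations of size 0.15 vanish
assert probs[0] >= probs[-1]    # small rows deviate at least as often
```

Note that both the row length kn and the row distribution change with n, exactly as in the statement above.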

Lemma 4.4.3. There is a function Fa : ω × ω × (0, 1) → ω such that for any k, m < ω and δ > 0, for any s0 ⊆ s ⊆ s′ ⊂fin M, A ∈ Ks with A0 = A↾s0, and p(x) ∈ S∗(A0) extending p(x0, ..., xk−1) ∈ S∗(TK), if |s| ≤ m and |s′| ≥ Fa(k, m, δ), then

limN→∞ P( |ξa(p, A; BN; s′, u) − a_{s_0^p}(A0 ∧ p(α, a)) / a_{s_0}(A0)| ≥ δ | u↾s ∈ Emb(A, BN) ) = 0.

Proof. It is enough to prove that for fixed δ > 0, s0 ⊆ s ⊂fin M, A ∈ Ks with A0 = A↾s0, and p(x) ∈ S∗(A0) extending p(x0, ..., xk−1) ∈ S∗(TK), the following holds: For every ε > 0, there is some nε < ω such that if |s′| ≥ nε, then

limN→∞ P( |ξa(p, A; BN; s′, u) − a_{s_0^p}(A0 ∧ p(α, a)) / a_{s_0}(A0)| ≥ δ | u↾s ∈ Emb(A, BN) ) ≤ ε.

Let e0, e1, ..., en, ... be an enumeration of the non-repeating k-tuples from M \ s, and for each n, let wn = s ∪ ⋃i<n ei. For n, N < ω, we define random variables

XN:n:0, ..., XN:n:n−1 : Inj(wn, BN) → {0, 1}

YN:n:0, ..., YN:n:n−1 : (BN)^{wn} → {0, 1}

(where (BN)^{wn} is the set of all functions wn → BN) as follows:

XN:n:i(u) = 1 if BN ⊨u p(ei), and 0 otherwise;

YN:n:i(f) = 1 if BN ⊨f p(ei), and 0 otherwise.

Abusing notation, we write A to stand for the event “u↾s ∈ Emb(A, BN)” and A0 to stand for the event “u↾s0 ∈ Emb(A0, BN)” in all circumstances. Clearly,

a_{s_0^p}(A0 ∧ p(α, a)) / a_{s_0}(A0) = limn→∞ ( limN→∞ P(XN:n:0 = 1 | A0) ),

so to prove the lemma, it suffices to show that for every ε > 0, there is some nε such that if n ≥ nε, then

limN→∞ P( |(1/n) ∑i<n XN:n:i − P(XN:n:0 = 1 | A0)| ≥ δ | A ) ≤ ε.

The proof of this fact amounts to a few straightforward observations and an application of the Law of Large Numbers for a sequence of Bernoulli random variables.


Observation 1: For each n, we have

limN→∞ |{u ∈ Inj(wn, BN) : u↾s ∈ Emb(A, BN)}| / |{f : wn → BN : f↾s ∈ Emb(A, BN)}| = 1

and similarly with A0 in place of A.

Observation 2: For every ε > 0, there is an nε such that if n ≥ nε, then

limN→∞ P( |(1/n) ∑i<n XN:n:i − (1/n) ∑i<n YN:n:i| ≥ ε | A ) = 0.

Observation 3: For all n < ω, limN→∞ |P(XN:n:0 = 1 | A0) − P(YN:n:0 = 1 | A0)| = 0.

Observation 4: For all n < ω, limN→∞ |P(YN:n:0 = 1 | A) − P(YN:n:0 = 1 | A0)| = 0.

Observation 5: For all N, n, the conditioned random variables YN:n:0|A, ..., YN:n:n−1|A are i.i.d. Bernoulli random variables.

Observation 1 is enforced by the fact that a uniformly random function wn → BN is injective with probability approaching 1 as N (hence |BN|) grows while n is held constant, and Observations 2 and 3 follow immediately from Observation 1. Observations 4 and 5 can be verified similarly to Observation 1.

Now, by Observation 5, we may apply the Law of Large Numbers, finding that for every ε > 0, there is an nε such that

n ≥ nε ⇒ limN→∞ P( |(1/n) ∑i<n YN:n:i − P(YN:n:0 = 1 | A)| ≥ ε | A ) = 0.

Given ε > 0, we apply Observations 2 through 4 and the triangle inequality to find n′ε such that if n ≥ n′ε, then

limN→∞ P( |(1/n) ∑i<n XN:n:i − P(XN:n:0 = 1 | A0)| ≥ δ | A ) ≤ ε.

This completes the proof of the lemma.

For a structure C ∈ K and s ⊂fin M, consider the following random process for generating structures A ∈ Kps (where p ∈ S∗(TK)):

Draw v ∈ Inj(sp, C) uniformly at random; then put u = v↾s, A− = u−1C, B = v−1C, and q(x) = qftpB(α/A−); finally, let A = A−+q.

Next, we define tp(−, C) : Kp[M] → [0, 1] by taking tp(A, C) to be the probability of generating A according to the above process.

Lemma 4.4.4. For every s ⊂fin M and each A ∈ Kps, the sequence (tp(A, BN))N converges.

Proof. Immediate when we apply Lemma 4.4.3 with s0 = s.


Now, for each p ∈ S∗(TK), we may define an APM ap for Kp, defining aps : Kps → [0, 1] (s ⊂fin M) by setting aps(A) = limN→∞ tp(A, BN) for each A ∈ Kps. We define µp : Bor(Xp) → [0, 1] to be the unique σ-additive measure induced by ap, viewed as a pre-measure. Thus, we have defined a family µ = (µp : p ∈ S∗(TK)), and it just remains to show that µ is a measuring system for K.

Lemma 4.4.5. For each p ∈ S∗(TK), µp is concentrated on TKp.

Proof. Assume p = p(x0, ..., xk−1). It is sufficient to show that for any extension axiom ψ of Kp, µp({N ∈ Xp : N ⊨ ψ}) = 1.

Let ϕp = ∀x(θp0(x) → ∃y θp(x, y)) be an extension axiom of Kp. So, there are non-repeating a ∈ M|x| and b ∈ M \ a such that θp0 isolates qftpMp(a) and θp isolates qftpMp(a, b) modulo the universal sub-theory of TKp. Let Ap = Mp↾a and Bp = Mp↾ab, and let θ0(x), θ(x, y) be quantifier-free formulas that isolate qftpM(a), qftpM(a, b) modulo the universal sub-theory of TK, respectively. Thus, ϕ = ∀x(θ0(x) → ∃y θ(x, y)) is an extension axiom of TK. Also, let q0(v), q(v, b) be the types over A = Ap↾L and B = Bp↾L, respectively, such that Ap = A+q0 and Bp = B+q. Since µ is concentrated on TK, we have µ({M′ ∈ X : M′ ⊨ ϕ}) = 1.

Let s ⊂fin M be the universe of A. Let e0, ..., ek−1 ∈ M \ s and s′ = s ∪ e. Then, since µ({M′ : M′ ⊨ ϕ}) = 1, there is a function f : (0, 1) → ω such that for any ε > 0 and any r ⊂fin M, if s′ ⊆ r and |r| ≥ f(ε), then

limN→∞ inj(r, BN)^{−1} · |{u ∈ Inj(r, BN) : BN ⊨u q0(e), ∧c∈r BN ⊭u q(e, c)}| ≤ ε.

Exchanging α for e, we see that for any ε > 0 and any r ⊂fin M, if s ⊆ r and |r| ≥ f(ε), then

limN→∞ inj(rp, BN)^{−1} · |{u ∈ Inj(rp, BN) : BN ⊨u p0(α), ∧c∈r BN ⊭u p(α, c)}| ≤ ε.

From this, it follows that µp({N ∈ Xp : N ⊨ ϕp}) = 1 – as required.

Proposition 4.4.6. µ is a measuring system for K.

Proof. There are four items to verify in the definition of a measuring system (Definition 3.4.1):

MSys-1: By Lemmas 4.4.4 and 4.4.5, for each p ∈ S∗(TK), µp is an invariant measure concentrated on TKp.

MSys-2: Let p(x0, ..., xk−1) ∈ S∗(TK), and let σ ∈ Sym(k); also, let pσ be the type such that pσ(xσ(0), ..., xσ(k−1)) = p(x). Let s ⊂fin M and C = M↾s, q(x) ∈ S∗k(C) an extension of p, and qσ(x) such that qσ(xσ(0), ..., xσ(k−1)) = q(x).

It is clear from the construction of the ap and apσ (whence of µp and µpσ) that

µp([C+q]) = aps(C+q) = apσs(C+qσ) = µpσ([C+qσ]).

MSys-3: Let p(x0, ..., xk−1) ∈ S∗(TK), A ∈ Kp with A− = A↾L, and A− ≤ C1 ≤ C2 ∈ K[M]. Let s0 ⊆ s1 ⊆ s2 be the universes of A, C1, C2, respectively. Also, let m = |s2| and p(x, a) ∈ Sk(A) such that A = A−+p. By Lemma 4.4.3, we have

limN→∞ P( |ξa(p, C1; BN; s′, u) − ξa(p, C2; BN; s′, u)| ≥ δ | u↾s2 ∈ Emb(C2, BN) ) = 0


whenever |s′| ≥ Fa(k, m, δ) (for any δ > 0). Then, by the definition of tp and ap, it follows that

µp([A] | [C1]) = aps1(A)/aps1(C1) = aps2(A)/aps2(C2) = µp([A] | [C2]).

MSys-4: Let s ⊂fin M, and let a, b ∈ M<ω be non-repeating and both disjoint from C = M↾s and from each other. Let

p = tpM(a, b), pa = tpM(a), pb = tpM(b),

q(x, y) = tpM(a, b/C), qb(y) = tpM(b/C), qa|b(x) = tpM(a/bC).

Also, let s′ = s ∪ b and Cb = M↾s′. Then, by conditioning in finite probability spaces, we have

aps(C+q) / ∑_{C′↾L=C} aps(C′) = ( apbs(C+qb) / ∑_{C′↾L=C} apbs(C′) ) · ( apas′(Cb+qa|b) / ∑_{D↾L=Cb} apas′(D) ).

It follows that

µp([C+q] | [C]) = µpb([C+qb] | [C]) · µpa([Cb+qa|b] | [Cb]).


5 Further questions and results on pseudo-finiteness

5.1 Approximation and the finite sub-model property

In [20], Kruckman uses a certain notion of filtration (or approximation) of a Fraïssé class by sub-classes to demonstrate that the theories T∗feq and T∗cpz are pseudo-finite (in fact, that they have the finite sub-model property). Those sub-classes, of course, must induce theories that have the finite sub-model property, and to ensure this, Kruckman requires that (up to an expansion-by-definitions) these sub-classes have n-DAP for all n. Because of this requirement, the framework of [20] excludes the theory TG3:4 of the generic tetrahedron-free 3-hypergraph (itself or as an approximation). Using Theorem 4.2.1, we now prove a generalization, Theorem 5.1.3, of the main general theorem of [20]. First, of course, we must formalize the notion of approximation in play.

Definition 5.1.1. LetM be the generic model of K. Let

M0 ≤M1 ≤ · · · ≤ Mr ≤ · · ·

be a chain of (non-elementary) substructures of M such that ⋃r Mr = M and for each r, Tr := Th(Mr) is an algebraically trivial ℵ0-categorical theory. For each r, let Kr be the isomorphism-closure of the set of finite induced substructures of Mr. In general, Kr need not be a Fraïssé class, but replacing Kr with the age K̃r of the morleyization of Mr, we find that K̃r is a combinatorial class. Under these circumstances, and provided that TK = ⋂r Tr, the family (Kr)r is called a filtration of K. By extension, the family (Mr)r is a filtration of M, and the family (Tr)r is a filtration of TK.

We will say that TK is a simply-approximable theory if there is a filtration (Kr)r of K such that for each r, TKr is super-simple of SU-rank 1.

Fact 5.1.2. If TK is an almost-sure theory, then it has the finite sub-model property.

Theorem 5.1.3. If TK is simply-approximable, then TK has the finite sub-model property.

Proof. Let (Kr)r be a filtration of K such that each TKr (r < ω) is super-simple of SU-rank 1. Let (Mr)r be the corresponding filtration of the generic model M. By Theorem 4.2.1 and Fact 5.1.2, for each r, TK̃r has the finite sub-model property.

Now, let A0 ⊂fin M and ϕ ∈ TK = ⋂r TKr be given. We choose r such that A0 ⊂ Mr. Since TK̃r has the finite sub-model property, we recover A ⊂fin Mr containing A0 such that M̃r↾A ⊨ ϕ, and since ϕ ∈ L, it follows that Mr↾A ⊨ ϕ.

Theorem 5.1.3 (and the difficulty of concocting pseudo-finite combinatorial theories with some version of that theorem) suggests a slightly outrageous conjecture, which would amount to a more or less complete characterization of combinatorial theories with the finite sub-model property.

Conjecture 5.1.4. If TK has the finite sub-model property, then TK is simply-approximable.

Given Conjecture 5.1.4 and the central place of simplicity in the pseudo-finiteness phenomenon that it suggests, we are tempted to approach another question posed by Cherlin:

Conjecture 5.1.5. Let H be the combinatorial class of finite triangle-free graphs, so that TH is the theory of the Henson graph. It's very well-known that TH is not simple – we conjecture that TH is not even simply-approximable.


5.2 Uncountably many pseudo-finite countable ℵ0-categorical structures

In Problem A of [6], Cherlin asks whether, for ℵ0-categorical theories, the pseudo-finiteness phenomenon is very rare or very common. In the following theorem, we resolve this question.

Theorem 5.2.1. In a particular finite relational language L, there are 2^{ℵ0} pairwise inequivalent pseudo-finite ℵ0-categorical theories. More precisely, there are 2^{ℵ0} pairwise inequivalent almost-sure combinatorial theories.

It is convenient to narrow the class of theories in question. Hence, we restrict attention to classes of finite structures that are very much like regular hypergraphs.

Definition 5.2.2. Let L be a finite relational language. For each n-ary relation symbol R of L, let ϕR be the sentence

∀x( R(x) → ∧i<j xi ≠ xj ) ∧ ∀x( R(x) → ∧g∈Sym(n) R(xg(0), ..., xg(n−1)) ).

In the terminology of [25], an L-structure is a society just in case it is a model of {ϕR : R ∈ sig(L)}. We write S(L) for the class of all finite L-societies (which is clearly a combinatorial class); when L is clear from context, we will write S in place of S(L).

For k ≥ 2, a finite L-society A is called k-irreducible if |A| ≥ 2 and for any a0, ..., ak−1 ∈ A, there are R ∈ sig(L) and b ∈ RA such that ai ∈ b for each i < k. It's not difficult to see that if A is (k + 1)-irreducible, then it is k-irreducible as well.

For F ⊂ S, we write SF to denote the class of structures B ∈ S such that for no A ∈ F is there an injective homomorphism A → B. A base is a set F ⊆ S such that (i) every A ∈ F is 2-irreducible, and (ii) for all A, B ∈ F, if A ≲ B, then A ≅ B. (Here, A ≲ B means that there is an injective homomorphism A → B.)

Under somewhat different terminology and notation, Propositions 5.2.3 and 5.2.4 and Corollary 5.2.5 below are due to [11], where the existence of the Henson graph and similar structures was first demonstrated.

Proposition 5.2.3. Fix a finite relational language L, and let F ⊂ S. If every A ∈ F is 2-irreducible, then SF is a combinatorial class.

Proposition 5.2.4. Fix a finite relational language L, and suppose F1, F2 ⊂ S = S(L) are bases. The following are equivalent:

1. F1 ⊆ F2 up to isomorphism.

2. SF1 ⊇ SF2.

3. There is an embedding M2 →M1, where M1,M2 are the generic models of SF1 ,SF2, respec-tively.

Corollary 5.2.5. Fix a finite relational language L, and suppose F_1, F_2 ⊂ S = S(L) are bases. Then T_{S_{F_1}} = T_{S_{F_2}} if and only if F_1 = F_2 up to isomorphism.

The following theorem of [7] will allow us to generate a plethora of combinatorial classes with super-simple generic theories.


Theorem 5.2.6. Fix a finite relational language L, and let F ⊂ S. Assume every A ∈ F is 2-irreducible. Then T_{S_F} is super-simple of SU-rank 1 if and only if every A ∈ F is 3-irreducible.

Let L be the relational language whose signature consists of the binary relation symbols R, S_0, S_1 and the ternary relation symbols Q_0, Q_1, H. Let V = {w ∈ {0, 1}^{<ω} : |w| ≥ 1}. For w ∈ V with n = |w|, we define an L-society A_w as follows:

• As a set, A_w consists of 2n + 2 distinct elements a_0, ..., a_{n−1}, b_0, ..., b_{n−1}, and c, c′.

• Q_0^{A_w} = {{a_0, b_0, c}}, Q_1^{A_w} = {{a_{n−1}, b_{n−1}, c′}}, and H^{A_w} = \binom{A_w}{3}, the set of all 3-element subsets of A_w.

• R^{A_w} = {{a_i, a_{i+1}}, {b_i, b_{i+1}} : i < n − 1}, S_0^{A_w} = {{a_i, b_i} : w_i = 0}, and S_1^{A_w} = {{a_i, b_i} : w_i = 1}.
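The construction of A_w is concrete enough to sketch in code. The following Python fragment is our own illustration, not the paper's: the element labels, the encoding of w as a list of 0/1 integers, and the use of frozensets to represent symmetric irreflexive relations are all assumptions of the sketch.

```python
from itertools import combinations

def build_society(w):
    """Construct the society A_w of Section 5.2 from a 0,1-word w,
    given as a list of 0/1 integers with len(w) >= 1."""
    n = len(w)
    a = [f"a{i}" for i in range(n)]
    b = [f"b{i}" for i in range(n)]
    A = set(a) | set(b) | {"c", "c'"}                 # 2n + 2 elements
    Q0 = {frozenset({a[0], b[0], "c"})}
    Q1 = {frozenset({a[n - 1], b[n - 1], "c'"})}
    H = {frozenset(s) for s in combinations(A, 3)}    # all 3-element subsets
    R = {frozenset({a[i], a[i + 1]}) for i in range(n - 1)} \
        | {frozenset({b[i], b[i + 1]}) for i in range(n - 1)}
    S0 = {frozenset({a[i], b[i]}) for i in range(n) if w[i] == 0}
    S1 = {frozenset({a[i], b[i]}) for i in range(n) if w[i] == 1}
    return A, {"Q0": Q0, "Q1": Q1, "H": H, "R": R, "S0": S0, "S1": S1}

A, rels = build_society([0, 1, 1])
print(len(A))           # 8 elements for |w| = 3
print(len(rels["H"]))   # C(8, 3) = 56 triples: H covers every 3-element subset
```

Since H^{A_w} contains every 3-element subset, any three elements lie in a common tuple, which is exactly the 3-irreducibility noted in Observation 5.2.7.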

Observation 5.2.7. Because H^{A_w} contains every 3-element subset of A_w, the society A_w is 3-irreducible for every w ∈ V.

Lemma 5.2.8. For every subset W ⊆ V, F(W) = {A_w : w ∈ W} is a base.

Proof of Theorem 5.2.1. For each W ⊆ V, let T_W := T_{S_{F(W)}}. For distinct W_1, W_2 ⊆ V, T_{W_1} ≢ T_{W_2} by Lemma 5.2.8 and Corollary 5.2.5, so {T_W : W ⊆ V} has cardinality 2^ℵ0. By Theorem 5.2.6 and Observation 5.2.7, each T_W is super-simple of SU-rank 1, hence an almost-sure theory by Theorem 4.2.1. Thus, {T_W : W ⊆ V} is a set of continuum-many distinct pseudo-finite ℵ0-categorical theories.


References

[1] Nathanael Ackerman, Cameron Freer, Alex Kruckman, and Rehana Patel. Properly ergodic structures. Preprint: http://pages.iu.edu/∼akruckma/properlyergodic.pdf, 2017.

[2] Nathanael Ackerman, Cameron Freer, and Rehana Patel. Invariant measures concentrated on countable structures. Forum of Mathematics, Sigma, 4, DOI: https://doi.org/10.1017/fms.2016.15, 2016.

[3] Michael Albert. Measures on the random graph. Journal of the London Mathematical Society, 50(2):417–429, 1994.

[4] David J. Aldous. Representations for partially exchangeable arrays of random variables. Journal of Multivariate Analysis, 11(4):581–598, 1981.

[5] John T. Baldwin and Saharon Shelah. Randomness and semigenericity. Transactions of the American Mathematical Society, 349(4):1359–1376, 1997.

[6] Gregory Cherlin. Combinatorial problems connected with finite homogeneity. In Proceedings of the International Conference on Algebra Dedicated to the Memory of A. I. Malcev, volume 131 of Contemporary Mathematics, pages 3–30. American Mathematical Society, 1992.

[7] Gabriel Conant. A note on simple Fraïssé limits. Unpublished.

[8] Richard Elwes and Dugald Macpherson. A survey of asymptotic classes and measurable structures. In Model Theory with Applications to Algebra and Analysis, volume 2 of London Mathematical Society Lecture Note Series, pages 125–160. Cambridge University Press, 2008.

[9] Ronald Fagin. Probabilities on finite models. Journal of Symbolic Logic, 41(1):50–58, 1976.

[10] Y. V. Glebsky, D. I. Kogan, M. I. Liogonky, and V. A. Talanov. Range and degree of realizability of formulas in the restricted predicate calculus. Kibernetika, 2:17–28, 1969.

[11] C. Ward Henson. Countable homogeneous relational structures and ℵ0-categorical theories. Journal of Symbolic Logic, 37(3):494–500, 1972.

[12] Cameron Donnay Hill. A characterization of tameness of almost-sure theories of Fraïssé classes. Submitted (Fundamenta Mathematicae), 2017.

[13] Cameron Donnay Hill. On 0,1-laws and asymptotics of definable sets in geometric Fraïssé classes. Fundamenta Mathematicae (to appear), 2017.

[14] Cameron Donnay Hill and Alex Kruckman. Measures, measures, and measures. In preparation, 201X.

[15] Wilfrid Hodges. Model Theory, volume 42 of Encyclopedia of Mathematics and its Applications. Cambridge University Press, 1993.

[16] Douglas N. Hoover. Relations on probability spaces and arrays of random variables. Preprint, Institute for Advanced Study, Princeton, NJ, 1979.

[17] Olav Kallenberg. Symmetries on random arrays and set-indexed processes. Journal of Theoretical Probability, 5(4):727–765, 1992.

[18] Olav Kallenberg. Probabilistic Symmetries and Invariance Principles. Probability and its Applications. Springer-Verlag, 2005.

[19] P. G. Kolaitis, H. J. Prömel, and B. L. Rothschild. K_{l+1}-free graphs: Asymptotic structure and a 0,1-law. Transactions of the American Mathematical Society, 303(2):637–671, 1987.

[20] Alex Kruckman. Disjoint n-amalgamation and pseudofinite countably categorical theories. Notre Dame Journal of Formal Logic (to appear), arXiv:1510.03539v1, 2016.

[21] M. C. Laskowski. A simpler axiomatization of the Shelah-Spencer almost sure theories. Israel Journal of Mathematics, 161:157–186, 2007.

[22] László Lovász. Large Networks and Graph Limits, volume 60 of Colloquium Publications. American Mathematical Society, 2012.

[23] Dugald Macpherson and Charles Steinhorn. One-dimensional asymptotic classes of finite structures. Transactions of the American Mathematical Society, 360(1):411–448, 2008.

[24] David Marker. Model Theory: An Introduction, volume 217 of Graduate Texts in Mathematics. Springer, 2002.

[25] Jaroslav Nešetřil and Vojtěch Rödl. The partite construction and Ramsey set systems. Discrete Mathematics, 75(1–3):327–334, 1989.

[26] Fedor Petrov and Anatoly Vershik. Uncountable graphs and invariant measures on the set of universal countable graphs. Random Structures and Algorithms, 37(3):389–406, 2010.

[27] Saharon Shelah and Joel Spencer. Zero-one laws for sparse random graphs. Journal of the American Mathematical Society, 1(1):97–115, 1988.

[28] Frank Wagner. Simple Theories, volume 503. Kluwer Academic Publishers, 2000.