Hard Core via PCA: Entropy Bounds

We establish bounds for the entropy of the Hard Core Model/Independent Sets on a few 2-d lattices. Our PCA-based sequential fill-in method yields an increasing sequence of lower bounds for the topological entropy. Additionally the procedure gives some insight into the support of the measure of maximal entropy. The method also applies to other lattices and models with an appropriate sublattice splitting.


Introduction
The Hard Core Model/Golden Mean Subshift/Independent Sets is a highly useful model in various disciplines, as witnessed by its many appearances under distinct names in Statistical Mechanics/Symbolic Dynamics/Theoretical Computer Science respectively. Simply described, one distributes 0's and 1's on the vertices of a given graph and requires an exclusion rule to hold everywhere: no two 1's can be nearest graph neighbors ([1]).
On a k-partite graph it is natural to associate with it a probabilistic cellular automaton (PCA) which updates each of the k subgraphs from the others with a local rule as follows: in an all-0 neighborhood update the center vertex to 1 with probability u ∈ (0, 1), otherwise update to 0. Running a sequential update with the PCA (from any input) through the subgraphs yields eventually any Hard Core configuration with positive probability (and no others).
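The sequential update just described can be sketched in a few lines. The following is a minimal illustration (the torus set-up, function names and parameters are ours, not from the text) that performs one even sweep followed by one odd sweep on a finite piece of Z^2 and checks that the result is a valid Hard Core configuration.

```python
import random

def pca_sweep(config, n, u, parity, rng):
    """One PCA sweep over the sites of the given parity class on an n x n torus:
    if all four nearest neighbors carry 0, set the site to 1 with probability u,
    otherwise set it to 0."""
    new = dict(config)
    for x in range(n):
        for y in range(n):
            if (x + y) % 2 != parity:
                continue
            nbrs = [((x + 1) % n, y), ((x - 1) % n, y),
                    (x, (y + 1) % n), (x, (y - 1) % n)]
            if all(config[s] == 0 for s in nbrs):
                new[(x, y)] = 1 if rng.random() < u else 0
            else:
                new[(x, y)] = 0
    return new

def is_hard_core(config, n):
    """No two 1's are nearest neighbors (checking right and up covers every edge)."""
    return all(not (config[(x, y)] and config[((x + 1) % n, y)]) and
               not (config[(x, y)] and config[(x, (y + 1) % n)])
               for x in range(n) for y in range(n))

rng = random.Random(0)
n = 10                       # even side length keeps the torus bipartite
config = {(x, y): rng.randint(0, 1) for x in range(n) for y in range(n)}  # arbitrary input
config = pca_sweep(config, n, 0.5, 0, rng)   # update the even sublattice first
config = pca_sweep(config, n, 0.5, 1, rng)   # then the odd sublattice
assert is_hard_core(config, n)
```

After the even sweep some even 1's may still conflict with stale odd 1's from the input, but the subsequent odd sweep recomputes every odd site from its updated even neighbors, so a single full sweep already outputs a valid Hard Core configuration.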
The PCA behavior is of interest for different u-values. If u is near 1, we are in the high density regime and the characterization of the allowed Hard Core configurations is essentially a packing problem (studied e.g. in an earlier paper [6]). In the dilute case, i.e. u near 0, the configurations are essentially Bernoulli(ũ)-fields (the model is sometimes called a Hard Square Gas), 0 < ũ < u.
Here we concentrate on the intermediate entropic regime. The outstanding open question near u = 1/2 is the exponential size of the configuration space. In almost all two or higher dimensional set-ups the exact answer is unknown. We try to alleviate the situation a little bit by establishing a procedure to estimate the entropy from below and to characterize the typical configurations. For simplicity we restrict here the graphs to be 2-d lattices. This facilitates comparison to entropy approximations elsewhere via different methods (e.g. [2], [4], [7], [8], [9]). Our ideas also generalize to more complicated and higher dimensional set-ups. For a general percolation approach to Hard Core see e.g. [3].

Set-up
Let L be a 2-dimensional lattice. Subsequently we will consider mostly one of the regular lattices (square (Z^2), honeycomb (H) or triangular (T)). In a few cases we illustrate the principles developed on more exotic stages like the square lattice with the Moore neighborhood (Z^2_M, eight nearest Euclidean neighbors, a non-planar graph) or the Kagomé lattice (K). Our method is not intrinsically 2-d, but the clear geometry of the 2-d setting serves understanding best.
A configuration on L satisfying the Hard Core Rule is an element in X = {0, 1}^L where no two 1's are nearest neighbors. This rule can naturally be viewed as a zero range infinite repulsive potential, i.e. a hard exclusion rule not unlike that in hard sphere packing. Call the collection of configurations X^hc_L. The exclusion rule naturally imposes a sublattice split on L if it is a k-partite graph. For example on Z^2, a bipartite graph, one can fill all sites of 2Z^2 (Z^2 rescaled by √2 and rotated by 45°) with 1's and the rest of Z^2 must then be all 0's. Call the former the even sublattice, L_e, and the latter the odd sublattice, L_o (it is a (1/2, 1/2)-shifted copy of the former). In rendering these we will present the even/odd sublattices as circle/dot sublattices. In a similar fashion H splits into two identical sublattices and T (a tripartite graph) into three "thinned" copies of T. Both in the dense packing regime of [6] and in the entropic regime of this paper, this splitting will be highly relevant.
Let X_0 ⊂ X = {0, 1}^L. |·| denotes cardinality and x|_A the restriction of x to the set A. The standard measure of the richness of the configuration set is the topological entropy

h_top(X_0) = lim_{n→∞} (1/n) ln |{ x|_{A_n} : x ∈ X_0 }|,

where |A_n| = n and the sequence {A_n} grows in a sufficiently regular fashion.
Remark. For the full shift on any lattice h_top(X) = ln 2 (indicating two independent choices per lattice site). If L = Z the Hard Core model is explicitly solvable and a standard transfer matrix argument implies that h_top = ln((1 + √5)/2), the logarithm of the golden mean (e.g. [11]). For two and higher dimensional lattices the matrix argument breaks down and the exact value of the Hard Core topological entropy remains an unsolved problem except for T (see [1]). In this paper we try to approach and in particular approximate it in a novel way.
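In the solvable case L = Z the value can be checked numerically: h_top is the logarithm of the largest eigenvalue of the 2 × 2 transfer matrix that forbids the pair 11. A small sketch (the helper name is ours):

```python
import math

# Transfer matrix of the golden mean shift: rows/columns are the symbols 0, 1
# at consecutive sites; T[a][b] = 1 iff b may follow a (the pair 1,1 is forbidden).
T = [[1, 1],
     [1, 0]]

def largest_eigenvalue(T, iters=200):
    """Crude power iteration, adequate for a nonnegative 2 x 2 matrix."""
    v = [1.0, 1.0]
    lam = 1.0
    for _ in range(iters):
        w = [sum(T[i][j] * v[j] for j in range(2)) for i in range(2)]
        lam = max(w)
        v = [x / lam for x in w]
    return lam

lam = largest_eigenvalue(T)
print(math.log(lam))   # h_top = ln((1 + sqrt(5))/2) ≈ 0.4812
```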
From the general theory of lattice dynamical systems ([11]) it is known that the shift invariant probability measures on a space of configurations, M, satisfy the maximum principle, h_top = sup_M h_µ, where h_µ is the measure-entropy. The special measures yielding the equality are the measures of maximal entropy. For two and higher dimensional systems they are in general not unique; in all our cases they are believed to be unique, but we do not actually need to know this.
For the idea of our approach recall that, given the entropy function H(P) = −∑_i µ(P_i) ln µ(P_i) for a partition P, it holds that H(P ∨ P′) = H(P) + H(P′ | P). In the case of Z let P^(e) = {{x : x_0 = i}, i = 0, 1} and P^(o) = {{x : x_1 = i}, i = 0, 1} be the (generating) partitions for the even/odd sublattices for the left shift σ. P = P^(e) ∨ P^(o) generates on Z and the conditional entropy formula above implies

H(P) = H(P^(e)) + H(P^(o) | P^(e)).    (1.1)

The formula readily generalizes to Z^d-actions, to multiple sublattices etc. The last term is trivial only in the independent case (when it equals H(P^(o))). It codes the additional entropy to be gained from the odd sublattice, given the even one. In the case of Hard Core it is manageable since 1. the rule is of range one and 2. the hard exclusion greatly simplifies the computation of the conditional probabilities involved. This will become much more transparent in the subsequent analysis.

Lower bounds
We now proceed to establish lower bounds for the topological entropy using the sublattice partition representation (1.1) and a sequential fill-in scheme to circumvent the dependencies. To keep the ideas clear we first present them in the bipartite case and then comment on the k-partite cases.
Let N_e denote an all-0 nearest neighbor neighborhood, in the even lattice, of a site on the odd lattice. In the case of the Z^2 lattice the sites in N_e form the vertices of an even unit diamond, ♢_e (resp. ♢_o with the roles of the sublattices exchanged). On the honeycomb and triangular lattices these sites form triangular arrangements, △ or ▽, or a hexagon.
It will become quite useful to think of the fill-in in terms of forming a tiling. The pieces are 0/1-tiles, which in the Z^2 case are 0/1-diamonds (as above) depending on whether the center site carries 0 or 1. On the honeycomb and triangular lattices the tiles are 0/1-(unit) hexes. Once a sublattice is chosen, one can tile the plane using any combination of 0/1-tiles centered on the sublattice.
Recall that the Bernoulli measure with parameter p, B(p), assigns 1's independently with probability p to each (sub)lattice site and 0's otherwise. Its entropy, denoted by h_B(p), is −p ln p − (1 − p) ln(1 − p).

Proposition 1. The topological entropy of the hard core model on a lattice with a two-way sublattice split is given by

h_top = h^(e) + (1/2) P(N_e) ln 2,    (2.1)

where h^(e) is the entropy of the measure of maximal entropy computed from the even sublattice alone and P(N_e) is the probability of an all-0 even neighborhood of an odd site.
Proof. We attain the maximal entropy by first assigning the marginal of the measure of maximal entropy to the even lattice and then filling in the non-blocked sites on the odd lattice. These are centered at the all-0 even unit diamonds, which have probability P(N_e). B(1/2) is the maximal entropy measure among the B(p); hence we use it on the allowed odd sites, which contributes the factor h_B(1/2) = ln 2. QED

The principle in the Proposition can be directly applied to the square and honeycomb lattices. A further argument is required to cover all regular lattices. In the following result we present these arguments and further extend to the Kagomé lattice, K (a tripartite graph), as well as to the square lattice with the Moore neighborhood, Z^2_M (eight nearest neighbors in the Euclidean metric, a 4-partite graph).

Theorem 1.
The topological entropy of the hard core model is bounded from below on the square (m = 4) and honeycomb (m = 3) lattices by

h_top ≥ (1/2) [ h_B(p) + (1 − p)^m ln 2 ],    (2.2)

on the triangular (m′ = 3) and Kagomé (m′ = 2) lattices by

h_top ≥ (1/3) [ h_B(p) + (1 − p)^{m′} h_B(q) + (1 − p)^{m′} (1 − (1 − p)q)^{m′} ln 2 ],    (2.3)

and on the Z^2_M lattice by

h_top ≥ (1/4) [ h_B(p) + (1 − p)^2 h_B(q) + P_△ h_B(r) + P_♢ ln 2 ],    (2.4)

where P_△ and P_♢ denote the probabilities, computed in the proof, that a triangle resp. diamond site is unforced, and p, q, r ∈ (0, 1).

Proof. The lower bounds (2.2) follow simply from (2.1) by assigning B(p) to the even sublattice, since then P(N_e) = (1 − p)^{|N_e|}, where the exponent is the number of elements of N_e in Z^2 and H respectively.
On the triangular lattice the sublattice partition is three-way. We call the parts the circle, dot and triangle sublattices. They are filled in three stages in the order ◦ → • → ◃. See Figure 1a and b for the notation and the arrangement of the sublattices in a neighborhood of a triangle site.
Suppose the three sublattices are initially all empty. First fill in the circle lattice with B(p), hence the entropy contribution (1/3) h_B(p). Then fill in all dot sites centered at an all-0 ▽ with B(q); this implies the entropy increase (1/3) (1 − p)^3 h_B(q). To update the center site, which is a triangle site, we need to know that its value is not forced. Hence

P(△ unforced) = (1 − p)^3 (1 − (1 − p)q)^3,

which together with the choice B(1/2) on the non-blocked triangle sites gives (2.3).

The Kagomé lattice argument is similar to the triangular one. There are three sublattices involved, all identical copies of the Kagomé, only thinned and reoriented. For the nearest neighbors of a triangle site see Figure 1c. Again we fill in the order ◦ → • → ◃. In the last stage the probability of the triangle site being unforced is now (1 − p)^2 (1 − (1 − p)q)^2.

In the case of the square lattice with the Moore neighborhood there is a four-way sublattice partitioning. We denote the parts by the circle, dot, triangle and diamond sublattices and fill them in this order. The two first terms of the formula (2.4) are straightforward, since circles are laid independently and each dot has exactly two circle neighbors. Furthermore, as above, one can compute the probability P_△ that a triangle site is unforced. For the diamond site at the center of Fig. 1d to contribute to the entropy we need to know the probability that it is unforced, i.e. that all entries in the punctured square S rendered with a dotted line in Fig. 1d are 0's; this is the content of (2.6). Computing the two probabilities in the last expression of (2.6), the formula in the square brackets can finally be simplified to yield the last term of (2.4). QED

The entropy bounds in (2.2)-(2.4) can be maximized with respect to the parameters using standard optimization routines on a desktop machine.
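As a numerical illustration, the square lattice case is a one-parameter maximization. The sketch below assumes the bound has the form (1/2)[h_B(p) + (1 − p)^4 ln 2], which is what (2.1) gives for B(p) on the even sublattice of Z^2; a crude grid search suffices:

```python
import math

def h_B(p):
    """Entropy of the Bernoulli(p) measure."""
    return -p * math.log(p) - (1 - p) * math.log(1 - p)

def square_bound(p, m=4):
    # one half from B(p) on the even sublattice, one half from B(1/2)
    # on the odd sites whose m even neighbors are all 0
    return 0.5 * (h_B(p) + (1 - p) ** m * math.log(2))

best_p, best = max(((i / 10000, square_bound(i / 10000)) for i in range(1, 10000)),
                   key=lambda t: t[1])
print(best_p, best)   # optimum near p ≈ 0.17, bound ≈ 0.392
```

The optimum sits near p ≈ 0.17 with a bound just above 0.392, consistent with the symmetrized value 0.3921 reported below.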
In the square lattice case the topological entropy has been computed to great accuracy (e.g. in [2] to some 40 decimal places) using corner transfer matrix methods. These numerical studies attack the problem in a very different way. Our aim is not to compete in decimal count but rather to present an alternative method, applicable in many lattice set-ups, that estimates the entropy and simultaneously yields some explicit information on the generic configurations/the measure of maximal entropy.

The measure of maximal entropy need not be unique for a 2-d lattice model, but in the case of the hard square gas it is. This follows from the Dobrushin criterion ([5], [10]). Using this knowledge and the results above we now establish bounds for the density of 1's in the generic configurations. The exact value of the upper bound in the following result is given in the Proof, but we prefer to state the result in this more explicit form.

Proposition 2.
In the square lattice case the density of 1's at the equilibrium is in the interval (0.21367, 0.25806).
Proof. Let ρ_e be the density of 1's on the even lattice and let c denote the expected number of 0's that a 1 forces on the odd lattice. Since exactly half of the non-forced sites will be 1's, it must by the uniqueness of the measure of maximal entropy hold that (2 + c) ρ_e = 1. Hence under it

ρ_e = 1/(2 + c)  and  P(♢_e) = 2/(2 + c)

on both lattices. The last identity is due to exactly half of the all-0 even diamonds ♢_e giving rise to all the 1's on the other sublattice (the 0-diamond as defined in the beginning of the section).
The entropy of any distribution on the even lattice with 1-density ρ_e is bounded from above by the entropy of the Bernoulli distribution with parameter ρ_e. Hence the total entropy at that 1-density is bounded from above by

(1/2) [ h_B(1/(2 + c)) + (2/(2 + c)) ln 2 ].
Bounding this expression by the h_top of Table 1 ([2] or [9]) yields an upper bound 2.6801 for c, which in turn gives the lower bound 1/(2 + 2.6801) ≈ 0.21367 for ρ_e.

The upper bound for ρ_e follows from a lower bound for c, which we establish using a monotonicity argument. The 1's on, say, the even lattice are B(1/2)-distributed on the non-forced sites. Call this set F. Pick a site in it which has symbol 1. How many sites will this entry block? Let F′ be a superset of F. Then clearly E(c | F) ≥ E(c | F′), as in a bigger domain the 1 is more likely to share the blocking with a nearest neighbor 1 on the same sublattice. Hence a lower bound is obtained by calculating the blocking for a 1 with its eight nearest even neighbors also in F. Enumerating the 2^8 possible neighborhood configurations and weighting them uniformly according to the B(1/2)-distribution we get the lower bound 15/8 for c. This in turn implies the upper bound 8/31 ≈ 0.25806 for ρ_e. QED

Since our first estimate for the lower bound on Z^2 is associated with densities incompatible with Proposition 2, we will try out a symmetric variant of the theme. The (near) equality of the densities on the sublattices should be a natural property of a measure corresponding to a good lower bound, since the measure of maximal entropy is believed to be unique in all our cases. In the last three cases in Table 1 the non-equality of the densities isn't far off, but for the first two we present an "equalization".

Proposition 3. To obtain equal 1-densities p on both sublattices, the unforced odd sites should be filled with B(p′), where p′ satisfies p′ (1 − p)^{|N_e|} = p.

Proof. In the case of two sublattices, after the B(p) distribution of 1's on the even lattice there is a density (1 − p)^{|N_e|} of unforcing neighborhoods on this sublattice. These have to produce the correct density of 1's on the odd lattice; hence we need the odd lattice flip probability p′ to satisfy p′ (1 − p)^{|N_e|} = p. QED

Using Proposition 3 one can optimize the square lattice topological entropy bound to (a slightly worse value) 0.3921 at the common density level 0.2015.
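The enumeration behind the lower bound 15/8 for c can be reproduced exactly. The sketch below assumes (this is our reading of the argument) that the blocking of a forced odd 0 is shared equally among all adjacent even 1's; it then averages the central 1's share over the 2^8 uniform configurations of its surrounding even sites:

```python
from itertools import product
from fractions import Fraction

# even-sublattice sites surrounding a 1 placed at the origin of Z^2
ring = [(1, 1), (1, -1), (-1, 1), (-1, -1), (2, 0), (-2, 0), (0, 2), (0, -2)]
# the four odd sites forced to 0 by the central 1
odd_nbrs = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def other_even_nbrs(o):
    """Even-lattice neighbors of the odd site o, apart from the origin."""
    x, y = o
    return [s for s in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)) if s != (0, 0)]

total = Fraction(0)
for bits in product([0, 1], repeat=8):      # B(1/2) on the surrounding even sites
    occ = dict(zip(ring, bits))
    for o in odd_nbrs:
        sharers = 1 + sum(occ[s] for s in other_even_nbrs(o))
        total += Fraction(1, sharers)       # the central 1's share of blocking o
c_lower = total / 2 ** 8
print(c_lower)                              # 15/8
```

Exact rational arithmetic gives precisely 15/8, and the two endpoints of the density interval are then 1/(2 + 2.6801) ≈ 0.21367 and 8/31 ≈ 0.25806.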
In view of Proposition 2 this indicates that the entropy generating 1's are not yet packed in densely enough. In the case of the honeycomb lattice the corresponding values are 0.427875 at density 0.2284.

Higher order blocks
To improve the entropy bounds and more importantly to get some insight into the character of the measure of maximal entropy we now consider more complicated optimization schemes involving Bernoulli-distributed blocks on sublattices.
We first illustrate the ideas on hexagonal and triangular lattices.
A three-hex is obtained by gluing together three unit hexes so that each has two joint sides. Figure 2a illustrates three such three-hexes next to each other (for reference, lattice edges are indicated as thin dotted lines in one of the unit hexes). Note that the unit tiles in each of them are all centered on the same sublattice, the circle lattice in this case (call the tile a circle three-hex). The dots of the other sublattice are all at the centers of the three-hexes or on their extremities (three of them are indicated). Three-hexes of the same orientation obviously tile the plane. Let B(p), p = (p_0, p_1, p_2, p_3), be the Bernoulli distribution on circle three-hexes, the probability that a three-hex has exactly k 1-tiles in it in a given orientation being p_k (so p_0 + 3p_1 + 3p_2 + p_3 = 1). Its entropy is then

h_B(p) = −(p_0 ln p_0 + 3 p_1 ln p_1 + 3 p_2 ln p_2 + p_3 ln p_3).

Proposition. On the honeycomb lattice

h_top ≥ (1/6) [ h_B(p) + (p_0 + 2 (p_0 + 2p_1 + p_2)^3) ln 2 ],    (3.1)

and for the triangular lattice a corresponding bound (3.2) holds with an additional dot-stage parameter q, where p_i, q ∈ (0, 1).
Proof. For the construction of the measure we fill in the lattice in the order ◦ → •. If the circle three-hexes are distributed Bernoulli with parameter p, the entropy contribution from the circle lattice will be (1/2)·(1/3) h_B(p), where the factors result from the sublattice density and the fact that we distribute triples. As in Proposition 1, in the next stage the maximal entropy choice for the unforced sites on the dot lattice is the B(1/2) distribution. The total density of available sites is computed at the two different types of dot sites (as in Fig. 2a, the three dots indicated) and is

(1/3) [ p_0 + 2 (p_0 + 2p_1 + p_2)^3 ],

where the coefficient 2 and the power 3 follow from the fact that at two of the three dot sites three adjacent three-hexes coincide. These formulas combined and simplified yield (3.1).
On the triangular lattice a third sublattice enters and the fill-in order is then ◦ → • → ◃. The entropy contribution from the Bernoulli circle three-hexes is now (1/3)·(1/3) h_B(p), since each sublattice is identical, hence of density 1/3. In the second stage the unforced dot sites are filled with the B(q) distribution. Their density is computed as above to be (1/3)[p_0 + 2(p_0 + 2p_1 + p_2)^3]; hence the entropy contribution from the dot lattice will be this expression multiplied by (1/3) h_B(q). In the final stage the unforced triangle sites are filled by B(1/2). Their density in the full lattice results from considering the two different arrangements of four neighboring three-hexes as shown in Fig. 2c (top and bottom cases for the top and bottom expressions in (3.3)). The formulas merged and simplified result in (3.2). QED

The optimal densities are higher than in the 1 × 1 case (Table 1), indicating a better packing of the 1's on the sublattices. Moreover they have significantly less variation, which is to be expected since the densities are equal for the measure of maximal entropy.
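One consistency check on the honeycomb three-hex bound: restricted to the product family p = ((1 − u)^3, u(1 − u)^2, u^2(1 − u), u^3), the bound (3.1) collapses to the 1 × 1 bound (1/2)[h_B(u) + (1 − u)^3 ln 2], so the three-hex optimum can only improve on it. The sketch below assumes the bound in the form (1/6)[h_B(p) + (p_0 + 2(p_0 + 2p_1 + p_2)^3) ln 2] (cf. (3.1) and the proof above) and scans the product family:

```python
import math

def h3(p0, p1, p2, p3):
    """Entropy of the three-hex distribution; the k = 1, 2 classes each
    occur in 3 orientations."""
    terms = [(1, p0), (3, p1), (3, p2), (1, p3)]
    return -sum(m * p * math.log(p) for m, p in terms if p > 0)

def bound(p0, p1, p2, p3):
    t = p0 + 2 * p1 + p2            # probability that a single tile is a 0-tile
    return (h3(p0, p1, p2, p3) + (p0 + 2 * t ** 3) * math.log(2)) / 6

best = 0.0
for i in range(1, 100):             # product measures B(u), tile by tile
    u = i / 200
    best = max(best, bound((1 - u) ** 3, u * (1 - u) ** 2, u ** 2 * (1 - u), u ** 3))
print(best)                         # ≈ 0.4279, the 1 x 1 honeycomb bound
```

Optimizing over the full simplex p_0 + 3p_1 + 3p_2 + p_3 = 1 instead of the product family can then only raise this value.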
Let us now return to our original motivation, the Hard Core model on the square lattice. Compounding the principles above and some further ideas we will implement an increasing sequence of lower bounds converging to the topological entropy. Along the way we'll get more explicit information on the configurations favored by the measure of maximal entropy. 1-tiles in the Z^2 case are diamonds of side length √2 centered on either of the two sublattices. A k-omino is formed by gluing together k such 1-tiles along edges. If k = n^2 and the 1-tiles are in a diamond formation, we call the result an n × n-block. There are 2^{n^2} of them. The optimization results in Section 2 were for the 1 × 1-blocks.
Consider next 2 × 2-blocks. There are 16 of them, but after assuming isotropy, i.e. that blocks that are rotations of each other are distributed with equal probability (inevitable when the measure of maximal entropy is unique), there are only five free parameters for a Bernoulli distribution B(p) on them (p = (p_0, p_1, p_21, p_22, p_3, p_4), p_0 + 4p_1 + 4p_21 + 2p_22 + 4p_3 + p_4 = 1. Here the first subindex of p refers to the number of 1's in the block, and p_21 and p_22 denote the two different arrangements of two 1's in the block (side by side and across, respectively)).
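The multiplicities in this constraint can be verified by enumerating the 16 blocks and grouping them into rotation orbits (a small sketch; the names are ours):

```python
from itertools import product

def rot(block):
    """Rotate a 2 x 2 block (tl, tr, bl, br) by 90 degrees."""
    a, b, c, d = block
    return (c, a, d, b)

classes = {}
for block in product([0, 1], repeat=4):    # the 16 possible 2 x 2 blocks
    orbit, x = set(), block
    for _ in range(4):
        orbit.add(x)
        x = rot(x)
    classes[min(orbit)] = len(orbit)

print(len(classes), sorted(classes.values()))   # 6 classes of sizes [1, 1, 2, 4, 4, 4]
```

The six orbit sizes 1, 4, 4, 2, 4, 1 are exactly the coefficients in p_0 + 4p_1 + 4p_21 + 2p_22 + 4p_3 + p_4 = 1.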
The entropy contribution from the even lattice (on which we first distribute the 1's using B(p)) is now (1/2)·(1/4) h_B(p). The density of the unforced sites on the odd lattice can be computed from the three cases indicated in Figure 3a. These combined yield a lower bound for h_top, which is optimized in Table 3 (second row).

In the block size 3 × 3 there are initially 511 free block probabilities to optimize. When rotational invariance is imposed the variable number is reduced, and additionally we expect blocks that are reflections of each other to have equal probabilities at the optimum. After these two types of symmetries are accounted for, the number of free variables is 101. In this size and in larger blocks another feature appears which enables further variable weeding. Consider the block in Figure 3c. The symbol assignments at the sites x, y and z are irrelevant in the sense that the existing 1's in the 3 × 3-block already force all the odd sites (to carry 0's) that x, y and z might force if any of them were 1's. Hence there are 2^3 blocks of equal probability. This combined with the symmetry assumptions above yields a total of 64 blocks with identical probabilities at the optimum (this is actually the maximum reduction achievable in this block size). Combing through the set of all blocks for this feature results in a reduction by a factor of about 11, to a final set of 46 variables. Their optimal values have been computed and the results are in Table 3.
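The first reduction step can be verified directly: enumerating all 2^9 = 512 blocks and identifying blocks related by a rotation or reflection (the dihedral group of the square) leaves 102 symmetry classes, i.e. 101 free probabilities after normalization. A sketch:

```python
from itertools import product

def rot(b):
    """Rotate a 3 x 3 block (tuple of rows) by 90 degrees."""
    return tuple(tuple(b[2 - c][r] for c in range(3)) for r in range(3))

def refl(b):
    """Mirror a 3 x 3 block left-right."""
    return tuple(tuple(b[r][2 - c] for c in range(3)) for r in range(3))

orbits = set()
for bits in product([0, 1], repeat=9):
    b = tuple(tuple(bits[3 * r + c] for c in range(3)) for r in range(3))
    orbit = set()
    for _ in range(4):               # 8 symmetries: 4 rotations x optional mirror
        orbit.add(b)
        orbit.add(refl(b))
        b = rot(b)
    orbits.add(min(orbit))

print(len(orbits))                   # 102 classes, i.e. 101 free probabilities
```

The further weak-site weeding down to 46 variables depends on the forcing geometry of Figure 3c and is not reproduced here.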
Subsequently we call sites like x, y and z above weak with respect to the rest of the given block. Only the corner sites of a block cannot ever be weak.
The procedure of variable reduction is highly useful, since the rotational and reflection symmetry search above as well as the weak site identification can be automated. Moreover the reduction improves significantly at every stage: for example in the next block size of 4 × 4 the initial variable number of 65,536 shrinks 66-fold to 991 final free variables.
Note also that the optima in block size n × n can be utilized as indicated in Figure 3d to initiate the search in the next larger block size. Once e.g. the 3 × 3 subblock optimum probabilities are known, the added half frame (e_1, . . . , e_7) should be assigned B(p) entries with p computed from the 3 × 3 blocks. With tailored optimization routines one should be able to deal with several thousand variables in the larger block sizes. All the optimizations here were done with non-specialized code using Mathematica.
The optimal block probabilities satisfy a useful monotonicity property, which we establish next. For this let B_i, i = 1, 2, be n × n-blocks, whose subsets of 1's we refer to as B_i^(1). There is a partial order on the blocks via the B_i^(1), using ordinary set inclusion: B_i ≼ B_j iff B_i^(1) ⊆ B_j^(1). Let the optimal probabilities for the blocks be p = (p_0, p_1, p_2, p_3, . . . , p_l), l = 2^{n^2} (no reductions done yet and no particular order in the coordinates).

Theorem 2. Suppose that B_1^(1) ⊂ B_2^(1). Then at the optimum p_1 ≥ p_2, with equality exactly when all sites in B_2^(1) \ B_1^(1) are weak with respect to B_1^(1).

Proof. The optimal lower bound is given by

h(p) = (1/2) [ (1/n^2) h_B(p) + P(N_e) ln 2 ],

where N_e is the even unit diamond of all 0's as in Section 2. Let the B_i be such that B_1^(1) ⊂ B_2^(1) and let p_1 = p + ε, p_2 = p − ε, 0 ≤ |ε| < p. Denote by h_ε(p) the lower bound with the given p_1 and p_2. To prove the result we consider the entropy variation under the probability change of the two blocks:

∆h_ε(p) = h_ε(p) − h(p)
  = (1/(2n^2)) [ −(p + ε) ln(p + ε) − (p − ε) ln(p − ε) + 2p ln p ]
  + c^(1) [ P_{1,ε}(N_e) − P_1(N_e) ]
  + c^(2) [ P_{2,ε}(N_e) − P_2(N_e) ]
  + c^(4) [ P_{4,ε}(N_e) − P_4(N_e) ],    (3.6)

where the c^(k) > 0 are combinatorial weights and P_{k,ε}(N_e) and P_k(N_e) are the N_e-diamond probabilities computed from the different arrangements involving k = 1, 2 or 4 n × n-blocks as in Figures 3a and b, for the block probability choices p ± ε or p for both.
By ln(1 + x) ≈ x the first square bracket behaves for small ε like −ε^2/p. If B_2^(1) \ B_1^(1) contains only weak sites with respect to B_1^(1), then the blocks B_i allow exactly the same sites to flip on the odd lattice, hence each of the three last lines in (3.6) vanishes. The sole contribution to ∆h_ε(p) then comes from the first square bracket, and since this is negative for small but nonzero ε, it must be that p_1 = p_2 at the optimum.
If B_2^(1) \ B_1^(1) contains non-weak sites with respect to B_1, let us first assume that they force k odd interior sites (recall that the odd sites are the vertices of the grids in Figure 3; there are (n − 1)^2 such interior sites in an n × n-block). Let m be the number of non-forced odd interior sites over the block B_1. Then

P_{1,ε}(N_e) − P_1(N_e) = (1/(n − 1)^2) [ (p + ε) m + (p − ε)(m − k) − p m − p (m − k) ] + · · · = kε/(n − 1)^2,

where the dots refer to the contributions from the other blocks; all these terms cancel out, since the other block probabilities are identical.
If the non-weak sites only force odd interior sites, then by the geometry of the set-up the two last lines in (3.6) are immediately zero. If e extra odd edge, off-corner sites are forced, a similar argument to the above gives the estimate (c_2 + eε/(4(n − 1)))^2 − c_2^2, c_2 > 0, for P_{2,ε}(N_e) − P_2(N_e), so the next to last line in (3.6) has the first order behavior c_3 ε, c_3 > 0. Some added bookkeeping yields P_{4,ε}(N_e) − P_4(N_e) = (c_4 + lε/4)^4 − c_4^4 ≈ c_5 ε, c_5 > 0 (l is the number of odd corners forced). The leading orders of the terms in the square brackets in (3.6) together yield c_1 ε^2 + dε, c_1 < 0, d ≥ 0. If there are non-weak sites in B_2^(1) \ B_1^(1), then d > 0, and choosing ε > 0 small enough makes ∆h_ε(p) > 0; hence p_1 > p_2 at the optimum. QED

Remarks. 1. Intuitively the result says that if neither of two comparable even blocks gives more subsequent choice on the odd lattice, for maximum entropy one should weight them equally. Otherwise one should favor the one giving more choice on the odd lattice. 2. One can readily see some chains imposed by the order in Figure 4: 0 ≺ 12 ≺ 23 ≺ 31 or 0 ≺ 11 ≺ 21/22 ≺ 33 etc. The monotonicity can be utilized in limiting the number of n × n-blocks optimized for larger values of n (dropping blocks with least probability, as dictated by the Theorem, and with least multiplicity (most symmetric)).

The correlation structure inside the measure of maximal entropy gradually presents itself in the Bernoulli approximations when we consider higher order blocks. Correlations between the blocks are zero because of independence, but within the blocks it is worth making comparisons.
By adding the optimum probabilities of all 3 × 3 blocks at a given density level k/9 = 0, 1/9, . . . , 1 we obtain the "density profile" of this measure (here k is the number of 1's in the block).
Suppose next that we generate the 3 × 3 blocks from 1 × 1 Bernoulli entries with the appropriate optimal p for the 1's (as found above). By adding these up we again obtain a density profile, this time for the 1 × 1 optimal Bernoulli measure at the resolution level of the block size 3 × 3. The 3 × 3 blocks can of course be generated using the optimal 2 × 2 blocks as well, and yet another density profile results. These three discrete plots are rendered as curves in Figure 5. Perhaps the most notable feature here is the flattening of the distributions as the block size increases, i.e. the total block probabilities move towards the tails (while their means stay constant, around 0.22). The curves cross between the density levels 1/3 and 4/9: below this crossover the shorter range Bernoulli measures favor light 3 × 3 blocks, above it they discount heavier blocks in comparison to the optimal 3 × 3 Bernoulli measure.
When examined more closely one sees that the total probability of 3 × 3 blocks at a given density level essentially comes from at most three different kinds of local configurations (up to the reductions above, that is). These seem to be "grown": when moving from density level d to level d + 1/9 the high probability blocks are generated by adding a (contiguous) 1 to an existing high probability block. This mechanism cannot prevail when the 3 × 3 blocks are generated independently from smaller blocks. Consequently the small block curves in Figure 5 have suppressed tails. We expect this phenomenon to persist in the higher order Bernoulli blocks as well, and thereby to be a significant feature in the long range correlations of the measure of maximal entropy.