On the smoothed complexity of convex hulls

We establish an upper bound on the smoothed complexity of convex hulls in R^d under uniform Euclidean (ℓ2) noise. Specifically, let {p*_1, p*_2, ..., p*_n} be an arbitrary set of n points in the unit ball in R^d and let p_i = p*_i + x_i, where x_1, x_2, ..., x_n are chosen independently from the unit ball of radius δ. We show that the expected complexity, measured as the number of faces of all dimensions, of the convex hull of {p_1, p_2, ..., p_n} is O(n^{2−4/(d+1)} (1 + 1/δ)^{d−1}); the magnitude δ of the noise may vary with n. For d = 2 this bound improves to O(n^{2/3} (1 + δ^{−2/3})). We also analyze the expected complexity of the convex hull of ℓ2 and Gaussian perturbations of a nice sample of a sphere, giving a lower bound on the smoothed complexity. We identify the different regimes in terms of the scale of the noise, as a function of n, and show that as the magnitude of the noise increases, the complexity varies monotonically for Gaussian noise but non-monotonically for ℓ2 noise.


Introduction
In this paper we study the smoothed complexity [9] of convex hulls, a structure whose importance in computational geometry no longer needs arguing. This smoothed complexity analysis involves two distinct technical difficulties. It first requires studying the average complexity of the convex hull of a random perturbation of a given initial point set, that is, performing average-case analysis, albeit for an atypical probability distribution. It then asks to control the maximum of that expected complexity over all choices of the initial point set. We present new insights on both issues for two noise models: uniform, bounded-radius, Euclidean noise and Gaussian noise.
Motivations. Combinatorial structures induced by geometric data are some of the basic building blocks of computational geometry; typical examples include convex hulls or Voronoi diagrams of finite point sets, lattices of polytopes obtained as intersections of sets of half-spaces, and intersection graphs or nerves of families of balls. The size of these structures usually depends not only on the number n of geometric primitives (points, half-spaces, balls, ...), but also on their relative position: for instance, the number of faces of the Voronoi diagram of n points in R^d is Θ(n) if these points lie on a regular grid but Θ(n^{⌈d/2⌉}) when they lie on the moment curve. A simple, conservative measure is the worst-case complexity, which expresses, as a function of n, the maximum complexity over all inputs of size n.
For geometric structures, the worst-case bounds are often attained by generic but brittle constructions: the high complexity remains if sufficiently small perturbations are applied, but vanishes under large enough perturbations. One may wonder about the relevance of worst-case bounds in practical situations, where input points come from noisy measurements and are represented using bounded precision. Assessing this relevance requires quantifying the stability of worst-case examples, and this is precisely what the smoothed complexity captures.
Formally, the smoothed complexity of the convex hull is

max over {p*_1, ..., p*_n} ⊆ K of E[card(CH({p*_1 + x_1, p*_2 + x_2, ..., p*_n + x_n}))],

where K is some bounded domain in R^d of fixed size, card(CH(X)) denotes the combinatorial complexity, i.e. the total number of faces of all dimensions, of the convex hull of X, and x_1, x_2, ..., x_n are independent random variables, usually identically distributed. The goal is to express this bound as a function of the number n of points and some parameter that describes the amplitude of the perturbations x_i. The only examples of smoothed complexity analysis of geometric structures (rather than algorithms) that we are aware of are some aspects of random polytopes related to the simplex algorithm [9] and visibility maps on terrains [3]. In this paper we consider two types of perturbation: the ℓ2 perturbation, where the x_i are drawn independently from the ball of radius δ > 0 centered at the origin, and the Gaussian perturbation, where the x_i are drawn independently from the d-dimensional multivariate Gaussian distribution with mean vector 0 and covariance matrix σ²I_d. We will assume that the domain K containing the initial point set is the unit ball centered at the origin, so that the ratio between the scale of the initial configuration and that of the perturbation is entirely captured by the perturbation parameter, δ or σ.
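To make the model concrete, here is a minimal two-dimensional simulation of the ℓ2 perturbation model (the helper names are ours, not the paper's), using a pure-Python monotone-chain convex hull. Averaging `smoothed_hull_size` over seeds reproduces experiments in the spirit of Figure 2:

```python
import math
import random

def convex_hull(points):
    # Andrew's monotone chain; returns the strictly convex hull vertices
    # in counter-clockwise order.
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def l2_perturb(p, delta, rng):
    # Uniform sample from the disk of radius delta around p
    # (the d = 2 case of the l2 noise model).
    r = delta * math.sqrt(rng.random())
    t = 2 * math.pi * rng.random()
    return (p[0] + r * math.cos(t), p[1] + r * math.sin(t))

def smoothed_hull_size(initial, delta, seed=0):
    rng = random.Random(seed)
    return len(convex_hull([l2_perturb(p, delta, rng) for p in initial]))

# Initial configuration: regular n-gon inscribed in the unit circle.
n = 200
ngon = [(math.cos(2 * math.pi * i / n), math.sin(2 * math.pi * i / n))
        for i in range(n)]
```

For δ far below the inter-vertex distance all n perturbed points stay extreme, while for large δ the hull size drops, as the analysis of Section 4 predicts.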
New results. Our first result is the following upper bound (Theorem 7) on the smoothed complexity of the convex hull under ℓ2 perturbation:

E[card(CH(P))] = O(n^{2−4/(d+1)} (1 + 1/δ)^{d−1}).

Here K is the unit ball in R^d. The bound is asymptotic as n → ∞ and the constant in the O() depends on d but is independent of δ, which may vary with n. The proof essentially decomposes the initial point set into a "boundary" part and an "interior" part and controls each contribution separately. The classification is very flexible and emerges naturally from a witness-collector mechanism [4] proposed by some of the authors to measure the complexity of random geometric structures.
Going in the other direction, one may wonder which original point sets {p*_i}_{1≤i≤n} ⊆ K are extremal for the smoothed complexity. In the plane, two natural candidates are the case where the p*_i are all at the origin, and the case where the p*_i form a regular n-gon inscribed in K. The former case corresponds to a classical model of random polytopes and is well understood (see below). Experiments for the latter case suggest a surprising difference in the behaviour of ℓ2 and Gaussian perturbations (refer to Figure 2): while for Gaussian perturbation the expected complexity consistently decreases as the amplitude of the noise increases, for ℓ2 perturbation some non-monotonicity appears. Motivated by these observations we performed a complete analysis of the expected complexity of the convex hull of ℓ2 perturbations of a good sample of the unit sphere and of Gaussian perturbations of a regular n-gon. Our bounds (Theorems 10 and 13) delineate the main regimes in (δ, n) and (σ, n); they confirm the existence of the observed non-monotonicity for ℓ2 perturbation and its absence for Gaussian perturbation, and provide a complete analysis of a candidate lower bound for the smoothed complexity (see Figure 1).

Related work.
This work builds on a previous work by some of the authors developing a method to derive, with minimum effort, rough estimates on the complexity of some random geometric hypergraphs [4]. The smoothed complexity bound uses ingredients from that witness-collector method in a new way. Theorems 10 and 13 build on one of the case analyses from that work, extend it to all scales of perturbation and to Gaussian noise, and dispose of extraneous log factors using an idea which we learned from [5] and systematize here (Lemma 3).
The only previous bound on the smoothed complexity of convex hulls is due to Damerow and Sohler [2]. Their main insight is a quantitative version of the following intuitive assertion: if the magnitude of the perturbations is sufficiently large compared to the scale of the initial input, the initial position of the points does not matter, and the smoothed complexity can be subsumed by some average-case analysis, up to constant factors (footnote 1). A smoothed complexity bound then follows by a simple rescaling argument (footnote 2). It should first be noted that the rescaling argument only applies to bound the number of vertices of the convex hull, since faces of higher dimension may come from more than one cell. Next, Damerow and Sohler argue that the average-case bound controls the smoothed complexity for dominating points; in several situations the dominating points largely outnumber the extreme points, so this bound may be overly conservative. Last, our analysis gives finer bounds than the rescaling argument of Damerow and Sohler alone. Consider for instance the perturbation of the vertices of the unit-size n-gon by a Gaussian noise of standard deviation σ. The rescaling argument yields an expected number of dominating points of O(√log n · log(σn)/σ) (footnote 3), whereas our Theorem 13 gives finer estimates.
Our work is also related to the classical question of the expected complexity of random polytopes.Starting with the seminal articles of Renyi and Sulanke [7,8] in the 1960's, a series of works in stochastic geometry led to precise quantitative statements (eg.central limit theorems) for models such as convex hulls of points sampled i.i.d.from a Gaussian distribution or the uniform measure on a convex body; we refer the interested reader to the recent survey of Reitzner [6].Our work departs from this line of research by refining the model rather than the estimates; to put it bluntly, we content ourselves with Θ()'s in place of central limit theorems but aim for analyzing more complicated probabilistic models where points are not identically distributed and laws are not given explicitly.

The Witness-Collector technique
Analyzing the smoothed complexity of convex hulls, or other geometric structures such as Delaunay triangulations, reduces to the following core problem. We are given a range space (R^d, R) and a finite set P ⊆ R^d of random independent points, and want to estimate the expected complexity of some geometric hypergraph H = {P ∩ r : r ∈ R} induced by R on P. In plain English, a subset Q ⊆ P is a hyperedge of H if and only if there exists r ∈ R such that r ∩ P = Q. When the ranges are the half-spaces delimited by hyperplanes, the set of vertices of any k-dimensional face of the convex hull of P is an element of H of cardinality k + 1; the converse is true for the vertices (k = 0) and, while it may fail for higher-dimensional faces, the overcounting often turns out to be negligible. Our goal is thus to estimate the complexity of H^(k+1), the set of hyperedges of H of size k + 1. From now on we focus on bounding card(H^(k)) in the case where R is the set of half-spaces in R^d.

Footnote 1: Specifically, they show that if n points from a region of diameter r are perturbed by a Gaussian noise of standard deviation Ω(r √log n) or an ℓ∞ noise of amplitude Ω(r ∛(n/log n)), then the expected number of dominating points is the same as in the average-case analysis.
Footnote 2: Split the input domain into cells of size r = O(σ/√log n), assume that each cell contains all of the initial point set, and charge each of them with the average-case bound.

Static witness-collector pairs
To estimate the complexity of a geometric hypergraph H^(k) we follow a simple and general approach dubbed the witness-collector method. The idea is to break down R into a small number of subsets of ranges R_1 ∪ R_2 ∪ ... ∪ R_m and to associate to each R_i two regions, a witness W_i and a collector C_i, with the following properties:
(a) W_i contains at least k points of P with high probability,
(b) C_i contains on average a small number of points of P,
(c) every range of R_i either contains W_i or is contained in C_i.
Condition (c) ensures that when a witness W_i contains at least k points of P, it witnesses that all hyperedges induced by R_i are collected by C_i. In particular, the expected number of hyperedges of H of size k, conditioned on the event that every witness contains at least k points of P, is bounded from above by Σ_{i=1}^m E[card(C_i ∩ P)^k]. By (a), the conditioning event fails with small probability, and when that happens we can afford to use the worst-case bound card(P)^k. This bound is expressed in terms of the E[card(C_i ∩ P)^k], whereas (b) controls E[card(C_i ∩ P)]; this is not an issue as long as the positions of the points are independent random variables:
Lemma 1 ([4]). Let X = X_1 + X_2 + ... + X_n, where the X_i are independently distributed random variables with values in {0, 1}. For any fixed k, E[X^k] = O(1 + E[X]^k).
By a Chernoff bound, Condition (a) reduces to controlling the expectation of card(W_i ∩ P):
Lemma 2 ([4, Lemma 1]). Let P be a set of n random points of R^d, independently distributed. If W is a region that contains on average at least k log n points of P, then the probability that W contains less than k points of P is O(n^{−k}).
The simplest use of this approach consists in placing explicitly fixed pairs of witnesses and collectors that "cover" the distribution to analyze (see [4] for several examples). This typically results in bounds containing some extra log factors (coming from Lemma 2).
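As a quick sanity check of the moment bound of Lemma 1 (in the form reconstructed above, E[X^k] = O(1 + E[X]^k)), one can compute binomial moments exactly; the helper names below are ours:

```python
from math import comb

def binom_moment(n, p, k):
    # E[X^k] for X ~ Binomial(n, p), computed exactly from the pmf.
    return sum((i ** k) * (p ** i) * ((1 - p) ** (n - i)) * comb(n, i)
               for i in range(n + 1))

def lemma1_ratio(n, p, k):
    # E[X^k] / (1 + E[X]^k); Lemma 1 asserts this stays bounded for fixed k.
    return binom_moment(n, p, k) / (1 + (n * p) ** k)

# The ratio stays bounded whether the mean E[X] = np is tiny or large.
ratios = [lemma1_ratio(n, p, 3)
          for n in (10, 50, 200) for p in (0.001, 0.1, 0.9)]
```

The bound is tightest around E[X] ≈ 1, where neither the linear nor the k-th power term dominates.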

Adaptive witness-collector pairs
When using Lemma 2 to ensure Condition (a), we increase the expected size of each W i ∩ P so that all witnesses contain enough points for most realizations of P .Since we typically need that W i ⊆ C i , this also overloads the collectors and results in the extra log factors mentioned above.An idea to obtain sharper bounds, first introduced in [5], is to make W i and C i random variables depending on the random point set P .By tailoring the witness-collector pairs to each realization of the point set P , very few collectors will need to be large, and those will be negligible in the total.
More formally, we again break down R into a small number of subsets of ranges R_1 ∪ R_2 ∪ ... ∪ R_m and associate to each R_i a sequence {(W_i^j, C_i^j)}_{j≤log_2 n} of witness-collector pairs. We replace (a)-(c) by the following conditions for all j:
(a') E[card(W_i^j ∩ P)] = Ω(j),
(b') E[card(C_i^j ∩ P)] = O(j),
(c') every range of R_i either contains W_i^j or is contained in C_i^j,
(d') W_i^j ⊆ W_i^{j+1} and C_i^j ⊆ C_i^{j+1},
(e') W_i^j ⊆ C_i^j.
Lemma 3. Let (R^d, R) be a range space, P ⊆ R^d a set of n random, independent points, and H the hypergraph induced by R on P. Assume that R = R_1 ∪ R_2 ∪ ... ∪ R_m and that for each i ∈ {1, 2, ..., m} we have a sequence {(W_i^j, C_i^j)}_{j≤log_2 n} of witness-collector pairs satisfying (a'), (b'), (c'), (d') and (e') for all i, j. Then, for any fixed k, E[card(H^(k))] = O(m).
Proof. Let i ∈ {1, 2, ..., m}. We let d_i denote the smallest j such that W_i^j contains at least k points of P and set C_i = C_i^{d_i}; both d_i and C_i are random variables depending on P. All hyperedges of H of size k that are induced by R_i are, by (c') and the definition of d_i, contained in C_i. We claim that for some λ > 0, depending only on the constant in the Ω() in (a'), we have P[d_i ≥ j] = O(e^{−λj}) for j ≤ log_2 n. Indeed, observe that card(W_i^j ∩ P) is a sum of independently distributed random variables (one per point of P) with values in {0, 1}, so the claim follows from a Chernoff bound.
We also claim that (b') implies that E[card(C_i^j ∩ P) | d_i ≥ j] = O(j). Indeed, working with the complement of C_i^j, we have E[card(P \ C_i^j)] = Σ_{p∈P} P[p ∉ C_i^j], and by (e') and the independence of the points, conditioning on d_i ≥ j can only increase each of these probabilities. Moving back to the complement (P has n points in total), each collector C_i contains on average few points: E[card(C_i ∩ P)] = O(Σ_j j e^{−λj}) = O(1), and, by Lemma 1, the expected number of hyperedges collected by each C_i is O(1), so E[card(H^(k))] = O(m). We can turn the O() of Lemma 3 into a Θ() by using an additional condition:
Lemma 4. Assume that the conditions of Lemma 3 are satisfied and that the sequence of
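Spelling out the summation that closes this step (our rewriting, using P[d_i ≥ j] = O(e^{−λj}) and the conditional bound E[card(C_i^j ∩ P) | d_i ≥ j] = O(j), together with {d_i = j} ⊆ {d_i ≥ j}):

```latex
\mathbf{E}\bigl[\operatorname{card}(C_i \cap P)\bigr]
  \;=\; \sum_{j \ge 1} \mathbf{E}\bigl[\operatorname{card}(C_i^j \cap P)\mid d_i = j\bigr]\,
        \mathbf{P}[d_i = j]
  \;\le\; \sum_{j \ge 1} O\!\left(j\, e^{-\lambda j}\right)
  \;=\; O(1),
```

since the series Σ_{j≥1} j e^{−λj} = e^{−λ}/(1 − e^{−λ})² converges for any fixed λ > 0.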

witness-collector pairs also satisfies:
(f') there exist γ > 0 independent of n and I ⊆ {1, 2, ..., m} of size Ω(m) such that the witnesses {W_i^1}_{i∈I} are pairwise disjoint and P[W_i^1 ∩ P ≠ ∅] ≥ γ for every i ∈ I.
Then E[card(H^(k))] = Ω(m).
We note that the extra condition of the lemma holds for the hypergraphs that we study in this paper, where the ranges are half-spaces. Indeed, here H^(1) is exactly the set of vertices of the convex hull of P, and every vertex belongs to at least one k-dimensional face.

Example: application to Gaussian polygons
To demonstrate how the witness-collector technique works we give a simple proof of the two-dimensional case of a classical bound on the complexity of Gaussian polytopes [7].
We first recall a few technical properties of Gaussian distributions. The Q-function is defined as the tail probability of the standard Gaussian distribution: if X ∼ N(0, 1), then Q(x) = P[X > x]. We use the following upper and lower bounds: for x > 0,

(x/(1+x²)) · (1/√(2π)) e^{−x²/2} ≤ Q(x) ≤ (1/x) · (1/√(2π)) e^{−x²/2}.    (1)

Proof. Writing φ(x) = (1/√(2π)) e^{−x²/2}, the upper bound comes from Q(x) = ∫_x^∞ φ(t) dt ≤ ∫_x^∞ (t/x) φ(t) dt = φ(x)/x, and the lower bound comes from the fact that x ↦ Q(x) − (x/(1+x²)) φ(x) is decreasing (its derivative is −2φ(x)/(1+x²)²) and tends to 0 as x → ∞, hence is nonnegative. We use the so-called Lambert function W_0, defined as the solution of the functional equation f(x)e^{f(x)} = x [1, Equation (3.1)]. Let us emphasize that for x ≥ 0 its definition is nonambiguous and that it satisfies W_0(x) ∼ log x as x → ∞ [1, Equations (4.6) and (4.9)]. We can now prove the announced bound: for any fixed k, the expected number of k-dimensional faces of the convex hull of a set P of n points drawn independently from the standard Gaussian distribution in the plane is Θ(√log n).
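A quick numerical check of the sandwich in Equation (1), with Q computed from the standard library's erfc (the helper names are ours):

```python
import math

def phi(x):
    # Standard normal density.
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def Q(x):
    # Tail probability of the standard Gaussian: Q(x) = P[X > x].
    return 0.5 * math.erfc(x / math.sqrt(2))

# Equation (1): x/(1+x^2) * phi(x) <= Q(x) <= phi(x)/x for x > 0.
for x in (0.25, 0.5, 1.0, 2.0, 4.0, 8.0):
    assert x / (1 + x * x) * phi(x) <= Q(x) <= phi(x) / x
```

Both bounds are asymptotically tight: the ratio of the two sides tends to 1 as x → ∞, which is what makes them usable to solve Q(h) = Θ(j/n) for the witness heights below.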
Proof. We break the set of half-planes R into smaller range spaces R_1, ..., R_m by covering the space of directions, seen as the unit circle ∂B(0, 1), by circular arcs Sc_1, ..., Sc_m of angle α and with inner normals u_1, ..., u_m. We have m = Θ(1/α) arcs in the cover. We construct each witness as a semi-infinite strip with inner direction u_i (see the green region in the figure below). For i ≤ m and j ≤ log_2 n, the witness W_i^j is defined as the set of points p = x v_i + y u_i (where (v_i, u_i) is an orthonormal basis) such that |x| ≤ 1 and y > h(j), where h(j) is called the height of the witness. The collector C_i^j is defined as the union of the half-planes in R_i that do not contain W_i^j (see the blue region in the figure on the right), so that Conditions (c'), (d') and (e') hold.
Every point p ∈ P writes p = x_i v_i + y_i u_i with x_i, y_i ∼ N(0, 1) independent. Thus, the probability for p to be in W_i^j is (1 − 2Q(1)) Q(h(j)) = Θ(Q(h(j))), so E[card(W_i^j ∩ P)] = n Θ(Q(h(j))), and choosing h(j) so that this expectation is Θ(j) ensures that Condition (a') holds.
To compute the expected number of points in C_i^j, we just compute the expected number of points in one of the extreme half-planes, see the figure above. The height of the left-hand half-plane is h̄ = (h(j) − tan(α/2)) cos(α/2) = h(j) − O(α), so the expected number of points in the collector is bounded by 2nQ(h̄), and using Equation (1) this is O(j) for α = Θ(1/√log n). For the lower bound, observe that for n large enough W_i^1 is inside a wedge of angle O(α) from the origin, so a constant fraction of the W_i^1 are disjoint. Moreover, we have E[card(W_i^1 ∩ P)] = Θ(1), so Chernoff's bound ensures that P[W_i^1 ∩ P ≠ ∅] is bounded from below by a positive constant. Condition (f') of Lemma 4 is thus verified and, by Lemma 4, E[card(H^(k))] is also Ω(m) = Ω(√log n).

A smoothed complexity bound for ℓ2 perturbations
Let K_x ⊆ R^d denote the ball of radius x centered at the origin. We define the intersection depth of a half-space W and a ball B(p, ρ) with center p and radius ρ as ρ − d(p, W). Let P* be a set of n points, chosen arbitrarily in K_1, and let P be a random perturbation of P* obtained by applying to each point, independently, an ℓ2 perturbation of amplitude δ. We let H denote the geometric hypergraph induced on P by the set R of half-spaces in R^d. Using the witness-collector technique we prove the following smoothed complexity bound:

Theorem 7. E[card(H^(k))] = O(n^{2−4/(d+1)} (1 + 1/δ)^{d−1}).

The bound is asymptotic, for n → ∞, and the constant hidden in the O() depends on k and d but is uniform in δ. In particular δ can be a function of n. Before we prove Theorem 7 some remarks are in order. In dimension 2, the bound asserts that for any input in K_1, an ℓ2 noise of amplitude δ ≫ n^{−1/3} suffices to guarantee an expected sub-linear complexity. In dimension 3, the bound exceeds the worst-case bound and is thus trivial. In dimension d, for any input in K_1 an ℓ2 noise of amplitude δ ≫ n^{−4/(d²−1)} suffices to guarantee an expected sub-quadratic complexity.
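The thresholds in the last two remarks follow from elementary algebra on the bound of Theorem 7; for the sub-quadratic regime:

```latex
n^{2-\frac{4}{d+1}}\left(1+\frac{1}{\delta}\right)^{d-1} = o\!\left(n^2\right)
\;\Longleftarrow\; \delta^{-(d-1)} = o\!\left(n^{\frac{4}{d+1}}\right)
\;\Longleftrightarrow\; \delta \gg n^{-\frac{4}{(d+1)(d-1)}} = n^{-\frac{4}{d^2-1}},
```

and for d = 2 the bound reads O(n^{2/3}(1 + 1/δ)), which is o(n) exactly when 1/δ = o(n^{1/3}), i.e. δ ≫ n^{−1/3}.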
Proof. We break up the set R of ranges. To that end, we consider a covering Sc_1, Sc_2, ..., Sc_m of ∂K_{1+δ} by m spherical caps of radius r = δ n^{−2/(d+1)}; a minimal-size covering uses m = O(((1+δ)/r)^{d−1}) = O(n^{2−4/(d+1)} (1 + 1/δ)^{d−1}) caps. For i ∈ {1, 2, ..., m} we consider the set of directions of outer normals to ∂K_{1+δ} at a point of Sc_i, and let R_i denote the set of half-spaces in R^d with inner normal in that set.
We next set up, for each R_i, a family {(W_i^j, C_i^j)}_j of witness-collector pairs. Let u_i denote the normal to ∂K_{1+δ} at the center of the cap Sc_i. Each witness W_i^j is a half-space with inner normal u_i whose intersection depth with K_{1+δ} is set so that it contains on average j points of P. Each collector C_i^j is defined as the union of the half-spaces with inner direction in Sc_i that do not contain W_i^j ∩ K_{1+δ}. This construction readily satisfies Conditions (a'), (c'), (d') and (e'). Moreover, we claim (Claim 8) that for any perturbed point p ∈ P we have P[p ∈ C_i^j] = O(P[p ∈ W_i^j] + 1/n), and therefore that our construction also satisfies Condition (b'). The statement of the theorem then follows from Lemma 3.

Claim 8. P[p ∈ C_i^j] = O(P[p ∈ W_i^j] + 1/n) for any perturbed point p ∈ P.
Proof. Let p* ∈ P* and let p be its perturbed copy. We fix some indices 1 ≤ i ≤ m and 1 ≤ j ≤ log_2 n and write w = P[p ∈ W_i^j] and c = P[p ∈ C_i^j]. Let ν denote the volume of a (d−1)-dimensional ball of radius 1. The volume of the intersection of a ball of radius δ with a half-space that cuts it with depth t is V(t) = ν ∫_0^t (s(2δ − s))^{(d−1)/2} ds; V is increasing on [0, 2δ] for any fixed δ. Moreover, for 0 < t ≤ λt ≤ 2δ we have V(λt) ≤ λ^{(d+1)/2} V(t). Refer to Figure 3-left and let h_w denote the intersection depth at which W_i^j intersects B(p*, δ). Observe that C_i^j ∩ P is contained in a half-space C̃_i^j that intersects K_{1+δ} with depth at most h_w + h. Since the diameter of C̃_i^j ∩ P is at most 2 + 2δ, considerations on similar triangles (see Figure 3-right) show that h ≤ 2r. If h_w ≤ 2r then we obtain the first part of the announced bound on c, namely c = O(1/n). If h_w > 2r then we can assume that c > 2w, as otherwise the claim holds trivially. In particular h_w ≤ δ. For n large enough (independently of δ), we also have h < δ, and the depths of intersection of both W_i^j and C̃_i^j lie in the interval [0, 2δ]. We then have c/w ≤ V(h_w + h)/V(h_w) ≤ (1 + h/h_w)^{(d+1)/2} = O(1), the last inequality coming from h_w > 2r ≥ h.
In two dimensions, this bound can be combined with the rescaling argument of Damerow and Sohler:

Corollary 9. For d = 2, the smoothed complexity of the convex hull of n points placed in the unit disk and perturbed by an ℓ2 noise of amplitude δ is O(n^{2/3} (1 + δ^{−2/3})).

(This bound implies that in dimension 2, for any input in K_1, an ℓ2 noise of amplitude δ ≫ n^{−1/2} suffices to guarantee an expected sub-linear complexity, improving on Theorem 7.) Proof. We cover K_1, which contains the initial points, by Θ(1/r²) cells of size r. Fix some ordering on these cells and let P_i denote the set of perturbed points whose unperturbed points were initially in the i-th cell. We can bound the number of vertices of the convex hull of the perturbed point set by the sum of the numbers of points on the convex hulls of the P_i. So let n_i denote the number of initial points contained in the i-th cell. We apply the previous bound, noting that the scale of the initial point set was multiplied by r; since the combinatorial structure of the convex hull is unchanged by scaling, this is equivalent to multiplying the scale of the noise by 1/r. The expected number of vertices on the convex hull of P_i is therefore O(n_i^{2/3} (1 + r/δ)). Summing over all cells and recalling that the n_i sum to n, the concavity of x ↦ x^{2/3} yields Σ_i n_i^{2/3} = O(r^{−2/3} n^{2/3}). For δ ≥ 1 we use the bound O(n^{2/3}) from Theorem 7. For δ < 1 we use the previous bound with r = δ. Altogether we obtain that the expected number of vertices of CH(P) is O(n^{2/3} (1 + δ^{−2/3})).
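The concavity step in the proof of Corollary 9 is an instance of Jensen's inequality applied to the concave map x ↦ x^{2/3}; with M = Θ(1/r²) cells and Σ_i n_i = n:

```latex
\sum_{i=1}^{M} n_i^{2/3}
  \;=\; M \cdot \frac{1}{M}\sum_{i=1}^{M} n_i^{2/3}
  \;\le\; M \left(\frac{1}{M}\sum_{i=1}^{M} n_i\right)^{2/3}
  \;=\; M^{1/3}\, n^{2/3}
  \;=\; O\!\left(r^{-2/3}\, n^{2/3}\right).
```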

Perturbing a convex polyhedron by a uniform ℓ2 noise
We now turn our attention to a class of configurations that are natural candidates to maximize the smoothed complexity of convex hulls in 2 and 3 dimensions.Recall that an (ε, κ)-sample of a surface is a point set such that any ball of radius ε centered on the surface contains between 1 and κ points of the set.

SoCG'15
As for Theorem 7, the bounds are asymptotic, for n → ∞, and the constants hidden in the Θ() depend on k and d but are uniform in δ. In particular, δ can be a function of n. Before we prove Theorem 10 some remarks are in order. The first bound merely reflects that a point remains extreme when the noise is small compared to the distance to the nearest hyperplane spanned by points in its vicinity. The last bound is of the order of magnitude of the expected number of (k−1)-faces of the convex hull of n random points chosen independently in a ball of radius δ; this confirms, and quantifies, the intuition that the position of the original points no longer matters when the amplitude of the noise is really large compared to the scale of the initial configuration.
The second and third bounds reveal that as the amplitude of the perturbation increases, the expected size of the convex hull does not vary monotonically (see Figure 2): the lowest expected complexity is achieved by applying a noise of amplitude roughly the diameter of the initial configuration.
Proof. Let h be the maximal depth at which a half-space containing on average k points of P intersects K_{1+δ}; such a half-space intersects ∂K_{1+δ} in a spherical cap of radius r = Θ(√(h(1+δ))). We break up R into smaller range spaces R_1, R_2, ..., R_m by covering ∂K_{1+δ} by spherical caps Sc_1, Sc_2, ..., Sc_m of radius r, and letting R_i stand for the set of half-spaces in R^d with inner normal in Sc_i. We need, and can take, m = Θ(((1+δ)/r)^{d−1}).
Let u i denote the normal to ∂K 1+δ in the center of the cap Sc i .For j = 1, 2, . . ., log 2 n we define W j i as the half-space with inner normal u i and containing on average j points of P .We let C j i be the union of half-spaces of R i that do not contain W j i ∩ K 1+δ .As defined, these pairs of witness-collectors satisfy Conditions (a'), (c'), (d') and (e') of Lemma 3.
First we remark that it is easy to extract from the W_i^1 a family of size Ω(m) such that the W_i^1 ∩ P are disjoint, since W_i^1 ∩ K_{1+δ} is seen from the origin within an angle Θ(1/m). Second, the extremal point in direction u_i is in W_i^1 as soon as W_i^1 is non-empty. Thus we have E[card(W_i^1 ∩ P)] = Θ(1), and Chernoff's bound ensures that P[W_i^1 ∩ P ≠ ∅] ≥ 0.39, so Condition (f') of Lemma 4 is verified. We claim that C_i^j ∩ K_{1+δ} is contained in the half-space D_i^j with inner normal u_i cutting ∂K_{1+δ} in a cap of radius 3r_i^j, where r_i^j denotes the radius of the cap W_i^j ∩ ∂K_{1+δ}. Indeed, for any half-space X, the region X ∩ K_{1+δ} is the convex hull of X ∩ ∂K_{1+δ}. It follows that X ∈ R_i does not contain W_i^j if and only if X ∩ ∂K_{1+δ} does not contain W_i^j ∩ ∂K_{1+δ}. This implies that for any X ∈ R_i the cap X ∩ ∂K_{1+δ} is contained in a cap with the same center as W_i^j ∩ ∂K_{1+δ} and radius 3r_i^j. A half-space cutting out a cap of radius r_x in ∂K_{1+δ} intersects K_{1+δ} with depth h_x = Θ(r_x²/(1+δ)). Tripling the radius of a cap thus multiplies the depth of intersection by 9, and Claim 12 then implies that E[card(C_i^j ∩ P)] = O(E[card(W_i^j ∩ P)]) = O(j). By Lemmas 3 and 4 we thus have E[card(H^(k))] = Θ(m).
The expressions for the various ranges of δ are then obtained by plugging the expressions for h obtained from Claim 11.
Proof of Claim 11. The set of points of ∂K_1 at which we can center a ball of radius δ that intersects the half-space is a spherical cap, whose radius goes to 0 with h and is Θ(1) otherwise. By the sampling condition, the area of this cap translates into a number of points of P*, and the expressions for h follow in each regime of δ.
Proof of Claim 12. The proof of Claim 11 shows that, in all the cases, the expected number of points depends polynomially on h. Thus, multiplying the depth by 9 multiplies the expected number of points by a constant (depending only on d).

Gaussian perturbation of a polygon
We now investigate the same class of configurations as in Section 4, replacing the uniform ℓ2 noise by a Gaussian noise. Since the calculations are more involved we only consider the two-dimensional case. Our result is the following:

Proof of Theorem 13. We cover the space of directions S¹, seen as the unit circle ∂B(0, 1), by circular arcs Sc_1, Sc_2, ..., Sc_m. Each circular arc Sc_i has center u_i and makes an angle α = Θ(1/m) that depends on σ and n. We break up R into smaller range spaces R_1, R_2, ..., R_m, where R_i denotes the set of half-planes with inner normal in Sc_i. We define the witnesses {W_i^j}_{1≤j≤log_2 n} and the collectors {C_i^j}_{1≤j≤log_2 n} with the usual goals in mind: W_i^j should have inner normal u_i and contain Θ(j) points on average, and C_i^j is defined as the union of the half-planes in R_i that do not contain W_i^j. We first use Lemma 14 to find suitable values of h_j and w_j, depending on σ and n, such that we can set W_i^j = W(w_j, h_j, u_i). We then get, again from Lemma 14, a suitable value of α that ensures that setting C_i^j = C(w_j, h_j, u_i, α) satisfies our objectives. This family of witness-collectors satisfies Conditions (a')-(e'), so Lemma 3 yields that E[card(CH(P))] = O(1/α). We now split the range of σ according to the conditions of Lemma 14, where we set j = log_2 n. Using W_0(x) ∼ log x as x → ∞ we obtain three regimes. The upper end of I_2 yields the same behaviour as I_3, so we merge them to obtain the three regimes of Theorem 13.

Figure 1. A comparison of our smoothed complexity bound of Theorem 7 and two lower bounds, where the initial points are placed respectively at the vertices of a unit-size n-gon (Theorem 10) and at the origin. The left-hand figure is for d = 2, the right-hand figure is for d = 8, and all bounds are for uniform ℓ2 perturbation. A data point with coordinates (x, y) means that for a perturbation with δ of magnitude n^x the expected size of the convex hull grows as n^y, subpolynomial terms being ignored. The worst-case bound is given as a reference. The constants in the O() and Ω() have been ignored as their influence vanishes as n → ∞ in this coordinate system.

Figure 2. Experimental results for the complexity of the convex hull of a perturbation of the regular n-gon inscribed in the unit circle. Left: Gaussian perturbation of variance σ². Right: ℓ2 perturbation of amplitude δ. Each data point corresponds to an average over 1000 experiments.

Footnote 3: Split the original domain into cells of size r = O(σ/√log n). The input points are distributed evenly (up to constant factors) among Θ(1/r) of these cells. Each such cell contains m = O(rn) input points and contributes on average O(log m) dominating points; here considering dominating points makes a difference. Altogether, the expected number of dominating points is O(log(rn)/r) = O(√log n · log(σn)/σ).

Theorem 10. Let P* be an (ε, Θ(1))-sample of the unit sphere in R^d and let P = {p_i = p*_i + δx_i}, where x_1, x_2, ..., x_n are independent random variables chosen uniformly in the unit ball. For any fixed k, the expected number of k-dimensional faces of CH(P) is as follows.

If n^{2/(1−d)} ≤ δ ≤ n^{−2/(d+1)} then each relevant point p* contributes at most Vol(W ∩ (p* + δK)) to E[card(W ∩ P)], and at least a constant fraction (depending only on d) of the relevant points contributes at least a fraction of that; the number of relevant points is Θ(min(n h^{(d−1)/2}, n)). If δ ≥ n^{−2/(d+1)} then W touches a linear number of the balls B(p*_i, δ), so a linear number of points are relevant. Thus the number of relevant points is Θ(n), and this gives h = Θ(n^{−2/(d+1)} δ).
Claim 12. Let W and W′ be two half-spaces that intersect K_{1+δ} with depth h and 9h respectively; then E[card(W′ ∩ P)] = O(E[card(W ∩ P)]).

I_1 = [0, log⁴n/n²], I_2 = [log⁴n/n², √log n] and I_3 = [√log n, +∞). We further split I_2 by observing that for σ ≈ log⁴n/n² the behaviour of 1/α is dominated by the term g_σ = O(⁴√(log(n√σ))/√σ), whereas for σ ≈ √log n it is dominated by g̃_σ = O(√log n). (Inside I_3, 1/α is always dominated by g̃_σ.) The switch occurs around the solution σ_0(n) of the equation equating these two terms.
, the ball B(p*, δ) is contained in W. It follows that there are Θ(n h^{(d−1)/2}) points p* ∈ P* such that (p* + K_δ) ∩ W ≠ ∅ if h → 0, and Θ(n) otherwise. For the rest of this proof, call such a point relevant. How much a relevant point contributes to E[card(W ∩ P)] depends on the magnitude of δ: if δ ≤ n^{2/(1−d)} then for at least a constant fraction (depending only on d) of the relevant points p*