Computation of the Hausdorff Distance between Two Compact Convex Sets

Kenneth Lange 1,2,3
1 Department of Computational Medicine, University of California, Los Angeles, CA 90095, USA
2 Department of Human Genetics, University of California, Los Angeles, CA 90095, USA
3 Department of Statistics, University of California, Los Angeles, CA 90095, USA
Algorithms 2023, 16(10), 471; https://doi.org/10.3390/a16100471
Submission received: 4 September 2023 / Revised: 25 September 2023 / Accepted: 25 September 2023 / Published: 6 October 2023

Abstract

The Hausdorff distance between two closed sets has important theoretical and practical applications. Yet apart from finite point clouds, there appear to be no generic algorithms for computing this quantity. Because many infinite sets are defined by algebraic equalities and inequalities, this is a huge gap. The current paper constructs Frank–Wolfe and projected gradient ascent algorithms for computing the Hausdorff distance between two compact convex sets. Although these algorithms are guaranteed to go uphill, they can become trapped by local maxima. To avoid this defect, we investigate a homotopy method that gradually deforms two balls into the two target sets. The Frank–Wolfe and projected gradient algorithms are tested on two pairs (A, B) of compact convex sets, where: (1) A is the box [−1, 1] translated by the vector 1 of all ones and B is the intersection of the unit ball and the non-negative orthant; and (2) A is the probability simplex and B is the ℓ1 unit ball translated by 1. For problem (2), we find the Hausdorff distance analytically. Projected gradient ascent is more reliable than the Frank–Wolfe algorithm and finds the exact solution of problem (2). Homotopy improves the performance of both algorithms when the exact solution is unknown or unattained.

1. Introduction

The Hausdorff distance [1,2] between two compact sets A and B in a Euclidean space R^p is defined as
d_H(A, B) = max{d(A, B), d(B, A)},
where
d(A, B) = max_{x ∈ A} dist(x, B), d(B, A) = max_{x ∈ B} dist(x, A),
dist(x, A) = min_{y ∈ A} ‖x − y‖, and dist(x, B) = min_{y ∈ B} ‖x − y‖. Here, ‖·‖ denotes the standard Euclidean norm on R^p. The Blaschke formula
d_H(A, B) = max_x |dist(x, A) − dist(x, B)|
serves as an alternative definition of Hausdorff distance [3]. Wikipedia has a helpful entry on Hausdorff distance with a two-dimensional illustration. The theoretical value of Hausdorff distance stems from the fact that it turns the collection of compact sets into a complete separable metric space. In general, Hausdorff distance is challenging to compute.
Hausdorff distance has many applications. For instance, it is instrumental in defining continuity, compactness, and completeness for integral operators, differential operators, and Fourier transforms in functional analysis [4,5]. These concepts are in turn relevant to the analysis of the existence, uniqueness, and stability of solutions to various equations in mathematics and physics [6]. In computer vision, Hausdorff distance enables object recognition [7] and allows one to quantify the difference between two different representations of the same object [8]. Edge detection and pixelization are usually necessary preprocessing steps. Other applications include robotics [9], the fractal modeling of biological structures [10], and the numerical computation of attractors in dynamical systems [11].
The current paper derives and tests two new algorithms for computing the distance d_H(A, B) between two compact convex sets. The previous work on this intrinsically interesting problem is mostly limited to finite point clouds, usually in two and three dimensions [12,13,14,15,16]. The formulas
d(A, B) = max_{x ∈ A} min_{y ∈ B} ‖x − y‖ and d(B, A) = max_{x ∈ B} min_{y ∈ A} ‖x − y‖
can be naively implemented for finite sets A and B. The naive implementation benefits from fast software such as the Julia Distances package, which exploits matrix multiplication to find all Euclidean distances between the column vectors of two matrices. The ImageDistances.jl package (github.com/JuliaImages/Images.jl) appears to rely on the naive method [12] for computing Hausdorff distances. Once the distances d_{ij} = ‖x_i − y_j‖ are computed, the computational complexity of the naive method is O(mn), where m and n equal the number of points x_i ∈ A and y_j ∈ B, respectively. This complexity can be reduced by various devices, as suggested in the cited references.
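For concreteness, here is a minimal sketch of the naive point-cloud estimate; the function name naive_hausdorff is ours, the columns of A and B hold the sampled points, and the hausdorff function of Appendix A.2 is the pairwise-distance version actually used in our experiments.

using LinearAlgebra

# Naive O(mn) estimate of d_H(A, B) for two finite point clouds stored
# column-wise in the matrices A (m points) and B (n points).
function naive_hausdorff(A::Matrix{T}, B::Matrix{T}) where T <: Real
  dAB = maximum(minimum(norm(A[:, i] - B[:, j]) for j in axes(B, 2)) for i in axes(A, 2))
  dBA = maximum(minimum(norm(A[:, i] - B[:, j]) for i in axes(A, 2)) for j in axes(B, 2))
  return max(dAB, dBA)
end

# toy usage: the clouds {(0,0), (1,0)} and {(0,2), (0,3)} give d_H = 3
A = [0.0 1.0; 0.0 0.0]
B = [0.0 0.0; 2.0 3.0]
naive_hausdorff(A, B)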
The algorithms for computing the Hausdorff distance between two polygons [17], two curves [18] in the plane, and a curve and a surface [19] represent exceptions to the discrete point-cloud method. More complicated sets defined by algebraic formulas can be attacked by pixelating the sets or sampling them at a dense set of random points. The methods of continuous optimization offer an attractive alternative to the various current methods. Although the calculation of d H ( A , B ) takes us outside the comfortable realm of convex optimization, the tools of convex calculus are highly pertinent. To their credit, these tools perform well in higher dimensions. It remains to be seen whether the Hausdorff distance will have practical value in shape recognition in this regime. It would be prudent to keep the possibility in mind.
To calculate d_H(A, B), it clearly suffices to calculate d(A, B) and d(B, A) separately and take the maximum. For many sets B, dist(x, B) is explicitly known or can be computed by an efficient algorithm [20,21]. The Euclidean distance dist(x, B) can be expressed as
dist(x, B) = ‖x − P_B(x)‖,
where P_B(x) is the projection of x onto B. When B is convex, the projection operator P_B(x) is single-valued. For closed nonconvex sets, the projection is multiple-valued for some x, but these points are very rare; indeed, they are of Lebesgue measure 0 [22]. The scaled squared distance function ½ dist(x, B)² is smoother than dist(x, B). One can show that the former function is differentiable with gradient
∇[½ dist(x, B)²] = x − P_B(x)
at all points x where P_B(x) is single-valued [23].
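As a quick sanity check of these two formulas, the following sketch (ours, with the illustrative helper names ball_proj and halfdistsq) evaluates dist(x, B) through the projection and compares the gradient x − P_B(x) against a central finite-difference approximation for B equal to the unit ball; Appendix A.2 contains the BallProjection routine used in the experiments.

using LinearAlgebra

ball_proj(x) = norm(x) > 1 ? x / norm(x) : x       # P_B for the unit ball B
halfdistsq(x) = 0.5 * norm(x - ball_proj(x))^2      # (1/2) dist(x, B)^2

x = [2.0, 1.0, -2.0]
g = x - ball_proj(x)                                # gradient from the formula above
h = 1.0e-6
g_fd = similar(x)                                   # central finite differences
for i in eachindex(x)
  e = zeros(length(x)); e[i] = h
  g_fd[i] = (halfdistsq(x + e) - halfdistsq(x - e)) / (2h)
end
maximum(abs.(g - g_fd))                             # small (finite-difference error only)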
The support function σ_A(v) = max_{x ∈ A} v⊤x and the corresponding supporting set supp_A(v) = argmax_{x ∈ A} v⊤x also play key roles in our algorithm development. The maximum defining d(A, B) exists and necessarily occurs on the boundary of A. In fact, Bauer’s maximum principle [24,25] implies that the maximum is achieved at an extreme point x of A. The point of B closest to x occurs on the boundary of B, but not necessarily at an extreme point of B. The supporting set supp_A(v) is a singleton if and only if σ_A(v) is differentiable at v.
Our first algorithm for computing d(A, B),
x_{n+1} ∈ supp_A(v_n) = supp_A[x_n − P_B(x_n)],
is a Frank–Wolfe algorithm [26,27]. Our second algorithm,
x_{n+1} = P_A(x_n + v_n) = P_A[2x_n − P_B(x_n)],
is a projected gradient algorithm [21,28]. Here v_n denotes the gradient x_n − P_B(x_n). Both algorithms force the objective function ½ dist(x, B)² uphill and are iterated until convergence. Because the algorithms can become trapped by local maxima, they are not infallible in finding a global maximum. To overcome this tendency, we introduce a homotopy method that gradually transitions the calculation of the Hausdorff distance from the simple case of two balls to the actual problem of finding d(A, B). Homotopy is one of several heuristics for maximizing multi-modal functions [29]. The crucial difference between projected gradient ascent and the Frank–Wolfe algorithm is that one depends on projection alone while the other depends on both projection and supporting sets. This difference obviously favors projected gradient ascent.
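The following condensed sketch (ours, with illustrative helper names) runs both maps on problem (1) of Section 3 in the plane, where A is the box [0, 2]² and B is the unit ball intersected with the non-negative orthant; the general-purpose versions, FrankWolfe and farthest, appear in Appendix A.2.

using LinearAlgebra

PA(x) = clamp.(x, 0.0, 2.0)                                         # projection onto A = [0, 2]^2
suppA(v) = [v[i] > 0 ? 2.0 : (v[i] < 0 ? 0.0 : 1.0) for i in 1:2]   # supporting point of A
PB(x) = (x = max.(x, 0.0); norm(x) > 1 ? x / norm(x) : x)           # projection onto B

function run_maps(x0; iters = 100)
  x_fw, x_pg = copy(x0), copy(x0)
  for n in 1:iters
    x_fw = suppA(x_fw - PB(x_fw))     # Frank–Wolfe map
    x_pg = PA(2x_pg - PB(x_pg))       # projected gradient map
  end
  return norm(x_fw - PB(x_fw)), norm(x_pg - PB(x_pg))
end

run_maps([1.5, 1.5])   # start in A \ B; both estimates reach 2*sqrt(2) - 1 ≈ 1.8284 from this start

From this particular starting point both maps land on the corner (2, 2) of A, whose distance to B equals the projected gradient Maximum reported for this pair in Table 1.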
As a roadmap to the rest of this paper, Section 2 presents (a) the basic notation, (b) a brief overview of the minorization–maximization principle that stands behind the new iterative algorithms (2) and (3), (c) a summary of support functions and supporting sets, (d) the derivation of both algorithms, (e) a description of the homotopy method, and (f) an explanation of relevant convergence theory. Section 3 tests our two iterative algorithms and the point-cloud method on two representative problems. Both iterative algorithms are orders of magnitude faster than the point-cloud method and benefit from the homotopy heuristic. Projected gradient ascent is also more accurate than the point-cloud method. Section 4 summarizes our conclusions, mentions limitations, and suggests new avenues for research. Appendix A proves some of the mathematical assertions made in the text and provides the full Julia code for our numerical examples. Note that the code is organized from bottom to top, with the main program occurring at the bottom.

2. Materials and Methods

As a prelude to the derivation of the two algorithms, it would be helpful to clarify our notation and make a few remarks about MM algorithms, support functions, and supporting sets. For projected gradient ascent and its homotopy modification, we provide algorithm flowcharts.

2.1. Notation

Here are the notational conventions used throughout this article. All vectors appear in boldface. All entries of the vectors 0 and 1 equal 0 or 1, respectively. The superscript ⊤ indicates a vector transpose. The Euclidean norm of a vector x is denoted by ‖x‖. For a smooth real-valued function f(x), we write its gradient (column vector of partial derivatives) as ∇f(x) and its first differential (row vector of partial derivatives) as df(x) = ∇f(x)⊤. Finally, we denote the directional derivative of f(x) in the direction v by d_v f(x). When f(x) is differentiable, d_v f(x) = df(x)v.

2.2. MM Algorithms

The algorithms explored here are minorization–maximization (MM) algorithms [23,30]. They depend on surrogate functions g(x ∣ x_n) that minorize the original objective f(x) around the current iterate x_n in the sense of satisfying the tangency condition g(x_n ∣ x_n) = f(x_n) and the domination condition g(x ∣ x_n) ≤ f(x) for all x. The surrogate balances the two goals of hugging the objective tightly and simplifying maximization. Maximizing the surrogate produces the next iterate x_{n+1} and drives the objective uphill because
f(x_{n+1}) ≥ g(x_{n+1} ∣ x_n) ≥ g(x_n ∣ x_n) = f(x_n).
In minimization, the surrogate majorizes the objective and is instead minimized. The tangency condition remains the same, but the domination condition g(x ∣ x_n) ≥ f(x) is now reversed. The celebrated EM (expectation–maximization) principle for maximum likelihood estimation with missing data [31] is a special case of minorization–maximization. In the EM setting, Jensen’s inequality supplies the surrogate as the expectation of the complete data log-likelihood conditional on the observed data.
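As a tiny numerical illustration of the two MM conditions (our own sketch, with illustrative names), take the objective ½ dist(x, B)² for B the unit ball and the supporting-hyperplane surrogate that reappears in Section 2.4.

using LinearAlgebra

Pball(x) = norm(x) > 1 ? x / norm(x) : x     # projection onto the unit ball B
f(x) = 0.5 * norm(x - Pball(x))^2            # objective (1/2) dist(x, B)^2

xn = [3.0, 0.0]                              # current iterate
vn = xn - Pball(xn)                          # gradient of f at xn
g(x) = f(xn) + dot(vn, x - xn)               # linear surrogate g(x | xn)

g(xn) == f(xn)                                                    # tangency condition
all(g(x) <= f(x) + 1e-12 for x in (randn(2) for _ in 1:1000))     # domination, sampled at random points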

2.3. Support Functions and Supporting Sets

The set of supporting points supp_S(v) = argmax_{x ∈ S} v⊤x determines the support function σ_S(v). For instance, the ℓ1 unit ball has supp_S(v) equal to the convex hull of the vertices sgn(v_i)e_i for which |v_i| is largest. For the unit simplex, supp_S(v) equals the convex hull of the vertices e_i for which v_i is largest. For a Minkowski sum A + B, supp_{A+B}(v) = supp_A(v) + supp_B(v). If S is either a convex cone or a compact convex set that is symmetric about the origin with a non-empty interior, then its support function σ_S(y) has a special form. In the former case, σ_S(y) is the indicator of the dual cone, and in the latter case, σ_S(y) is a norm. The support function of a Cartesian product is the sum of the separate support functions, and the corresponding supporting set is the Cartesian product of the separate supporting sets. For instance, the box [a, b] reduces to the one-dimensional case: component i of supp_{[a,b]}(v) is a_i when v_i < 0, b_i when v_i > 0, and all of [a_i, b_i] when v_i = 0. There are many other known support functions. For instance, one-sided penalties such as c max{y, 0} and asymmetric penalties such as σ_{[−a,b]}(y) are covered by the current theory. Indeed, the former is the support function generated by the interval [0, c]. The latter is the tilted absolute value equal to by for y ≥ 0 and to −ay for y < 0. The support function of a singleton {a} is the linear function a⊤y. More generally, the support function of the convex hull of the set {a_1, …, a_d} is max_{1 ≤ i ≤ d} a_i⊤y. The support function of the line segment from −a to a is the absolute value |a⊤y|. Adding a constant vector a to a set S produces the support function σ_S(y) + a⊤y. It is trivial to project onto S + a if one can project onto S. For any non-negative scalar c, the set cS has support function cσ_S(y). Again, it is trivial to project onto cS if one can project onto S.
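Two of the supporting sets just described are easy to code. The one-line helpers below (ours, mirroring SimplexSupp and L1BallSupp of Appendix A.2) return a supporting point and confirm that the support function value v⊤supp_S(v) reduces to the largest coordinate and to ‖v‖_∞, respectively.

simplex_supp(v) = (x = zeros(length(v)); x[argmax(v)] = 1.0; x)                      # unit simplex
l1ball_supp(v) = (x = zeros(length(v)); i = argmax(abs.(v)); x[i] = sign(v[i]); x)   # ℓ1 unit ball

v = [0.4, -1.3, 0.7]
simplex_supp(v)              # e_3, since v_3 is the largest coordinate
l1ball_supp(v)               # -e_2, since |v_2| has the largest magnitude
sum(v .* simplex_supp(v))    # support function value max_i v_i = 0.7
sum(v .* l1ball_supp(v))     # support function value ‖v‖_∞ = 1.3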

2.4. Derivation of the Algorithms

When B is convex, the supporting hyperplane inequality
½ dist(x, B)² ≥ ½ dist(x_n, B)² + v_n⊤(x − x_n)
for v_n = x_n − P_B(x_n) generates our first algorithm. Maximizing this minorization over x ∈ A is equivalent to calculating the support function σ_A(v_n) = sup_{x ∈ A} v_n⊤x. If supp_A(v) denotes the set of points in A where σ_A(v) is attained, then the Frank–Wolfe algorithm just described can be phrased as
x_{n+1} ∈ supp_A(v_n) = supp_A[x_n − P_B(x_n)].
The MM principle guarantees that the next iterate x_{n+1} will tend to increase the objective ½ dist(x, B)² unless v_n = 0. This exception occurs when x_n ∈ B. Fortunately, when the iterates begin in A \ B, they remain in A \ B. Indeed, if x_n ∈ A \ B but x_{n+1} ∈ B, then the obtuse angle condition [23] requires
[x_n − P_B(x_n)]⊤x_{n+1} ≤ [x_n − P_B(x_n)]⊤P_B(x_n) < [x_n − P_B(x_n)]⊤x_n,
contradicting the optimality of x_{n+1}. To achieve the requirement x_0 ∈ A \ B of the Frank–Wolfe method, we put x_0 = P_A(r) for a random vector r and then check that ‖P_B(x_0) − x_0‖ > 0.
If B is closed and convex, then the gradient of the function ½ dist(x, B)² is Lipschitz with constant 1 [23]. This fact plus the outcome of completing the square entails the minorization
½ dist(x, B)² ≥ ½ dist(x_n, B)² + v_n⊤(x − x_n) − ½‖x − x_n‖² = ½ dist(x_n, B)² − ½‖x − x_n − v_n‖² + ½‖v_n‖².
Hence, the MM principle implies that defining
x_{n+1} = P_A(x_n + v_n) = P_A[2x_n − P_B(x_n)]
also increases ½ dist(x, B)². This second of our two algorithms is a special case of projected gradient ascent. Its flowchart (Algorithm 1) summarizes this straightforward strategy, which is restarted from many random points x_0.
Algorithm 1 Computation of d(A, B) by Projected Gradient Ascent
Require: Projection operators P_A(x) and P_B(y), initial point x_0, maximum iterations n, and convergence criterion ϵ > 0.
1: x = x_0
2: for iter = 1:n do
3:   x_new = P_A[2x − P_B(x)]
4:   if ‖x_new − x‖ < ϵ then
5:     break
6:   else
7:     x = x_new
8:   end if
9: end for
10: Return x_new

2.5. A Homotopy Method

Although both algorithms are guaranteed to increase the objective, they both suffer from the danger of being trapped by local maxima. One obvious remedy is to launch the algorithms from different random points. A more systematic alternative is to exploit homotopy. The idea is to gradually deform both sets from the unit ball U at the origin, where d_H(U, U) = 0 is known, into the target sets A and B. In practice, we follow the solution path along the family of set pairs [tA + (1 − t)U, tB + (1 − t)U] from t = 0 to t = 1. This strategy is viable for projected gradient ascent because we can project points onto a Minkowski convex combination tC + (1 − t)D by three devices. First, it is well known that when A and B are balls with radii r_A and r_B and centers c_A and c_B, respectively, the distance dist(x, B) over x ∈ A is maximized by taking x = c_A − r_A(c_B − c_A)/‖c_B − c_A‖, unless A ⊂ B, in which case the maximum is 0 [32]. For the convenience of the reader, Proposition A1 of Appendix A proves this assertion. Second, one can exploit the projection identity P_{tS}(z) = tP_S(t⁻¹z) for any t > 0. Third, there is an effective algorithm for projecting onto a Minkowski sum C + D [33]. The idea is to alternate the minimization of ‖z − c − d‖ with respect to c ∈ C and d ∈ D. The iteration scheme c_{n+1} = P_C(z − d_n) and d_{n+1} = P_D(z − c_{n+1}) is guaranteed to converge at a linear rate when either set is strongly convex. Recall that a convex set K is strongly convex if there exists an r > 0 such that
αx + (1 − α)y + (r/2)α(1 − α)‖x − y‖² z ∈ K
for all x and y in K, α ∈ [0, 1], and unit vectors z [34]. In particular, (1 − t)U is strongly convex when t ∈ [0, 1). Furthermore, the Cartesian product of two strongly convex sets is strongly convex [35]. The homotopy method succeeds because the early sets are more rounded and the objective generates fewer local maxima. The price for better performance is iterations within iterations and an overall slower algorithm. Algorithms 2 and 3 summarize our homotopy strategy for projected gradient ascent, and a short Julia sketch of the Minkowski projection step follows Algorithm 3.
Algorithm 2 Minkowski Set Projection
Require: Projection operators P_A(x) and P_B(y), external point y, convexity constant c, maximum iterations n, and convergence criterion ϵ > 0.
1: a = b = 0
2: for iter = 1:n do
3:   a_new = c P_A[(y − b)/c]
4:   b_new = (1 − c) P_B[(y − a_new)/(1 − c)]
5:   if ‖a_new − a‖ + ‖b_new − b‖ < ϵ then
6:     break
7:   else
8:     a = a_new
9:     b = b_new
10:  end if
11: end for
12: Return a_new and b_new
Algorithm 3 Homotopy Modification of Projected Gradient Ascent
Require: Projection operators P_A(x) and P_B(x), centers c_A and c_B, and the number of homotopy points h.
1: Set d = ‖c_A − c_B‖ and x = c_A − (c_B − c_A)/d ▹ distance between two balls of radius 1
2: Let P_U be projection onto the unit ball
3: for i = 1:(h − 2) do ▹ intermediate homotopy phases
4:   c = i/(h − 1)
5:   Put P_MA = Minkowski sum projection for P_A and P_U and constant c
6:   Put P_MB = Minkowski sum projection for P_B and P_U and constant c
7:   Perform projected gradient ascent with P_MA, P_MB, and initial point x
8:   Let x be the outcome
9: end for
10: Perform projected gradient ascent with P_A and P_B and initial point x
11: Return converged value of x ▹ output of final phase of homotopy
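Here is a compact Julia rendering of Algorithm 2 for a Minkowski combination tS + (1 − t)U with 0 < t < 1 (our own sketch with illustrative names; MinkowskiNear in Appendix A.2 is the version used in the experiments).

using LinearAlgebra

ball_proj(x, r = 1.0) = norm(x) > r ? (r / norm(x)) * x : x    # projection onto the ball of radius r
box_proj(x) = clamp.(x, -1.0, 1.0)                              # S = [-1, 1]^p as an example

function minkowski_proj(projS, y, t; iters = 100, tol = 1.0e-10)
  a, b = zero(y), zero(y)
  for iter in 1:iters
    a_new = t .* projS((y .- b) ./ t)                           # project onto t * S
    b_new = (1 - t) .* ball_proj((y .- a_new) ./ (1 - t))        # project onto (1 - t) * U
    step = norm(a_new - a) + norm(b_new - b)
    a, b = a_new, b_new
    step < tol && break
  end
  return a + b                                                   # approximate projection of y
end

minkowski_proj(box_proj, [3.0, 0.5], 0.5)   # project an external point onto 0.5*S + 0.5*U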
For the Frank–Wolfe algorithm, similar homotopy tactics apply. For a Minkowski sum C + D, supp_{C+D}(v) = supp_C(v) + supp_D(v). This fact plus the identity supp_{tC}(v) = t supp_C(v) for t ≥ 0 makes it possible to carry out the homotopy method.

2.6. Convergence

Because this topic has been covered in previous studies [21,36,37,38,39], we give an abbreviated treatment here. Each algorithm is summarized by a closed algorithm map x_{n+1} ∈ M(x_n) that increases the objective f(x). The limit points of the map occur among the stationary points of f(x). By definition, a stationary point x satisfies d_v f(x) = df(x)v ≤ 0 for all tangent vectors v at x. The set of tangent vectors v is the closure of the set of points c(y − x) with y ∈ C and c > 0. This is a place where the convexity of C comes into play. Hence, x is a stationary point if and only if df(x)x ≥ df(x)y for all y ∈ C. With this distinction in mind, we state our basic theoretical findings for the Frank–Wolfe method. Homotopy is omitted in these considerations.
Proposition 1. 
The limit points of the Frank–Wolfe iterates (2) are stationary points of the objective f(x) = min_{y ∈ B} ‖x − y‖ on A. Furthermore, the bound
min_{0 ≤ k ≤ n} max_{y ∈ A} df(x_k)(y − x_k) ≤ [1/(n + 1)] [max_{x ∈ A} f(x) − f(x_0)]
holds. Thus, the stationarity condition max_{y ∈ A} df(x)(y − x) ≤ 0 is reasonable to expect at a limit point x of the Frank–Wolfe algorithm.
Here is the corresponding finding for projected gradient ascent.
Proposition 2. 
The limit points of the projected gradient ascent iterates (3) are also stationary points of the objective f(x) = min_{y ∈ B} ‖x − y‖ on A. Furthermore, the bound
min_{0 ≤ k ≤ n} ‖x_{k+1} − x_k‖² ≤ [2/(n + 1)] [max_{x ∈ A} f(x) − f(x_0)]
holds.
Although the convergence rate O(1/√n) of projected gradient ascent is slower than the corresponding slow convergence rate O(1/n) of the Frank–Wolfe method, in practice, both algorithms usually converge in fewer than 100 iterations. In the case of the Frank–Wolfe algorithm, each iterate is an extreme point. Many convex sets possess only a finite number of extreme points, and convergence to one of them is guaranteed. Unfortunately, the converged point often provides just a local maximum.

3. Results

We tested the Frank–Wolfe and projected gradient ascent algorithms on two pairs (A, B) of compact convex sets: (1) where A is the box [−1, 1] translated by 1 and B is the intersection of the unit ball and the non-negative orthant; and (2) where A is the probability simplex and B is the ℓ1 unit ball translated by 1. These examples are representative, and for the second pair one can show that d_H(A, B) = √p, where p is the dimension of the ambient space. See Proposition A2 of Appendix A. Table 1 presents our findings. The computation times are in seconds per trial across 100 random initializations and appear to scale affinely (a constant plus a linear term) in p. The columns Maximum, Mean, and Std convey summary statistics of the converged values of the Hausdorff distance. The point-cloud method generated 10⁴ random points to mimic each continuous set. Because the point-cloud method is non-iterative, a single run captured its performance. For the record, all computations were carried out on a MacBook Pro with a 2.3 GHz 8-core i9 chip and 16 GB of memory. Although the algorithms were embarrassingly parallel across trials, our Julia code is completely serial.
The random point-cloud method was not remotely competitive with projected gradient ascent in either accuracy or speed on these sample problems. It did produce approximate distances that confirmed the best results of the iterative methods. As measured by the quality of its solution, projected gradient ascent also outperformed the Frank–Wolfe algorithm. The Frank–Wolfe method was probably too aggressive, perhaps because it moved directly to an extreme point of A in computing d(A, B). On the second problem, projected gradient ascent attained the global maximum across all trials. When the standard deviation of the converged values is positive, it follows that some trials were trapped by inferior local maxima. Both iterative algorithms benefit from the homotopy heuristic, which is fully deterministic. Accordingly, the standard deviations under homotopy equaled 0. Homotopy increased the computation times by less than an order of magnitude for 11 evenly spaced homotopy points. Finally, as p increased, both problems appeared easier to solve by the iterative methods. This behavior was particularly evident when p = 1000. In contrast, the random point-cloud solutions deteriorated as p increased.

4. Discussion

The Hausdorff distance problem is intrinsically interesting, with theoretical applications throughout mathematics and practical applications in image processing. Given the non-convexity of the problem, it has not received nearly as much attention in the mathematical literature as the closest point problem. Exact values of d_H(A, B) are available in a few special cases, such as the two highlighted in our appendix. Research on fast algorithms tends to be limited to random point clouds. Infinite sets defined by mathematical formulas have been largely ignored. The current paper partially rectifies this omission and demonstrates the value of continuous optimization techniques. The Frank–Wolfe and projected gradient ascent algorithms are relatively easy to code and extremely fast, even in high dimensions. Our preliminary experiments tilt toward projected gradient ascent as the more reliable of the two options. A naive implementation of the point-cloud method is not remotely competitive with projected gradient ascent. More exhaustive studies are warranted beyond the proof-of-principle examples presented here.
The standard convergence arguments covered in Section 2.6 guarantee that all limit points of the two algorithm classes are stationary points. In practice, convergence appears much faster than the slow rates mentioned in Propositions 1 and 2. We suspect, but have not proved, that full convergence to a stationary point always occurs. This exercise would require a foray into the difficult terrain of real algebraic geometry [40]. In any event, convergence to a global maximum is not guaranteed. Fortunately, safeguards can be put in place to improve the chances of successful convergence. The homotopy method capitalizes on the exact distance between two balls. Minkowski set rounding smooths the boundary of the target sets and steers iterates in a productive direction.
The computation of the Hausdorff distance d_H(A, B) is apt to be much more challenging when either A or B is non-convex. Many sets can be represented as finite unions of compact convex sets. If A = ∪_i A_i and B = ∪_j B_j, then the computation of d_H(A, B) reduces to the computation of d(A_i, B) for each index i and d(B_j, A) for each index j. The identities d(A, B) = max_i d(A_i, B) and d(B, A) = max_j d(B_j, A) make this claim obvious. The further identity dist(x, B) = min_j dist(x, B_j) implies that
d(A, B) = max_i d(A_i, B) = max_i max_{x ∈ A_i} min_j dist(x, B_j).
In general, saddlepoint problems of this sort are hard to solve. One possible line of attack is to minorize min_j dist(x, B_j) and then maximize the minorization; a brute-force alternative is sketched below.
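The brute-force sketch below (ours, with illustrative names) exploits the displayed identities directly: projection onto the union B simply selects the nearest piece, and each d(A_i, B) is attacked with the projected gradient routine farthest of Appendix A.2 from several random starts. Because B is no longer convex, the ascent guarantee of Section 2.4 does not apply, so the result is only a heuristic estimate.

using LinearAlgebra

# Projection onto a union of pieces: return the candidate projection closest to x.
function union_projection(piece_projections)
  return x -> begin
    candidates = [P(x) for P in piece_projections]
    candidates[argmin([norm(x - c) for c in candidates])]
  end
end

# Heuristic estimate of d(A, B) = max_i d(A_i, B) for unions of convex pieces.
# `farthest` is the projected gradient routine from Appendix A.2.
function d_union(A_projections, B_projections, dim; restarts = 20)
  PB = union_projection(B_projections)
  best = 0.0
  for PAi in A_projections, trial in 1:restarts
    (far, _) = farthest(PAi, PB, PAi(randn(dim)))
    best = max(best, far)
  end
  return best
end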
The validation and implementation of this strategy will require substantial effort beyond the introduction to the problem pursued in the current paper. Let us merely add that the fast implementation of a more general Hausdorff distance algorithm will depend on the nature of the candidate sets A_i and B_j. In the plane, triangles are appealing [41,42]. It is straightforward to project onto a triangle, and a triangle by definition possesses exactly three extreme points. Furthermore, a great deal of research under the heading of finite elements has identified good algorithms for triangulating complicated regions of the plane. The software Triangle for generating two-dimensional meshes and Delaunay triangulations is surely pertinent [43]. The triangulation of surfaces forms part of the MESH software for Hausdorff distance estimation [44].
We hope this paper will provoke a greater focus on the Hausdorff distance problem. As a prototype non-convex problem, it is worthy of far more attention. Continuous optimization tools can be brought to bear on the problem and may ultimately generate more efficient algorithms than the discrete algorithms designed for finite point clouds. The fact that our algorithms scale well in higher dimensions is a plus. We would also like to highlight the illumination that the MM principle brings to the construction of new high-dimensional optimization algorithms, including those considered here. Although neglected in the past, MM may well be the single most unifying principle of algorithm construction in continuous optimization.

Funding

Research supported in part by USPHS grants GM53275 and HG006139.

Data Availability Statement

Not applicable.

Conflicts of Interest

The author declares no conflict of interest.

Appendix A

Appendix A.1. Proofs

Proposition A1. 
If A and B are balls with radii r_A and r_B and centers c_A and c_B, respectively, then dist(x, B) = ‖x − c_B‖ − r_B for x ∉ B, and d(A, B) = ‖c_A − c_B‖ + r_A − r_B, unless A ⊂ B, in which case the maximum is 0.
Proof. 
The first assertion is obvious. To maximize dist(x, B) over A, we form the Lagrangian
L(x, λ) = −½‖x − c_B‖² + ½λ(‖x − c_A‖² − r_A²).
The stationarity condition
0 = −x + c_B + λ(x − c_A)
implies that
x − c_A = (c_B − λc_A)/(1 − λ) − c_A = (c_B − c_A)/(1 − λ)
and determines λ through
r_A² = ‖c_B − c_A‖²/(1 − λ)².
It follows that
x = c_A ± r_A(c_B − c_A)/‖c_B − c_A‖
and that
dist(x, B) = ‖x − c_B‖ − r_B = ‖c_A − c_B ± r_A(c_B − c_A)/‖c_B − c_A‖‖ − r_B = |1 ∓ r_A/‖c_A − c_B‖| ‖c_A − c_B‖ − r_B.
Geometrically, max_{x ∈ A} dist(x, B) = ‖c_A − c_B‖ + r_A − r_B should hold, so x = c_A − r_A(c_B − c_A)/‖c_B − c_A‖ gives the correct sign.    □
Proposition A2. 
If A is the probability simplex in R^p and B is the ℓ1 unit ball U translated by 1, then d_H(A, B) = √p.
Proof. 
Consider first d(A, B). The maximum of dist(x, B) occurs at an extreme point of A, say x = (1, 0, …, 0)⊤ = e_1 by symmetry. On U the convex function
f(y) = ‖y + 1 − e_1‖² = y_1² + Σ_{i>1}(y_i + 1)²
achieves its minimum value when all y_i are equal for i > 1. The common value z should satisfy z ≤ 0, while y_1 can have either sign. For y_1 ∈ [0, 1], we accordingly minimize y_1² + (p − 1)(z + 1)² subject to y_1 − (p − 1)z ≤ 1. We can decrease z until y_1 − (p − 1)z = 1 and solve for z = (y_1 − 1)/(p − 1). Thus, we must minimize q(y_1) = y_1² + (p − 1)(y_1 + p − 2)²/(p − 1)² over [0, 1]. Now the stationary point −(p − 2)/p of q(y_1) falls outside [0, 1], so the minimum occurs at either 0 or 1. Because q(0) = (p − 2)²/(p − 1) and q(1) = 1 + p − 1 = p, the point 0 wins, and d(A, B) = √[(p − 2)²/(p − 1)].
Next, consider d(B, A). The maximum of dist(y, A) occurs at an extreme point of B, say y = ±e_1 + 1 by symmetry, and the distance is larger for the choice y = e_1 + 1. We make this choice. On A the convex function
f(x) = ‖x − e_1 − 1‖² = (x_1 − 2)² + Σ_{i>1}(x_i − 1)²
must now be minimized. We again assume x_i = z for i > 1. Now z ≥ 0, and the simplex constraint x_1 + (p − 1)z = 1 gives z = (1 − x_1)/(p − 1) and reduces f(x) to the quadratic
q(x_1) = (x_1 − 2)² + (p − 1)(2 − p − x_1)²/(p − 1)².
The stationary point of q(x_1) is x_1 = 1, so the minimum over [0, 1] occurs there, with q(1) = 1 + p − 1 = p. Finally, d_H(A, B) = max{√[(p − 2)²/(p − 1)], √p} = √p.    □

Appendix A.2. Julia Computer Code

using LinearAlgebra, Distances, Random, StatsBase
"""Generates a random point in the box [a,b]."""
function RandomBox(a, b)
  n = length(a)
  return a + rand(n) .* (b - a)
end
"""Generates a random point in a Euclidean ball."""
function RandomBall(radius, center)
  n = length(center)
  x = randn(n)
  x = x / norm(x)
  r = rand()^(1 / n)
  return (radius * r) * x + center
end
"""Generates a random point in the intersection of a ball centered
at the origin and the nonnegative orthant."""
function RandomBallOrthant(radius, n)
  x = RandomBall(radius, zeros(n))
  return abs.(x)
end
"""Generates a random point in the probability simplex."""
function RandomSimplex(n)
  x = -log.(rand(n))
  return x / sum(x)
end
"""Generates a random point in an L1 ball."""
function RandomL1Ball(radius, center)
  n = length(center)
  x = -log.(rand(n))
  x = x / sum(x)
  for i = 1:n
    if rand() < 1 / 2
      x[i] = - x[i]
    end
  end
  r = rand()^(1 / n)
  return (radius * r) * x + center
end
"""Computes the Hausdorff distance between the point sets A and B."""
function hausdorff(A, B)
  D = pairwise(Euclidean(), A, B)
  dAB = maximum(minimum(D, dims = 2))
  dBA = maximum(minimum(D, dims = 1))
  return max(dAB, dBA)
end
"""Projects the point y onto a re-centered set."""
function RecenterProjection(Proj, y::Vector{T}, c::Vector{T}) where T <: Real
  return Proj(y - c) + c # set is translated by c
end
"""Projects the point y onto a scaled set."""
function ScaleProjection(Proj, y::Vector{T}, s::T) where T <: Real
  return s * Proj(y / s) # s > 0 is the scaling factor
end
"""Projects the point y onto the closed ball with radius r."""
function BallProjection(y::Vector{T}, r = one(T)) where T <: Real
#
  distance = norm(y)
  if distance > r
    return (r / distance) * y
  else
    return y
  end
end
"""Projects the point y onto the closed box with bounds a and b."""
function BoxProjection(y::Vector{T}, a = -ones(T, length(y)),
  b = ones(T, length(y))) where T <: Real
#
  return clamp.(y, a, b)
end
"""Projects the point y onto the simplex {x | x >= 0, sum(x) = r}."""
function SimplexProjection(y::Vector{T}, r = one(T)) where T <: Real
#
  n = length(y)
  z = sort(y, rev = true)
  (s, lambda) = (zero(T), zero(T))
  for i = 1:n
    s = s + z[i]
    lambda = (s - r) / i
    if i < n && lambda < z[i] && lambda >= z[i + 1]
      break
    end
  end
  return max.(y .- lambda, zero(T))
end
"""Projects the point y onto the ell_1 ball with radius r."""
function L1BallProjection(y::Vector{T}, r = one(T)) where T <: Real
#
  p = abs.(y)
  if norm(p, 1) <= r
    return y
  else
    x = SimplexProjection(p, r)
    return sign.(y) .* x
  end
end
"""Projects the point y onto the intersection of the ball of
radius r and the nonnegative orthant."""
function BallAndOrthantProjection(y::Vector{T}, r = one(T)) where T <: Real
#
  x = copy(y)
  x = max.(x, zero(T)) # project onto orthant
  return (r / max(norm(x), r)) .* x
end
"""Finds the support point for y on the inflated unit ball."""
function BallSupp(y::Vector{T}, r = one(T)) where T <: Real
#
  return (r / norm(y)) * y
end
"""Finds the support point for y on the box [a, b]."""
function BoxSupp(y::Vector{T}, a = -ones(T, length(y)),
  b = ones(T, length(y))) where T <: Real
#
  n = length(y)
  x = zeros(T, n)
  for i = 1:n
    if y[i] > zero(T)
      x[i] = b[i]
    elseif y[i] < zero(T)
      x[i] = a[i]
    else
      x[i] = (a[i] + b[i]) / 2
    end
  end
  return x
end
"""Finds the support point for y on the simplex {x | x >= 0, sum(x) = r}."""
function SimplexSupp(y::Vector{T}, r = one(T)) where T <: Real
#
  x = zeros(T, length(y))
  (v, m) = findmax(y)
  x[m] = r
  return x
end
"""Finds the support point for y on the L1 ball."""
function L1BallSupp(y::Vector{T}, r = one(T)) where T <: Real
#
  x = zeros(T, length(y))
  (v, m) = findmax(abs, y)
  x[m] = sign(y[m]) * r
  return x
end
"""Finds the support point for y on the intersection of the ball of
radius r and the nonnegative orthant."""
function BallAndOrthantSupp(y::Vector{T}, r = one(T)) where T <: Real
#
  x = max.(y, zero(T))
  if sum(x) <= zero(T)
    return zeros(T, length(y))
  else
    return (r / norm(x)) * x
  end
end
"""Projects the point y onto the Minkowski rounded set
R = c * S + (1 - c) * B. Here B is the unit ball, Proj
is projection onto S, and Proj_R(y) = a + b."""
function MinkowskiNear(Proj, y, c, conv)
#
  n = length(y)
  (aold, bold) = (zeros(n), zeros(n))
  (anew, bnew) = (zeros(n), zeros(n))
  for iter = 1:100
    anew = c .* Proj((y - bold) ./ c) # project onto c * S
    bnew = (1 - c) .* BallProjection((y - anew) ./ (1 - c))
    if norm(aold - anew) + norm(bold - bnew) < conv
      break
    else
      @. aold = anew
      @. bold = bnew
    end
  end
  return anew + bnew
end
"""Finds the farthest point on A from B by Frank-Wolfe."""
function FrankWolfe(SuppA, PB, x0)
  (xold, xnew) = (copy(x0), similar(x0))
  for iter = 1:100
    xnew = SuppA(xold - PB(xold))
    if norm(xnew - xold) < 1.0e-10
      break
    else
      xold .= xnew
    end
  end
  far = norm(xnew - PB(xnew))
  return (far, xnew)
end
"""Finds the farthest point on A from B by projected gradient ascent."""
function farthest(PA, PB, x0)
  (xold, xnew) = (copy(x0),copy(x0))
  for iter = 1:100
    xnew = PA(2xold - PB(xold))
    if norm(xnew - xold) < 1.0e-10
      break
    else
      xold .= xnew
    end
  end
  far = norm(xnew - PB(xnew))
  return (far, xnew)
end
"""Finds the farthest point on A from B by homotopy."""
function farthest_homotopy(PA, PB, SA, CenterA, CenterB, x0, n, method)
  x = BallProjection(x0)
  (far, homotopy_points, conv) = (0.0, 10, 1.0e-10)
  for iter = 0:homotopy_points
    if iter == 0 # ball to ball
      d = norm(CenterA - CenterB)
      (far, x) = (d, (1 + 1 / d) * CenterA - CenterB / d)
    elseif iter == homotopy_points # d(A, B)
      if method == "proj grad"
        (far, x) = farthest(PA, PB, x)
      elseif method == "Frank-Wolfe"
        (far, x) = FrankWolfe(SA, PB, x)
      end
    else # intermediate case
      t = iter / homotopy_points
      PMB(z) = MinkowskiNear(PB, z, t, conv)
      if method == "proj grad"
        PMA(z) = MinkowskiNear(PA, z, t, conv)
        (far, x) = farthest(PMA, PMB, x)
      elseif method == "Frank-Wolfe"
        SM(z) = SA(t * z) + BallSupp((1 - t) * z)
        (far, x) = FrankWolfe(SM, PMB, x)
      end
    end
  end
  return (far, x)
end
"""Orchestrates Hausdorff distance estimation."""
function master(ProjA, ProjB, SuppA, SuppB, CenterA, CenterB, method,
  homotopy, n, trials, io)
#
  (count, tries, optimum, obj) = (0, 100, 0.0, zeros(trials))
  x0 = zeros(n)
  PA(z) = RecenterProjection(ProjA, z, CenterA)
  PB(z) = RecenterProjection(ProjB, z, CenterB)
  for trial = 1:trials
    success = false
    for i = 1:tries # find a point in A \ B
      x0 = PA(randn(n))
      if norm(PB(x0) - x0) > 1.0e-10
        success = true
        break
      end
    end
    if homotopy # solve for d(A, B)
      (objA, xA) = farthest_homotopy(PA, PB, SuppA, CenterA, CenterB,
        x0, n, method)
    else
      if method == "proj grad" && success # solve for d(A, B)
        (objA, xA) = farthest(PA, PB, x0)
      elseif method == "Frank-Wolfe" && success
        (objA, xA) = FrankWolfe(SuppA, PB, x0)
      else
        objA = 0.0
      end
    end
    success = false
    for i = 1:tries # find point in B \ A
      x0 = PB(randn(n))
      if norm(PA(x0) - x0) > 1.0e-10
        success = true
        break
      end
    end
    if homotopy # solve for d(B, A)
      (objB, xB) = farthest_homotopy(PB, PA, SuppB, CenterB, CenterA,
         x0, n, method)
    else
      if method == "proj grad" && success
        (objB, xB,) = farthest(PB, PA, x0)
      elseif method == "Frank-Wolfe" && success
        (objB, xB) = FrankWolfe(SuppB, PA, x0)
      else
        objB = 0.0
      end
    end
    obj[trial] = max(objA, objB) # Hausdorff distance
    if obj[trial] > optimum + 10.0e-10 # update count of maximum distance
      count = 1
      optimum = obj[trial]
    elseif obj[trial] > optimum - 10.0e-8
      count = count + 1
    end
  end
  (avg, stdev) = (mean(obj), std(obj))
  if stdev < 1.0e-10 stdev = 0.0 end
  return (fraction = count / trials, optimum, avg, stdev)
end
outfile = "Hausdorff.out";
io = open(outfile, "w");
trials = 100;
points = 10000
println(io,"Set Pair"," & ","p"," & ","Method"," & ","Homotopy"," & ",
  "Maximum"," & ","Mean"," & ","Std"," &  ","Seconds"," \\ ")
for n in [2, 3, 10, 100, 1000]
  for i = 1:2
    if i == 1
      CenterA = ones(n)
      CenterB = zeros(n)
      ProjA = BoxProjection
      ProjB = BallAndOrthantProjection
      SuppA = BoxSupp
      SuppB = BallAndOrthantSupp
      title = "dH(box, ball and orthant)"
    elseif i == 2
      CenterA = zeros(n)
      CenterB = ones(n)
      ProjA = SimplexProjection
      ProjB = L1BallProjection
      SuppA = SimplexSupp
      SuppB = L1BallSupp
      title = "dH(simplex, L1 ball)"
    end
#
    (method, homotopy) = ("proj grad", false);
    Random.seed!(1234)
    time = @elapsed (fraction, optimum, avg, stdev) = master(ProjA,
      ProjB, SuppA, SuppB, CenterA, CenterB, method, homotopy, n,
      trials, io)
    println(io,title," & ",n," & ",method," & ",homotopy," & ",
        round(optimum, sigdigits=5)," & ",round(avg, sigdigits=5)," & ",
        round(stdev, sigdigits=5)," & ",round(time/trials, sigdigits=3)," \\ ")
#
    (method, homotopy) = ("proj grad", true);
    Random.seed!(1234)
    time = @elapsed (fraction, optimum, avg, stdev) = master(ProjA,
      ProjB, SuppA, SuppB, CenterA, CenterB, method, homotopy, n,
      trials, io)
    println(io,title," & ",n," & ",method," & ",homotopy," & ",
      round(optimum, sigdigits=5)," & ",round(avg, sigdigits=5)," & ",
      round(stdev, sigdigits=5)," & ",round(time/trials,sigdigits=3)," \\ ")
#
    (method, homotopy) = ("Frank-Wolfe", false);
    Random.seed!(1234)
    time = @elapsed (fraction, optimum, avg, stdev) = master(ProjA,
      ProjB, SuppA, SuppB, CenterA, CenterB, method, homotopy, n,
      trials, io)
    println(io,title," & ",n," & ",method," & ",homotopy," & ",
      round(optimum, sigdigits=5)," & ",round(avg, sigdigits=5)," & ",
      round(stdev, sigdigits=5)," & ",round(time/trials,sigdigits=3)," \\ ")
#
    (method, homotopy) = ("Frank-Wolfe", true);
    Random.seed!(1234)
    time = @elapsed (fraction, optimum, avg, stdev) = master(ProjA,
      ProjB, SuppA, SuppB, CenterA, CenterB, method, homotopy, n,
      trials, io)
    println(io,title," & ",n," & ",method," & ",homotopy," & ",
      round(optimum, sigdigits=5)," & ",round(avg, sigdigits=5)," & ",
      round(stdev, sigdigits=5)," & ",round(time/trials,sigdigits=3)," \\ ")
#
    (method, homotopy) = ("point cloud", false);
    Random.seed!(1234)
    points = 10000
    A = zeros(n, points)
    B = zeros(n, points)
    if i == 1
      (a, b) = (ones(n), 2 * ones(n))
      for j = 1:points
        A[:, j] = RandomBox(a, b)
        B[:, j] = RandomBallOrthant(1.0, n)
      end
    else
      for j = 1:points
        A[:, j] = RandomSimplex(n)
        B[:, j] = RandomL1Ball(1.0, ones(n))
      end
    end
    time = @elapsed optimum = hausdorff(A, B)
    println(io,title," & ",n," & ",method," & ",homotopy," & ",
      round(optimum, sigdigits=5)," & ",round(optimum, sigdigits=5)," & ",
      round(0.0, sigdigits=5)," & ",round(time, sigdigits=3)," \\ ")
  end
end
close(io)

References

  1. Aubin, J.-P. Applied Abstract Analysis; Wiley: New York, NY, USA, 1977.
  2. Munkres, J. Topology, 2nd ed.; Prentice Hall: Upper Saddle River, NJ, USA, 1999.
  3. Conci, A.; Kubrusly, C.S. Distance between sets: A survey. Adv. Math. Sci. Appl. 2017, 26, 1–18.
  4. Ortigosa, R.; Martínez-Frutos, J.; Mora-Corral, C.; Pedregal, P.; Periago, F. Optimal control of soft materials using a Hausdorff distance functional. SIAM J. Control. Optim. 2021, 59, 393–416.
  5. Sendov, B. Some questions of the theory of approximations of functions and sets in the Hausdorff metric. Russ. Math. Surv. 1969, 24, 143–183.
  6. Cornean, H.D.; Purice, R. On the regularity of the Hausdorff distance between spectra of perturbed magnetic Hamiltonians. In Spectral Analysis of Quantum Hamiltonians: Spectral Days 2010; Springer: Berlin/Heidelberg, Germany, 2012; pp. 55–66.
  7. Kumar, K.S.; Manigandan, T.; Chitra, D.; Murali, L. Object recognition using Hausdorff distance for multimedia applications. Multimed. Tools Appl. 2020, 79, 4099–4114.
  8. Huttenlocher, D.P.; Klanderman, G.A.; Rucklidge, W.J. Comparing images using the Hausdorff distance. IEEE Trans. Pattern Anal. Mach. Intell. 1993, 15, 850–863.
  9. Lertchuwongsa, N.; Gouiffes, M.; Zavidovique, B. Enhancing a disparity map by color segmentation. Integr.-Comput.-Aided Eng. 2012, 19, 381–397.
  10. Barnsley, M.F.; Massopust, P.; Strickl, H.; Sloan, A.D. Fractal modeling of biological structures. Ann. N. Y. Acad. Sci. 1987, 504, 179–194.
  11. Aulbach, B.; Rasmussen, M.; Siegmund, S. Approximation of attractors of nonautonomous dynamical systems. Discret. Contin. Dyn. Syst. Ser. B 2005, 5, 215–238.
  12. Dubuisson, M.-P.; Jain, A.K. A modified Hausdorff distance for object matching. In Proceedings of the 12th International Conference on Pattern Recognition, Jerusalem, Israel, 9–13 October 1994; Volume 1, pp. 566–568.
  13. Kim, I.-S.; McLean, W. Computing the Hausdorff distance between two sets of parametric curves. Commun. Korean Math. Soc. 2013, 28, 833–850.
  14. Rote, G. Computing the minimum Hausdorff distance between two point sets on a line under translation. Inf. Process. Lett. 1991, 38, 123–127.
  15. Taha, A.A.; Hanbury, A. An efficient algorithm for calculating the exact Hausdorff distance. IEEE Trans. Pattern Anal. Mach. Intell. 2015, 37, 2153–2163.
  16. Zhang, D.; He, F.; Han, S.; Zou, L.; Wu, Y.; Chen, Y. An efficient approach to directly compute the exact Hausdorff distance for 3D point sets. Integr. Comput.-Aided Eng. 2017, 24, 261–277.
  17. Atallah, M.J. A linear time algorithm for the Hausdorff distance between convex polygons. Inf. Process. Lett. 1983, 17, 207–209.
  18. Belogay, E.; Cabrelli, C.; Molter, U.; Shonkwiler, R. Calculating the Hausdorff distance between curves. Inf. Process. Lett. 1997, 64, 17–22.
  19. Elber, G.; Grandine, T. Hausdorff and minimal distances between parametric freeforms in R2 and R3. In Proceedings of the International Conference on Geometric Modeling and Processing, Hangzhou, China, 23–25 April 2008; pp. 191–204.
  20. Bauschke, H.H.; Combettes, P.L. Convex Analysis and Monotone Operator Theory in Hilbert Spaces; Springer: Berlin/Heidelberg, Germany, 2017.
  21. Beck, A. First-Order Methods in Optimization; SIAM: Philadelphia, PA, USA, 2017.
  22. Keys, K.L.; Zhou, H.; Lange, K. Proximal distance algorithms: Theory and practice. J. Mach. Learn. Res. 2019, 20, 2384–2421.
  23. Lange, K. MM Optimization Algorithms; SIAM: Philadelphia, PA, USA, 2016.
  24. Bauer, H. Minimalstellen von funktionen und extremalpunkte. Arch. Math. 1958, 9, 389–393.
  25. Clarke, F. Functional Analysis, Calculus of Variations and Optimal Control; Springer: New York, NY, USA, 2013.
  26. Frank, M.; Wolfe, P. An algorithm for quadratic programming. Nav. Res. Logist. Q. 1956, 3, 95–110.
  27. Jaggi, M. Revisiting Frank-Wolfe: Projection-free sparse convex optimization. In Proceedings of the International Conference on Machine Learning, PMLR, Atlanta, Georgia, 17–19 June 2013; pp. 427–435.
  28. Parikh, N.; Boyd, S. Proximal algorithms. Found. Trends Optim. 2014, 1, 127–239.
  29. Zhou, H.; Lange, K. On the bumpy road to the dominant mode. Scand. J. Stat. 2009, 37, 612–631.
  30. Hunter, D.R.; Lange, K. A tutorial on MM algorithms. Am. Stat. 2004, 58, 30–37.
  31. McLachlan, G.J.; Krishnan, T. The EM Algorithm and Extensions, 2nd ed.; Wiley: Hoboken, NJ, USA, 2008.
  32. Marošević, T. The Hausdorff distance between some sets of points. Math. Commun. 2018, 23, 247–257.
  33. Won, J.-H.; Zu, J.; Lange, K. Projection onto Minkowski sums with application to constrained learning. In Proceedings of the 36th International Conference on Machine Learning 2019, Long Beach, CA, USA, 9–15 June 2019; pp. 3642–3651.
  34. Garber, D.; Hazan, E. Faster rates for the Frank-Wolfe method over strongly-convex sets. In Proceedings of the International Conference on Machine Learning, PMLR, Lille, France, 7–9 July 2015; pp. 541–549.
  35. Majeed, S.N. On strongly E-convex sets and strongly E-convex cone sets. J. AL-Qadisiyah Comput. Sci. Math. 2019, 11, 52–59.
  36. Beck, A. Introduction to Nonlinear Optimization: Theory, Algorithms, and Applications with MATLAB; SIAM: Philadelphia, PA, USA, 2014.
  37. Bertsekas, D.P. Nonlinear Programming, 2nd ed.; Athena Scientific: Belmont, MA, USA, 1999.
  38. Lacoste-Julien, S. Convergence rate of Frank-Wolfe for non-convex objectives. arXiv 2016, arXiv:1607.00345.
  39. Lange, K. Closest Farthest Widest. 2023; unpublished.
  40. Lange, K.; Won, J.-H.; Landeros, A.; Zhou, H. Nonconvex optimization via MM algorithms: Convergence theory. In Wiley StatsRef: Statistics Reference Online; Wiley Online Library: Hoboken, NJ, USA, 2020; pp. 1–22.
  41. Bartoň, M.; Hanniel, I.; Elber, G.; Kim, M.-S. Precise Hausdorff distance computation between polygonal meshes. Comput. Aided Geom. Des. 2010, 27, 580–591.
  42. Guthe, M.; Borodin, P.; Klein, R. Fast and accurate Hausdorff distance calculation between meshes. J. WSCG 2005, 13, 41–48.
  43. Shewchuk, J.R. Triangle: Engineering a 2D quality mesh generator and Delaunay triangulator. In Applied Computational Geometry towards Geometric Engineering; Lin, M.C., Manocha, D., Eds.; Springer: Berlin/Heidelberg, Germany, 1996; pp. 203–222.
  44. Aspert, N.; Santa-Cruz, D.; Ebrahimi, T. Mesh: Measuring errors between surfaces using the Hausdorff distance. In Proceedings of the IEEE International Conference on Multimedia and Expo, Lausanne, Switzerland, 26–29 August 2002; Volume 1, pp. 705–708.
Table 1. Computation of d_H(A, B) by various methods.
Set Pair | p | Method | Homotopy | Maximum | Mean | Std | Seconds
(box, ball ∩ orthant) | 2 | proj grad | false | 1.8284 | 1.3314 | 0.40789 | 0.00163
(box, ball ∩ orthant) | 2 | proj grad | true | 1.8284 | 1.8284 | 0.0 | 0.00349
(box, ball ∩ orthant) | 2 | Frank-Wolfe | false | 0.41421 | 0.16569 | 0.20394 | 0.000564
(box, ball ∩ orthant) | 2 | Frank-Wolfe | true | 0.41421 | 0.41421 | 0.0 | 0.00101
(box, ball ∩ orthant) | 2 | point cloud | false | 1.8159 | 1.8159 | 0.0 | 0.625
(simplex, L1 ball) | 2 | proj grad | false | 1.4142 | 1.4142 | 0.0 | 0.001
(simplex, L1 ball) | 2 | proj grad | true | 1.4142 | 1.4142 | 0.0 | 0.00272
(simplex, L1 ball) | 2 | Frank-Wolfe | false | 1.4142 | 1.4142 | 0.0 | 0.000358
(simplex, L1 ball) | 2 | Frank-Wolfe | true | 1.4142 | 1.4142 | 0.0 | 0.0574
(simplex, L1 ball) | 2 | point cloud | false | 1.4142 | 1.4142 | 0.0 | 0.639
(box, ball ∩ orthant) | 3 | proj grad | false | 2.4641 | 1.6358 | 0.50531 | 7.34 × 10⁻⁵
(box, ball ∩ orthant) | 3 | proj grad | true | 2.4641 | 2.4641 | 0.0 | 0.000605
(box, ball ∩ orthant) | 3 | Frank-Wolfe | false | 0.73205 | 0.31788 | 0.25266 | 7.08 × 10⁻⁵
(box, ball ∩ orthant) | 3 | Frank-Wolfe | true | 0.73205 | 0.73205 | 0.0 | 0.000145
(box, ball ∩ orthant) | 3 | point cloud | false | 2.4221 | 2.4221 | 0.0 | 0.63
(simplex, L1 ball) | 3 | proj grad | false | 1.7321 | 1.7321 | 0.0 | 5.86 × 10⁻⁶
(simplex, L1 ball) | 3 | proj grad | true | 1.7321 | 1.7321 | 0.0 | 0.000158
(simplex, L1 ball) | 3 | Frank-Wolfe | false | 1.2247 | 1.2247 | 0.0 | 4.44 × 10⁻⁶
(simplex, L1 ball) | 3 | Frank-Wolfe | true | 1.2247 | 1.2247 | 0.0 | 0.00186
(simplex, L1 ball) | 3 | point cloud | false | 1.732 | 1.732 | 0.0 | 0.615
(box, ball ∩ orthant) | 10 | proj grad | false | 5.0 | 3.4576 | 0.73165 | 0.0001
(box, ball ∩ orthant) | 10 | proj grad | true | 5.3246 | 5.3246 | 0.0 | 0.000666
(box, ball ∩ orthant) | 10 | Frank-Wolfe | false | 2.0 | 1.2288 | 0.36583 | 9.71 × 10⁻⁵
(box, ball ∩ orthant) | 10 | Frank-Wolfe | true | 2.1623 | 2.1623 | 0.0 | 0.000183
(box, ball ∩ orthant) | 10 | point cloud | false | 4.8158 | 4.8158 | 0.0 | 0.629
(simplex, L1 ball) | 10 | proj grad | false | 3.1623 | 3.1623 | 0.0 | 7.66 × 10⁻⁶
(simplex, L1 ball) | 10 | proj grad | true | 3.1623 | 3.1623 | 0.0 | 0.000749
(simplex, L1 ball) | 10 | Frank-Wolfe | false | 2.6667 | 2.6667 | 0.0 | 7.83 × 10⁻⁶
(simplex, L1 ball) | 10 | Frank-Wolfe | true | 2.6667 | 2.6667 | 0.0 | 0.00196
(simplex, L1 ball) | 10 | point cloud | false | 3.1626 | 3.1626 | 0.0 | 0.613
(box, ball ∩ orthant) | 100 | proj grad | false | 15.248 | 13.016 | 0.69754 | 0.000621
(box, ball ∩ orthant) | 100 | proj grad | true | 19.0 | 19.0 | 0.0 | 0.00126
(box, ball ∩ orthant) | 100 | Frank-Wolfe | false | 7.124 | 6.0078 | 0.34877 | 0.00168
(box, ball ∩ orthant) | 100 | Frank-Wolfe | true | 9.0 | 9.0 | 0.0 | 0.000304
(box, ball ∩ orthant) | 100 | point cloud | false | 15.598 | 15.598 | 0.0 | 0.629
(simplex, L1 ball) | 100 | proj grad | false | 10.0 | 10.0 | 0.0 | 2.77 × 10⁻⁵
(simplex, L1 ball) | 100 | proj grad | true | 10.0 | 10.0 | 0.0 | 0.000852
(simplex, L1 ball) | 100 | Frank-Wolfe | false | 9.8494 | 9.8494 | 0.0 | 2.21 × 10⁻⁵
(simplex, L1 ball) | 100 | Frank-Wolfe | true | 9.8494 | 9.8494 | 0.0 | 0.00299
(simplex, L1 ball) | 100 | point cloud | false | 9.9493 | 9.9493 | 0.0 | 0.64
(box, ball ∩ orthant) | 1000 | proj grad | false | 45.174 | 43.698 | 0.6652 | 0.00591
(box, ball ∩ orthant) | 1000 | proj grad | true | 62.246 | 62.246 | 0.0 | 0.00875
(box, ball ∩ orthant) | 1000 | Frank-Wolfe | false | 22.087 | 21.349 | 0.3326 | 0.00256
(box, ball ∩ orthant) | 1000 | Frank-Wolfe | true | 30.623 | 30.623 | 0.0 | 0.00531
(box, ball ∩ orthant) | 1000 | point cloud | false | 48.627 | 48.627 | 0.0 | 1.13
(simplex, L1 ball) | 1000 | proj grad | false | 31.623 | 31.623 | 0.0 | 0.000377
(simplex, L1 ball) | 1000 | proj grad | true | 31.623 | 31.623 | 0.0 | 0.0113
(simplex, L1 ball) | 1000 | Frank-Wolfe | false | 31.575 | 31.575 | 0.0 | 0.000303
(simplex, L1 ball) | 1000 | Frank-Wolfe | true | 31.575 | 31.575 | 0.0 | 0.013
(simplex, L1 ball) | 1000 | point cloud | false | 31.597 | 31.597 | 0.0 | 1.11
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
