Article

Efficient Method to Solve the Monge–Kantorovich Problem Using Wavelet Analysis

by Juan Rafael Acosta-Portilla 1, Carlos González-Flores 2, Raquiel Rufino López-Martínez 3 and Armando Sánchez-Nungaray 3,*

1 Instituto de Investigaciones y Estudios Superiores Económicos y Sociales, Universidad Veracruzana, Veracruz 94294, Mexico
2 Escuela Superior de Ingeniería Mecánica y Eléctrica Zacatenco, Instituto Politécnico Nacional, Mexico City 07738, Mexico
3 Facultad de Matemáticas, Universidad Veracruzana, Veracruz 94294, Mexico
* Author to whom correspondence should be addressed.
Axioms 2023, 12(6), 555; https://doi.org/10.3390/axioms12060555
Submission received: 1 March 2023 / Revised: 23 April 2023 / Accepted: 30 April 2023 / Published: 4 June 2023
(This article belongs to the Special Issue Optimization Models and Applications)

Abstract: In this paper, we present and justify a methodology to solve the Monge–Kantorovich mass transfer problem through Haar multiresolution analysis and the wavelet transform, with the advantage of requiring a reduced number of operations. The methodology has the following steps. We apply wavelet analysis to a discretization of the cost function at level j and obtain four components: one corresponding to a low-pass filter plus three from a high-pass filter. We obtain the solution corresponding to the low-pass component at level j − 1 , denoted by μ j − 1 * , and using the information of the high-pass filter components, we get a solution at level j denoted by μ ^ j . Finally, we make a local refinement of μ ^ j and obtain the final solution μ j σ .
MSC:
42C40; 49Q20; 65T60

1. Introduction

In recent years, schemes to approximate infinite linear programs have become very important in theory. The authors of [1] showed that under suitable assumptions, the program’s optimum value can be approximated by the values of finite-dimensional linear programs, and that every accumulation point of a sequence of optimal solutions for the approximating programs is an optimal solution for the original problem. In particular, in [2] the authors studied the Monge–Kantorovich mass transfer (MT) problem on metric spaces. They gave conditions under which the MT problem is solvable and, furthermore, showed that an optimal solution can be obtained as the weak limit of a sequence of optimal solutions to suitably approximating MT problems.
Moreover, in [3], the authors presented a numerical approximation for the value of the mass transfer (MT) problem on compact metric spaces. They built a sequence of transportation problems and proved that the value of the MT problem is the limit of the optimal values of these problems; they also gave an error bound for the numerical approximation. A generalization of this approximation scheme was presented in [4,5]. The authors proposed an approximation scheme for the Monge–Kantorovich (MK) mass transfer problem on compact spaces that consists of solving a sequence of finite transport problems. The method presented in that work uses a metaheuristic algorithm inspired by scatter search in order to reduce the dimensionality of each transport problem. Finally, they provided some examples of that method.
On the other hand, the authors of [6] provided orthonormal bases for L 2 ( R n ) that have properties that are similar to those enjoyed by the classical Haar basis for L 2 ( R ) . For example, each basis consists of appropriate dilates and translates of a finite collection of “piecewise constant” functions. The construction is based on the notion of multiresolution analysis and reveals an interesting connection between the theory of compactly supported wavelet bases and the theory of self-similar tilings. Recent applications of the wavelet filter methodology have been used in various problems arising in communication systems and detection of thermal defects (see, for example, [7,8], respectively).
In [9], the authors gave a scheme to approximate the MK problem based on the symmetries of the underlying spaces. They took a Haar-type MRA constructed according to the geometry of the spaces. Thus, they applied the Haar-type MRA based on symmetries to the MK problem and obtained a sequence of transport problems that approximates the original MK problem, one for each MRA space. Note that in the case of Haar’s classical wavelet, this methodology coincides with the methods presented in [2,3].
It is important to note that various scientific problems are modeled through the Monge–Kantorovich approach; therefore, providing new efficient methodologies to approximate such problems turns out to be very useful. Among the applications of the Monge–Kantorovich problem is the use of the transport problem for the analysis of elastic image registration (see, for example, [10,11,12]). Other optimization problems related to this topic and differential equation tools can be found in recent works such as [13,14].
The main goal of this paper is to present a scheme of approximation of the MK problem based on wavelet analysis, in which we use wavelet filters to split the original problem. That is, we apply the filter to the discrete cost function at level j, which results in a cost function at level j − 1 and three components of wavelet analysis. Using the information of the cost function given by the low-pass filter, which belongs to level j − 1 , we construct μ j − 1 * , a solution of the MK problem at level j − 1 . Using the additional information of the other three components of wavelet analysis, this solution is extended to μ ^ j , a solution at level j whose projection to level j − 1 is μ j − 1 * . Finally, we make a local analysis of the solution μ ^ j to obtain an improved solution based on the type of the points of that solution (we have two types of points, defined in terms of the connectedness of the solution).
This work has three non-introductory sections. In the first of them, we present the Haar multiresolution analysis (MRA) in one and two dimensions. Next, we relate this to the absolutely continuous measures over a compact set in R 2 . We finish with the definition of the Monge–Kantorovich mass transfer problem and its relation to the MRA.
In the second section, we define a proximity criterion for the components of the support of the simple solutions of the MK problem and study in detail the following problem: given a solution μ j − 1 * at resolution level j − 1 for the MK problem, construct a feasible solution μ ^ j for the MK problem at resolution level j such that it is a refinement of the solution with less resolution.
On the other hand, in the third section we present a methodological proposal to solve the MK j problem, which can be summarized in a simple algorithm of six steps:
Step 1.
We consider a discretization of the cost function for the level j, denoted by c j .
Step 2.
We apply the wavelet transform to c j and obtain the low-pass component c j − 1 and three high-pass components, denoted by Ψ 1 , Ψ 2 and Ψ 3 , respectively.
Step 3.
Using c j − 1 and the methodology of [3,4,9], we obtain a solution μ j − 1 * for M K j − 1 associated with this cost function.
Step 4.
We classify the points of the support of the solution μ j − 1 * by the proximity criteria as points of Type I or Type II.
Step 5.
Using the solution μ j − 1 * , the information of the high-pass components and Lemma 1, we obtain a feasible solution for the level j, which is denoted by μ ^ j . This feasible solution has the property that its projection to the level j − 1 is equal to μ j − 1 * ; moreover, the support of μ ^ j is contained in the support of μ j − 1 * .
Step 6.
The classification of the points of μ j − 1 * induces a classification of the points of μ ^ j by containment in the support. The points of Type I of the solution μ ^ j are left unchanged. For the points of Type II, we apply the permutation of the solution over the two points that best improves the solution, and we repeat the process with the rest of the points.
Finally, we present a series of examples that use the proposed methodology based on wavelet analysis and compare their results with those obtained applying the methodology of [3,4,9].
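The six steps above can be sketched end to end at a toy resolution. The sketch below (a 4 × 4 cost grid, i.e., j = 2, with the uniform marginals of Section 3) is illustrative and not the paper's implementation: the coarse problem is solved by brute force over permutations, which suffices because a simple solution with uniform marginals is supported on a permutation, and Steps 4 and 6 (classification and local permutation refinement) are omitted for brevity. All function names are ours.

```python
from itertools import permutations

def haar_low(c):
    """Step 2 (low-pass part only): replace each 2x2 block of the cost
    grid by its average, producing the coarse cost c_{j-1}."""
    n = len(c) // 2
    return [[(c[2*i][2*k] + c[2*i][2*k+1] + c[2*i+1][2*k] + c[2*i+1][2*k+1]) / 4
             for k in range(n)] for i in range(n)]

def solve_exact(c):
    """Step 3: with uniform marginals a simple optimal solution is a
    permutation; brute force over permutations is enough at toy sizes."""
    n = len(c)
    return min(permutations(range(n)),
               key=lambda p: sum(c[i][p[i]] for i in range(n)))

def prolong(perm, c_fine):
    """Step 5: split each coarse support cell onto the cheaper diagonal
    pair of the corresponding 2x2 block of the fine grid."""
    support = []
    for k1, k2 in enumerate(perm):
        i, k = 2 * k1, 2 * k2
        if c_fine[i][k] + c_fine[i+1][k+1] <= c_fine[i][k+1] + c_fine[i+1][k]:
            support += [(i, k), (i + 1, k + 1)]
        else:
            support += [(i, k + 1), (i + 1, k)]
    return support

c2 = [[(i - k) ** 2 for k in range(4)] for i in range(4)]  # toy cost at level j = 2
c1 = haar_low(c2)            # Step 2
perm = solve_exact(c1)       # Step 3
support = prolong(perm, c2)  # Step 5
```

For this quadratic toy cost, the coarse solution is the identity permutation and the prolonged support is the fine diagonal, as expected for a cost minimized on x = y.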

2. Preliminaries

2.1. One-Dimensional MRA

The results of this and the following subsection are well known, and for a detailed exposition, we recommend consulting [15,16,17]. We begin by defining a general multiresolution analysis and developing the particular case of the Haar multiresolution analysis on R . Given a > 0 and b R , the dilatation operator D a and the translation operator T b are defined by
( D a f ) ( x ) = a 1 / 2 f ( a x ) and ( T b f ) ( x ) = f ( x b )
for every f L 2 ( R ) , where the latter denotes the usual Hilbert space of square integrable real functions defined on R . A multiresolution analysis (MRA) on R is a sequence of subspaces ( V j ) j Z of L 2 ( R ) such that it satisfies the following properties:
(1)
V j ⊂ V j + 1 for every j ∈ Z .
(2)
L 2 ( R ) = span ¯ ∪ j ∈ Z V j .
(3)
∩ j ∈ Z V j = { 0 } .
(4)
V j = D 2 j V 0 .
(5)
There exists a function φ ∈ L 2 , called the scaling function, such that the collection { T j φ } j ∈ Z is an orthonormal system of translates and
V 0 = span ¯ { T j φ } j ∈ Z .
We denote as χ A the characteristic function of the set A. Then, the Haar scaling function is defined by
φ ( x ) = χ [ 0 , 1 ) ( x ) .
For each pair j , k Z , we call
I j , k = [ 2 − j k , 2 − j ( k + 1 ) ) .
Hence, we define the function
φ j , k ( x ) = ( T k D 2 j φ ) ( x ) = 2 j / 2 φ ( 2 j x − k ) = 2 j / 2 χ I j , k ( x ) .
The collection { φ j , k } j , k Z is called the system of Haar scaling functions. For j 0 Z , the collection { φ j 0 , k } k Z is referred to as the system of scale j 0 Haar scaling functions. The Haar function is defined by
ψ ( x ) = χ [ 0 , 1 / 2 ) ( x ) − χ [ 1 / 2 , 1 ) ( x ) .
For each pair j , k Z , we define the function
ψ j , k ( x ) = ( T k D 2 j ψ ) ( x ) = 2 j / 2 ψ ( 2 j x − k ) = 2 j / 2 ( χ I j + 1 , 2 k − χ I j + 1 , 2 k + 1 ) ( x ) .
The collection { ψ j , k } j , k Z is referred to as the Haar system on R . For j 0 Z , the collection { ψ j 0 , k } k Z is referred to as the system of scale j 0 Haar functions. It is well known that with respect to the usual inner product · , · in L 2 ( R ) , the Haar system on R is an orthonormal system. Moreover, for each j 0 Z , the collection of scale j 0 Haar scaling functions is an orthonormal system. Thus, for each j Z , the approximation operator P j on L 2 ( R ) is defined by
( P j f ) ( x ) = ∑ k f , φ j , k φ j , k ( x ) for all f ∈ L 2 ( R ) ,
and the approximation space V j by
V j = span ¯ { φ j , k } k Z .
The collection { V j } j Z is called Haar multiresolution analysis. Similarly, we have that for each j 0 Z , the detail operator Q j on L 2 ( R ) is defined by
( Q j f ) ( x ) = ( P j + 1 f ) ( x ) − ( P j f ) ( x ) for all f ∈ L 2 ( R ) ,
and the wavelet space W j by
W j = span ¯ { ψ j , k } k Z .
Note that V j + 1 = V j ⊕ W j for all j ∈ Z . Hence, the collection
{ φ j 0 , k , ψ j , k : j j 0 , k Z }
is a complete orthonormal system on R ; this system is called the scale j 0 Haar system on R . As a consequence, the Haar system { ψ j , k } j , k Z is a complete orthonormal system on R .
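The decomposition V j + 1 = V j ⊕ W j has a familiar discrete counterpart. Below is a minimal sketch, assuming the standard orthonormal Haar filter pair (the function names are ours, not from the paper): one analysis step maps samples at scale j to scale coefficients in V j − 1 and detail coefficients in W j − 1 , and the synthesis step inverts it exactly.

```python
import math

def haar_step(samples):
    """One level of discrete Haar analysis on 2^j samples: pairwise
    sums give the scale (low-pass) coefficients, pairwise differences
    the detail (high-pass) coefficients, both scaled by 1/sqrt(2)."""
    s = 1 / math.sqrt(2)
    approx = [s * (samples[2*k] + samples[2*k + 1]) for k in range(len(samples) // 2)]
    detail = [s * (samples[2*k] - samples[2*k + 1]) for k in range(len(samples) // 2)]
    return approx, detail

def haar_inverse(approx, detail):
    """Exact reconstruction: V_j samples from V_{j-1} and W_{j-1}."""
    s = 1 / math.sqrt(2)
    out = []
    for a, d in zip(approx, detail):
        out.extend([s * (a + d), s * (a - d)])
    return out
```

A constant signal produces zero detail coefficients, reflecting that ψ has zero mean; the round trip haar_inverse(*haar_step(x)) returns x up to floating-point error.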

2.2. Two-Dimensional MRA

To obtain the Haar MRA on R 2 , we consider the Haar MRA on R defined in the previous subsection, with scaling and Haar functions φ and ψ , and from them, through a tensor product approach, we construct two-dimensional scaling and Haar functions. First, we define the four possible products:
Φ ( x , y ) = φ ( x ) φ ( y ) , Ψ ( 1 ) ( x , y ) = φ ( x ) ψ ( y ) , Ψ ( 2 ) ( x , y ) = ψ ( x ) φ ( y ) , Ψ ( 3 ) ( x , y ) = ψ ( x ) ψ ( y ) ,
which are the scaling function associated with the unit square and three Haar functions, respectively. Hence, for each j , k 1 , k 2 ∈ Z , we naturally define the scaling and Haar function systems:
Φ j , k 1 , k 2 ( x , y ) = φ j , k 1 ( x ) φ j , k 2 ( y ) , Ψ j , k 1 , k 2 ( 1 ) ( x , y ) = φ j , k 1 ( x ) ψ j , k 2 ( y ) , Ψ j , k 1 , k 2 ( 2 ) ( x , y ) = ψ j , k 1 ( x ) φ j , k 2 ( y ) , Ψ j , k 1 , k 2 ( 3 ) ( x , y ) = ψ j , k 1 ( x ) ψ j , k 2 ( y ) .
Then for j 0 Z , as in the one-dimensional case, we have that the collection
{ Φ j 0 , k 1 , k 2 : k 1 , k 2 ∈ Z } ∪ { Ψ j , k 1 , k 2 ( i ) : 1 ≤ i ≤ 3 , j ≥ j 0 , k 1 , k 2 ∈ Z }
is an orthonormal basis on L 2 ( R 2 ) . Thus, the collection
{ Ψ j , k 1 , k 2 ( i ) : 1 ≤ i ≤ 3 , j , k 1 , k 2 ∈ Z }
is an orthonormal basis on L 2 ( R 2 ) . Then for j Z and f L 2 ( R 2 ) , the approximation operator is defined by
( P j f ) ( x , y ) = ∑ k 1 ∑ k 2 f , Φ j , k 1 , k 2 Φ j , k 1 , k 2 ( x , y ) ,
and for i = 1 , 2 , 3 , the detail operators are
( Q j ( i ) f ) ( x , y ) = ∑ k 1 ∑ k 2 f , Ψ j , k 1 , k 2 ( i ) Ψ j , k 1 , k 2 ( i ) ( x , y ) .
Hence, the projection can be written as
( P j + 1 f ) ( x , y ) = ( P j f ) ( x , y ) + ( Q j ( 1 ) f ) ( x , y ) + ( Q j ( 2 ) f ) ( x , y ) + ( Q j ( 3 ) f ) ( x , y ) .
We will describe the approximation P j and detail operators Q j ( i ) from the geometric point of view. First of all, we fix some j , k 1 , k 2 Z and define the square
S ( j , k 1 , k 2 ) = I j , k 1 × I j , k 2 = [ 2 − j k 1 , 2 − j ( k 1 + 1 ) ) × [ 2 − j k 2 , 2 − j ( k 2 + 1 ) ) .
Then, we have that Φ j , k 1 , k 2 is the characteristic function of S ( j , k 1 , k 2 ) , in symbols
Φ j , k 1 , k 2 ( x , y ) = χ S ( x , y )
where S = S ( j , k 1 , k 2 ) . Therefore, for f ∈ L 2 ( R 2 ) , the operator P j acts as a discretization of f that is constant over the disjoint squares S ( j , k 1 , k 2 ) . On the other hand, we can split S ( j , k 1 , k 2 ) as follows (see Figure 1):
(i)
Into two rectangles with half the height and the same width as S ( j , k 1 , k 2 ) , namely, the sets
A 1 ( j , k 1 , k 2 ) = [ 2 − j k 1 , 2 − j ( k 1 + 1 ) ) × [ 2 − j − 1 ( 2 k 2 ) , 2 − j − 1 ( 2 k 2 + 1 ) ) and A 2 ( j , k 1 , k 2 ) = [ 2 − j k 1 , 2 − j ( k 1 + 1 ) ) × [ 2 − j − 1 ( 2 k 2 + 1 ) , 2 − j − 1 ( 2 k 2 + 2 ) ) .
(ii)
Into two rectangles with the same height and half the width as S ( j , k 1 , k 2 ) , namely,
B 1 ( j , k 1 , k 2 ) = [ 2 − j − 1 ( 2 k 1 ) , 2 − j − 1 ( 2 k 1 + 1 ) ) × [ 2 − j k 2 , 2 − j ( k 2 + 1 ) ) and B 2 ( j , k 1 , k 2 ) = [ 2 − j − 1 ( 2 k 1 + 1 ) , 2 − j − 1 ( 2 k 1 + 2 ) ) × [ 2 − j k 2 , 2 − j ( k 2 + 1 ) ) .
(iii)
Into four squares with half the side lengths of S ( j , k 1 , k 2 ) , namely,
C 1 ( j , k 1 , k 2 ) = [ 2 − j − 1 ( 2 k 1 ) , 2 − j − 1 ( 2 k 1 + 1 ) ) × [ 2 − j − 1 ( 2 k 2 ) , 2 − j − 1 ( 2 k 2 + 1 ) ) , C 2 ( j , k 1 , k 2 ) = [ 2 − j − 1 ( 2 k 1 + 1 ) , 2 − j − 1 ( 2 k 1 + 2 ) ) × [ 2 − j − 1 ( 2 k 2 ) , 2 − j − 1 ( 2 k 2 + 1 ) ) , C 3 ( j , k 1 , k 2 ) = [ 2 − j − 1 ( 2 k 1 ) , 2 − j − 1 ( 2 k 1 + 1 ) ) × [ 2 − j − 1 ( 2 k 2 + 1 ) , 2 − j − 1 ( 2 k 2 + 2 ) ) and C 4 ( j , k 1 , k 2 ) = [ 2 − j − 1 ( 2 k 1 + 1 ) , 2 − j − 1 ( 2 k 1 + 2 ) ) × [ 2 − j − 1 ( 2 k 2 + 1 ) , 2 − j − 1 ( 2 k 2 + 2 ) ) .
Hence, for i = 1 , 2 , 3 , the function Ψ j , k 1 , k 2 ( i ) is defined by
Ψ j , k 1 , k 2 ( 1 ) ( x , y ) = χ A 1 ( x , y ) − χ A 2 ( x , y ) with A l = A l ( j , k 1 , k 2 ) for l = 1 , 2 . Ψ j , k 1 , k 2 ( 2 ) ( x , y ) = χ B 1 ( x , y ) − χ B 2 ( x , y ) with B l = B l ( j , k 1 , k 2 ) for l = 1 , 2 . Ψ j , k 1 , k 2 ( 3 ) ( x , y ) = χ C 1 ∪ C 4 ( x , y ) − χ C 2 ∪ C 3 ( x , y ) with C l = C l ( j , k 1 , k 2 ) for l = 1 , 2 , 3 , 4 .
(see Figure 1). Thus, the image of f L 2 ( R 2 ) under the detail operator Q j ( i ) is a function Q j ( i ) f formed by pieces that oscillate symmetrically on each square S ( j , k 1 , k 2 ) .
Now we have the elements to define an MRA on R 2 ; to do this, we will use the notation introduced for the one-dimensional MRA on R defined in the previous subsection. For each j Z , we define
V j = V j ⊗ V j .
Then the collection { V j } j Z is the Haar multiresolution analysis on R 2 , where the dilatation and translation operators are defined by
D 2 j = D 2 j ⊗ D 2 j and T m , n = T m ⊗ T n ,
respectively. Note that we have the following relation:
V j + 1 = V j ⊕ W j ( 1 ) ⊕ W j ( 2 ) ⊕ W j ( 3 ) ,
where W j ( 1 ) = V j ⊗ W j , W j ( 2 ) = W j ⊗ V j and W j ( 3 ) = W j ⊗ W j . For more detail with respect to the Haar MRA, see [15].
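The tensor-product construction has a direct discrete analogue: one 2D analysis step splits a 2^j × 2^j array into a low-pass block and three detail blocks, one per factor space W j ( 1 ), W j ( 2 ), W j ( 3 ). The sketch below is illustrative; the sign conventions and axis assignment of the three detail blocks are our assumption, matching one common ordering.

```python
import numpy as np

def haar2d_step(c):
    """One level of 2D Haar analysis of a (2^j x 2^j) array.
    Returns the low-pass block and three detail blocks; the
    transform is orthogonal, so energy is preserved."""
    c = np.asarray(c, dtype=float)
    ee, eo = c[0::2, 0::2], c[0::2, 1::2]   # the four polyphase components
    oe, oo = c[1::2, 0::2], c[1::2, 1::2]
    low = (ee + eo + oe + oo) / 2           # V ⊗ V: block averages (scaled)
    d1  = (ee - eo + oe - oo) / 2           # oscillation along one axis
    d2  = (ee + eo - oe - oo) / 2           # oscillation along the other axis
    d3  = (ee - eo - oe + oo) / 2           # W ⊗ W: checkerboard pattern
    return low, d1, d2, d3
```

Orthogonality means the squared norms of the four blocks sum to the squared norm of the input, the discrete analogue of V j + 1 = V j ⊕ W j ( 1 ) ⊕ W j ( 2 ) ⊕ W j ( 3 ).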

2.3. Measures and MRA

In this subsection, we build on the two previous subsections and the approach of [9] to relate measures over R 2 with the Haar MRA on R 2 . The results and definitions presented in this and the following subsection can be found in the classical references [18,19]. We consider a compact subset X of R 2 and a measure μ that is absolutely continuous with respect to the Lebesgue measure λ . We call
f μ = d μ d λ
the Radon–Nikodym derivative of μ with respect to λ . By construction, it necessarily holds that f μ ∈ L 1 ( X ) . We additionally suppose that f μ ∈ L 2 ( X ) . Then, as a consequence of the Haar MRA on R 2 , we have that
‖ P j f μ − f μ ‖ 2 → 0 as j → ∞ .
Moreover, the compactness of X ensures that
‖ P j f μ − f μ ‖ 1 ≤ λ ( X ) 1 / 2 ‖ P j f μ − f μ ‖ 2 .
The above allows us to define μ j , the approximation of the measure μ at level j ∈ Z of the Haar MRA on R 2 , as the measure induced by the projection of the Radon–Nikodym derivative to level j. That is, μ j is defined by the relation
d μ j = P j f μ d λ .
We denote the expectation of a function g L ( R 2 ) with respect to λ as
E [ g ] = ∫ R 2 g d λ .
Then Theorem 4 and Corollary 5 in [9] ensure that for each g L 2 ( R 2 ) and j Z , it is fulfilled that
E [ P j g ] = E [ g ] .
Thus, μ j is absolutely continuous with respect to the Lebesgue measure. If, additionally, we suppose that μ is a measure with support on X, then ( μ j ) converges to μ in the L 1 and L 2 sense. That is, by the Riesz representation theorem, we can associate each of these measures μ j with an integrable function μ ^ j ∈ P j L 2 ( R 2 ) such that for each Lebesgue measurable set A ⊂ R 2 ,
μ j ( A ) = ∫ A μ ^ j d λ = ∫ μ ^ j χ A d λ = E [ μ ^ j χ A ] ,
in the respective convergence mode. Further, the compact support of the measures ensures that the sequence ( μ j ) converges weakly to the measure μ ; a proof can be found in [9], Theorem 7.

2.4. M-K Problem and MRA

In this subsection, we study the Monge–Kantorovich problem from the point of view of the Haar MRA on R 2 (for a detailed exposition, we recommend consulting [9]). Let X and Y be two compact subsets of R 2 . We denote by M + ( X × Y ) the family of finite measures on X × Y . Given μ M + ( X × Y ) , we denote its marginal measures on X and Y as
Π 1 μ ( E 1 ) = μ ( E 1 × Y )
and
Π 2 μ ( E 2 ) = μ ( X × E 2 )
for each μ -measurable set E 1 ⊂ X and E 2 ⊂ Y . Let c be a real function defined on X × Y , and let η 1 and η 2 be two measures defined on X and Y, respectively. The Monge–Kantorovich mass transfer problem is given as follows:
MK : minimize μ , c := ∫ c d μ subject to : Π 1 μ = η 1 , Π 2 μ = η 2 , μ ∈ M + ( X × Y ) .
A measure μ ∈ M + ( X × Y ) is said to be a feasible solution for the MK problem if it satisfies (38) and μ , c is finite. We say that the MK problem is solvable if there is a feasible solution μ * that attains the optimal value; such a μ * is called an optimal solution for (38). If, additionally, we assume that μ , η 1 and η 2 are absolutely continuous with respect to the Lebesgue measures on R 2 and R , then in a natural way we can discretize the MK problem through the Haar MRA on R 2 as follows. For j ∈ Z , we define the MK problem of level j by:
MK j : minimize μ j , c := ∫ c d μ j subject to : Π 1 μ j = η j 1 , Π 2 μ j = η j 2 , μ j ∈ M + ( X × Y ) ,
where μ j , η j 1 and η j 2 are the projections to level j of the measures μ , η 1 and η 2 , respectively, with respect to the Haar MRA.

3. Technical Results

In this section, we present a series of results that ensure the good behavior of the methodological proposal of the next section. In order to do this, we start by assuming an MK problem with cost function c = c ( x , y ) , base sets X = Y = [ 0 , 1 ] and measure restrictions η 1 = η 2 = λ | [ 0 , 1 ] . In other words, we consider the problem of transporting a uniform distribution onto a uniform one with minimum movement cost. Since in applications we work with discretized problems, as a result of applying the MRA on R 2 , our objective is to solve:
MK j : minimize ∑ i , k μ i , k j c i , k j subject to : ∑ i μ i , k j = 1 / 2 j for each k = 1 , … , 2 j ; ∑ k μ i , k j = 1 / 2 j for each i = 1 , … , 2 j ; ∑ i , k μ i , k j = 1 ; μ i , k j ≥ 0 for each i , k = 1 , … , 2 j ,
where μ i , k j is the portion of the initial mass 1 / 2 j at position I j , i of the x-axis allocated to position I j , k of the y-axis. We call the j-discrete unit square the grid formed by the squares S ( j , k 1 , k 2 ) (see (20)), dividing the set [ 0 , 1 ] × [ 0 , 1 ] into 2 j × 2 j blocks, in such a way that each one is identified with the point ( k 1 , k 2 ) . We suppose that there is a simple solution μ for (40). That is, μ is a feasible solution such that, given i 0 , k 0 ∈ { 1 , … , 2 j } with μ i 0 , k 0 ≠ 0 , it necessarily holds that μ i 0 , k = 0 for each k ≠ k 0 and μ i , k 0 = 0 for each i ≠ i 0 . Geometrically, if the measure μ is plotted as a discrete heat map on the j-discrete unit square, then no colored element in the plot has another colored element in its row or column, as can be seen in Figure 2.
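At toy sizes, MK j can be solved exactly by brute force: since the marginals are uniform, an optimal extreme point of the linear program is supported on a permutation, each support point carrying mass 1 / 2 j. The following sketch (0-based indices, our own function name) illustrates this; it is not the scatter-search method of [4,5].

```python
from itertools import permutations

def solve_mk(cost):
    """Brute-force solver for the discretized problem MK_j with uniform
    marginals: cost[i][k] = c_{i,k}^j on an n x n grid, n = 2^j.
    Returns a simple optimal solution (support with masses) and its value."""
    n = len(cost)
    best = min(permutations(range(n)),
               key=lambda p: sum(cost[i][p[i]] for i in range(n)))
    value = sum(cost[i][best[i]] for i in range(n)) / n  # mass 1/n per point
    mu = {(i, best[i]): 1 / n for i in range(n)}
    return mu, value
```

The returned measure is simple by construction: exactly one nonzero entry per row and per column, matching the definition above.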
Definition 1. 
We define a proximity criteria in the j-discrete unit square as follows: ( i , k ) is a neighbor of ( l , m ) if
| i − l | = 1 or | k − m | = 1 .
In Figure 2, we plot the support of the hypothetical simple solution μ . Hence, the neighbors of the position in the middle of the cross are those that touch the yellow stripes. Then in this example, the middle point has four neighbors.
Figure 2. Support of μ and the proximity criteria.
With this in mind, we can classify the points in support ( μ ) as follows.
Definition 2. 
We say that ( k 1 , k 2 ) ∈ support ( μ ) is a border point if k 1 or k 2 equals 0 or 1; otherwise, we call it an interior point. It is clear that a border point has at least one neighbor and at most three, whereas an interior point has at least two neighbors and at most four. Hence, we can partition support ( μ ) into two sets as follows.
The set of the points of Type I is given by
T 1 = { ( i , k ) ∈ support ( μ ) | ( i , k ) has the minimum number of neighbors } ,
and the set of the points of Type II is given by
T 2 = support ( μ ) ∖ T 1 .
Intuitively, the set T 1 is composed of well-controlled points, whereas T 2 contains the points that admit permutations between them; as we will see in the next section, the proposed algorithm permutes them. See Figure 3 and Figure 4. Naturally, since μ is a feasible solution for (40), then given elements ( k 1 , k 2 ) , ( m 1 , m 2 ) ∈ support ( μ ) and, for i = 1 , 2 , permutations σ i over { k i , m i } , the measure μ σ defined by
μ σ ( a , b ) = μ ( σ 1 ( a ) , σ 2 ( b ) )
is a feasible solution.
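One possible reading of Definitions 1 and 2 in code. This is an illustrative assumption rather than the paper's exact convention: we use 0-based indices, take border points to be those on the edge of the grid, and take Type I points to be those attaining the minimum neighbor count for their kind (one for border points, two for interior points).

```python
def classify_support(support, n):
    """Partition the support of a simple solution into Type I and
    Type II points. `support` is a set of (i, k) pairs on an n x n
    grid with one point per row and per column."""
    def neighbours(p):
        i, k = p
        # Definition 1: q is a neighbour of p if |i - l| = 1 or |k - m| = 1.
        return sum(1 for q in support
                   if q != p and (abs(q[0] - i) == 1 or abs(q[1] - k) == 1))

    def is_border(p):
        return p[0] in (0, n - 1) or p[1] in (0, n - 1)

    type1, type2 = set(), set()
    for p in support:
        minimum = 1 if is_border(p) else 2
        (type1 if neighbours(p) == minimum else type2).add(p)
    return type1, type2
```

On the diagonal support of the identity permutation every point attains its minimum neighbor count, so all points are of Type I; supports with crossings produce Type II points.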
Figure 3. Classification of the points in support ( μ ) : Type I points.
Figure 4. Classification of the points in support ( μ ) : Type II points.

Refining Projections

In this subsection, we study the problem of refining an optimal solution μ j − 1 * for (40) at level j − 1 ∈ Z into a feasible solution μ ^ j for the next level j. Let μ j − 1 * be an optimal solution for level j − 1 . Then we are looking for μ ^ j such that:
( 1 ) μ ^ j is a feasible solution . ( 2 ) P j − 1 μ ^ j = μ j − 1 * .
As described in the previous section, the measure μ ^ j ∈ P j L 2 ( R 2 ) can be decomposed as
μ ^ j = μ j − 1 * + ν 1 + ν 2 + ν 3
where
ν i ( x , y ) = ( Q j ( i ) μ ^ j ) ( x , y ) = ∑ k 1 ∑ k 2 μ ^ j , Ψ j , k 1 , k 2 ( i ) Ψ j , k 1 , k 2 ( i ) ( x , y ) .
From the geometric point of view, the projections ν i are formed from differences of characteristic functions, as we mentioned in Section 2.2. So we have the following result:
Lemma 1. 
Let j ∈ Z and μ j − 1 * be an optimal solution for (40) at level j − 1 . Then for each positive measure μ ^ j ∈ P j L 2 ( R 2 ) such that P j − 1 μ ^ j = μ j − 1 * and μ ^ j = μ j − 1 * + ν 1 + ν 2 + ν 3 , it necessarily holds that
μ j − 1 * ( A ) = 0 implies ν i ( A ) = 0
for each Lebesgue measurable set A ⊂ R 2 and each i = 1 , 2 , 3 . Therefore, the support of μ ^ j is contained in the support of μ j − 1 * .
Proof. 
We only give the proof for the case i = 1 , since the other two are very similar. To simplify the notation, we use the symbols μ and ν as measures or functions in the respective subspace of L 2 ( R 2 ) . Since, once a level j ∈ Z is set, all the measures in question are constant on the pairs of rectangles dividing S ( j , k 1 , k 2 ) , as shown in Section 2.2, it is enough to prove that (65) is valid on these rectangles. Let
A 1 = A 1 ( j , k 1 , k 2 ) = [ 2 − j k 1 , 2 − j ( k 1 + 1 ) ) × [ 2 − j − 1 ( 2 k 2 ) , 2 − j − 1 ( 2 k 2 + 1 ) )
and
A 2 = A 2 ( j , k 1 , k 2 ) = [ 2 − j k 1 , 2 − j ( k 1 + 1 ) ) × [ 2 − j − 1 ( 2 k 2 + 1 ) , 2 − j − 1 ( 2 k 2 + 2 ) ) ,
as in (22). Then for l = 1 , 2 , we have that
μ ^ j ( A l ) = μ j − 1 * ( A l ) + ∑ i = 1 3 ν i ( A l ) = E [ μ j − 1 * χ A l ] + ∑ i = 1 3 E [ ν i χ A l ] .
Now we will calculate each one of the expectations separately. By (17), (18) and (25), we have that
μ j 1 * ( x , y ) = k 1 k 2 μ j 1 * , Φ j , k 1 , k 2 Φ j , k 1 , k 2 ( x , y )
and
ν 1 ( x , y ) = ∑ k 1 ∑ k 2 ν 1 , Ψ j , k 1 , k 2 ( 1 ) Ψ j , k 1 , k 2 ( 1 ) ( x , y ) = ∑ k 1 ∑ k 2 ν 1 , Ψ j , k 1 , k 2 ( 1 ) [ χ A 1 ( j , k 1 , k 2 ) ( x , y ) − χ A 2 ( j , k 1 , k 2 ) ( x , y ) ] .
Then for l = 1 and by (15), (16) and (22), we have that
E [ μ j − 1 * χ A 1 ] = E [ χ A 1 ∑ k 1 ∑ k 2 μ j − 1 * , Φ j , k 1 , k 2 Φ j , k 1 , k 2 ] = μ j − 1 * , Φ j , k 1 , k 2 E [ χ A 1 Φ j , k 1 , k 2 ] = d E [ χ A 1 ] = d · 2 − ( 2 j + 1 )
where d = μ j − 1 * , Φ j , k 1 , k 2 and
E [ ν 1 χ A 1 ] = E [ χ A 1 ∑ k 1 ∑ k 2 ν 1 , Ψ j , k 1 , k 2 ( 1 ) ( χ A 1 ( j , k 1 , k 2 ) − χ A 2 ( j , k 1 , k 2 ) ) ] = ν 1 , Ψ j , k 1 , k 2 ( 1 ) E [ χ A 1 ] = c E [ χ A 1 ] = c · 2 − ( 2 j + 1 )
where c = ν 1 , Ψ j , k 1 , k 2 ( 1 ) . Similarly, but using (23) and (24), we can prove that
E [ ν 2 χ A 1 ] = 0
and
E [ ν 3 χ A 1 ] = 0 .
Then
μ ^ j ( A 1 ) = d · 2 − ( 2 j + 1 ) + c · 2 − ( 2 j + 1 ) = μ j − 1 * ( A 1 ) + ν 1 ( A 1 ) .
Now, we will make an analogous argument for the case l = 2 . Hence,
E [ μ j − 1 * χ A 2 ] = E [ χ A 2 ∑ k 1 ∑ k 2 μ j − 1 * , Φ j , k 1 , k 2 Φ j , k 1 , k 2 ] = μ j − 1 * , Φ j , k 1 , k 2 E [ χ A 2 Φ j , k 1 , k 2 ] = d E [ χ A 2 ] = d · 2 − ( 2 j + 1 )
and
E [ ν 1 χ A 2 ] = E [ χ A 2 ∑ k 1 ∑ k 2 ν 1 , Ψ j , k 1 , k 2 ( 1 ) ( χ A 1 ( j , k 1 , k 2 ) − χ A 2 ( j , k 1 , k 2 ) ) ] = − ν 1 , Ψ j , k 1 , k 2 ( 1 ) E [ χ A 2 ] = − c E [ χ A 2 ] = − c · 2 − ( 2 j + 1 ) .
Thus,
μ ^ j ( A 2 ) = d · 2 − ( 2 j + 1 ) − c · 2 − ( 2 j + 1 ) = μ j − 1 * ( A 2 ) + ν 1 ( A 2 )
and by (54), (55), (59) and (60), we have that
μ j − 1 * ( A 1 ) = μ j − 1 * ( A 2 )
and
ν 1 ( A 1 ) = − ν 1 ( A 2 ) .
Therefore, it follows from (58), (61), (62), (63) and the fact that μ ^ j is a positive measure that
| ν 1 ( A 1 ) | = | ν 1 ( A 2 ) | ≤ | μ j − 1 * ( A 1 ) | = | μ j − 1 * ( A 2 ) | ,
from which we conclude that μ j − 1 * ( A ) = 0 implies ν 1 ( A ) = 0 . Similarly, it can be shown that μ j − 1 * ( A ) = 0 implies ν i ( A ) = 0 for i = 2 , 3 ; analogous proofs are carried out, with the difference that for i = 2 the sets to be considered are B 1 and B 2 as in (23), whereas C 1 ∪ C 4 and C 2 ∪ C 3 as in (24) are the respective sets when i = 3 . □
We have proved that if one intends to go back to the preimage of the projection P from level j − 1 to level j, the support at level j − 1 delimits that at level j. Now, we will prove that for every measure μ ^ j that satisfies (45) and (46), it necessarily holds that ν 1 = ν 2 = 0 .
Lemma 2. 
Let j ∈ Z and μ j − 1 * be an optimal simple solution for (40) at level j − 1 . Then for each feasible solution μ ^ j ∈ P j L 2 ( R 2 ) to (40) at level j such that P j − 1 μ ^ j = μ j − 1 * and μ ^ j = μ j − 1 * + ν 1 + ν 2 + ν 3 , it necessarily holds that
ν 1 = ν 2 = 0 .
Proof. 
In order to perform this proof, we use the restrictions of the MK problem (40), which in turn, are related to the marginal measures. Therefore, we will only complete the proof for one of the projections, since the other is analogous. From the linearity of the Radon–Nikodym derivative, it follows that
η j 1 = Π 1 μ ^ j = Π 1 ( μ j − 1 * + ν 1 + ν 2 + ν 3 ) = Π 1 μ j − 1 * + Π 1 ν 1 + Π 1 ν 2 + Π 1 ν 3 .
Let k ∈ Z and I j + 1 , 2 k = [ 2 − j − 1 ( 2 k ) , 2 − j − 1 ( 2 k + 1 ) ) . Thus, by (39) and (66), we have that
Π 1 μ ^ j ( I j + 1 , 2 k ) = η j 1 ( I j + 1 , 2 k ) = Π 1 μ j − 1 * ( I j + 1 , 2 k ) = α .
That is, we are evaluating feasible solutions on rectangles whose height is half the side of the squares with which they are discretized at level j of the Haar MRA. Now, we will develop (66) in detail, evaluated on I j , k . Since μ j − 1 * is a simple solution, we call l ∈ Z the unique number such that μ j − 1 * ( S ( j , k , l ) ) > 0 , where S ( j , k 1 , k 2 ) is defined as in (20). With the aim of simplifying the notation, we define I = I j + 1 , 2 k × R . By (16) and (25), we have that
Π 1 ν 1 ( I j + 1 , 2 k ) = ν 1 ( I ) = E [ χ I ν 1 ] = E [ χ I ∑ k 1 ∑ k 2 ν 1 , Ψ j , k 1 , k 2 ( 1 ) ( χ A 1 ( j , k 1 , k 2 ) − χ A 2 ( j , k 1 , k 2 ) ) ] = E [ χ I ∑ k 2 ν 1 , Ψ j , k , k 2 ( 1 ) ( χ A 1 ( j , k , k 2 ) − χ A 2 ( j , k , k 2 ) ) ] = E [ χ I ν 1 , Ψ j , k , l ( 1 ) ( χ A 1 ( j , k , l ) − χ A 2 ( j , k , l ) ) ] .
By the definition of I, one of the two terms in the last expectation necessarily vanishes. Hence, it is fulfilled that
Π 1 ν 1 ( I j + 1 , 2 k ) = ν 1 , Ψ j , k , l ( 1 ) E [ χ I ] = 2 − ( 2 j + 1 ) ν 1 , Ψ j , k , l ( 1 ) .
By a similar argument, it can be proved that
Π 1 ν 2 ( I j + 1 , 2 k ) = 0
and
Π 1 ν 3 ( I j + 1 , 2 k ) = 0 .
Then from (66) to (71), it follows that
α = α + 2 − ( 2 j + 1 ) ν 1 , Ψ j , k , l ( 1 ) + 0 + 0 .
Hence,
ν 1 , Ψ j , k , l ( 1 ) = 0 .
Therefore, ν 1 = 0 . In a similar way, we can prove that ν 2 = 0 . □
Suppose we have a simple optimal solution μ j − 1 * for the MK problem discretized through the Haar MRA at level j − 1 ∈ Z and that we are interested in refining that solution to the next level j. By Lemmas 1 and 2, any μ ^ j that satisfies (45) has its support contained in the support of μ j − 1 * and has components ν 1 = ν 2 = 0 . Then the problem of constructing a feasible solution μ ^ j that refines μ j − 1 * is reduced to constructing
μ ^ j ( x , y ) = μ j − 1 * ( x , y ) + ν 3 ( x , y ) = μ j − 1 * ( x , y ) + ∑ k 1 ∑ k 2 μ ^ j , Ψ j , k 1 , k 2 ( 3 ) Ψ j , k 1 , k 2 ( 3 ) ( x , y ) ,
which is equivalent to choosing for each k 1 , k 2 ∈ Z a value ν k 1 , k 2 3 such that
μ ^ k 1 , k 2 j = μ k 1 , k 2 * , j − 1 + ν k 1 , k 2 3 .
By Lemma 1, for each k 1 , k 2 ∈ Z it is fulfilled that
| ν k 1 , k 2 3 | ≤ μ k 1 , k 2 * , j − 1 .
Therefore, the choice of ν k 1 , k 2 3 is restricted to a compact collection, and since μ ^ j is a solution of the linear program (40), then
| ν k 1 , k 2 3 | = μ k 1 , k 2 * , j − 1 .
Thus, the sign of ν k 1 , k 2 3 must be chosen so that it minimizes ν k 1 , k 2 3 c k 1 , k 2 j . That is,
ν k 1 , k 2 3 c k 1 , k 2 j ≤ 0 .
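The conclusion above yields a concrete refinement rule: since only the diagonal detail ν 3 can be nonzero and it saturates the bound, each coarse support cell splits its mass onto whichever diagonal pair of its 2 × 2 fine block is cheaper. A sketch, with 0-based indices and our own function name:

```python
def refine(mu_coarse, cost_fine):
    """Prolong a simple coarse solution to the next level: by Lemmas 1
    and 2 only the checkerboard detail nu^3 survives, so each coarse
    support cell (k1, k2) with mass m places m/2 on the two cells of
    the cheaper diagonal of the corresponding 2x2 fine block."""
    mu_fine = {}
    for (k1, k2), m in mu_coarse.items():
        i, k = 2 * k1, 2 * k2
        diag = cost_fine[i][k] + cost_fine[i + 1][k + 1]
        anti = cost_fine[i][k + 1] + cost_fine[i + 1][k]
        cells = [(i, k), (i + 1, k + 1)] if diag <= anti else [(i, k + 1), (i + 1, k)]
        for cell in cells:
            mu_fine[cell] = m / 2
    return mu_fine
```

Note that the result automatically keeps uniform marginals at the fine level: every fine row and column in the block receives exactly half the coarse mass.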

4. Methodological Proposal

In this section, we show through examples a process that builds solutions to the MK problem with a reduced number of operations. First, we consider the MK problem with cost function c : [ 0 , 1 ] × [ 0 , 1 ] → R defined by
c ( x , y ) = 4 x 2 y − x y 2
and homogeneous restrictions η 1 , η 2 over [ 0 , 1 ] . So that the algorithm can be graphically appreciated, we take a small level of discretization, namely j = 6 . Thus, in the Haar MRA over R 2 at level j = 6 , the cost function has the form shown in Figure 5, which can be stored in a vector of size 2 2 j = 2 2 ( 6 ) = 2 12 .
Now, we apply the filtering process to the cost function at level j = 6 , which results in four functions
c 6 ( x , y ) = c 5 ( x , y ) + Ψ ( 1 ) ( x , y ) + Ψ ( 2 ) ( x , y ) + Ψ ( 3 ) ( x , y ) ;
see Figure 6, Figure 7, Figure 8 and Figure 9.
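The filtering step above can be sketched as one level of the 2D Haar filter bank. The averaging normalization, and the orientation assigned to each detail component, are our assumptions and may differ from the paper's MRA by a constant factor:

```python
def haar_step_2d(cj):
    """One filtering level: split c_j into the low-pass approximation
    c_{j-1} and three high-pass detail components Psi(1), Psi(2), Psi(3),
    computed blockwise on each 2x2 cell of the grid."""
    m = len(cj) // 2
    low  = [[0.0] * m for _ in range(m)]
    psi1 = [[0.0] * m for _ in range(m)]
    psi2 = [[0.0] * m for _ in range(m)]
    psi3 = [[0.0] * m for _ in range(m)]
    for k1 in range(m):
        for k2 in range(m):
            a, b = cj[2 * k1][2 * k2],     cj[2 * k1][2 * k2 + 1]
            c, d = cj[2 * k1 + 1][2 * k2], cj[2 * k1 + 1][2 * k2 + 1]
            low[k1][k2]  = (a + b + c + d) / 4   # average: c_{j-1}
            psi1[k1][k2] = (a - b + c - d) / 4   # detail along one axis
            psi2[k1][k2] = (a + b - c - d) / 4   # detail along the other axis
            psi3[k1][k2] = (a - b - c + d) / 4   # diagonal detail
    return low, psi1, psi2, psi3

cj = [[float(4 * r + s) for s in range(4)] for r in range(4)]
low, p1, p2, p3 = haar_step_2d(cj)
# On each 2x2 block, low + psi1 + psi2 + psi3 recovers the block's top-left
# sample, mirroring the decomposition c_6 = c_5 + Psi(1) + Psi(2) + Psi(3).
```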
Figure 5. Step 1. Discretization of the cost function to the level j, which is denoted by c j . In particular, the cost function is c ( x , y ) = 4 x 2 y − x y 2 for level j = 6 .
Step 2. Filter the original discrete function with the high-pass filter, which yields three discrete functions denoted by Ψ ( 1 ) , Ψ ( 2 ) and Ψ ( 3 ) ; these correspond to Figure 7, Figure 8 and Figure 9, respectively, and each describes local changes in the original discrete function. The original function is also low-pass filtered to produce the approximate discrete function c 5 , shown in Figure 6.
Figure 6. c 5 ( x , y ) .
Figure 7. Ψ ( 1 ) ( x , y ) .
Figure 8. Ψ ( 2 ) ( x , y ) .
Figure 9. Ψ ( 3 ) ( x , y ) .
We then solve the MK problem at level j − 1 = 5 . That is, we find a measure μ j − 1 * = μ 5 * that is an optimal solution of the MK problem with cost function c 5 . These data can be stored in a vector of size 2 2 j − 2 = 2 10 ; see Figure 10. For each entry k, the map that places this vector on a square grid is defined by k ↦ ( k 1 + 1 , k 2 + 1 ) , where 0 ≤ k 1 , k 2 < 2 j − 1 and k = k 1 · 2 j − 1 + k 2 .
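At toy sizes, this solve step can be sketched without an LP solver: for uniform marginals, the discretized MK problem has an optimal simple solution supported on a permutation (a vertex of the transportation polytope), so a brute-force search over permutations suffices for illustration. The function names and the small grid size are our assumptions; the paper solves the corresponding linear program (40) instead.

```python
from itertools import permutations

def solve_discrete_mk(C):
    """Brute-force version of the coarse solve: among plans supported on
    a permutation (each point carrying mass 1/n), return the one that
    minimizes the discrete transport cost <C, mu>."""
    n = len(C)
    best_perm, best_val = None, float("inf")
    for p in permutations(range(n)):
        val = sum(C[i][p[i]] for i in range(n)) / n
        if val < best_val:
            best_perm, best_val = p, val
    return best_perm, best_val

# Coarse example with c(x, y) = x*y: the cost is supermodular, so the
# optimal plan pairs small x with large y (the anti-diagonal).
n = 6
mids = [(k + 0.5) / n for k in range(n)]
C = [[x * y for y in mids] for x in mids]
perm, value = solve_discrete_mk(C)
print(perm)   # (5, 4, 3, 2, 1, 0)
```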
Since the measure μ j − 1 * = μ 5 * is an optimal simple solution for the MK problem, we can represent its support in a simple way, as we show below:
support ( μ j − 1 * ) = ⋃ k 1 , k 2 S k 1 , k 2 ,
where S k 1 , k 2 = S ( j , k 1 , k 2 ) is the square in (20). Next, we split each block S k 1 , k 2 ⊂ support ( μ j − 1 * ) into four parts as in (24); see Figure 11.
From the technical point of view, in the discretization at level j − 1 = 5 , we have a grid of 2 j − 1 × 2 j − 1 = 32 × 32 squares that we call S k 1 , k 2 and identify with the points ( k 1 , k 2 ) . Thus, we refine to a grid of 64 × 64 , splitting each square into four, which in the new grid are determined by the points
( 2 k 1 − 1 , 2 k 2 ) , ( 2 k 1 − 1 , 2 k 2 − 1 ) , ( 2 k 1 , 2 k 2 − 1 ) and ( 2 k 1 , 2 k 2 ) ;
see Figure 12.
As we proved in Lemma 1, any feasible solution μ ^ j = μ ^ 6 that refines μ 5 * has its support contained in support ( μ 5 * ) . Therefore, we only need to work within the region delimited by the support of μ 5 * . By Lemma 2, in order to construct the solution μ ^ 6 , we only need to determine the values ν k 1 , k 2 3 corresponding to the coefficients of the wavelet part ν 3 ; moreover, by (77) and (78), those values are well determined and satisfy that, when added to the scaling part μ k 1 , k 2 * , the result is a scalar multiple of a characteristic function. For example, if the square S k 1 , k 2 has scaling-part coefficient μ k 1 , k 2 * = c , then we choose ν k 1 , k 2 3 = − c . Hence, by (21) and (25), we have that
μ ^ k 1 , k 2 6 ( x , y ) = μ k 1 , k 2 * ( x , y ) + ν k 1 , k 2 3 ( x , y ) = c χ S ( x , y ) − c [ χ C 1 ∪ C 4 ( x , y ) − χ C 2 ∪ C 3 ( x , y ) ] = 2 c χ C 2 ∪ C 3 ( x , y ) .
Thus, from an operational point of view, we only need to choose between two options for the support of each subdivision of the square S k 1 , k 2 , as we illustrate in Figure 13 and Figure 14.
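For a single weighted square, this two-option choice can be sketched as follows. The helper name and the convention that each diagonal cell receives half of the square's mass are our illustrative assumptions about criteria (82) and (83):

```python
def refine_square(weight, cost_block):
    """Refine one weighted square S_{k1,k2} from level j-1 to level j.
    The refined mass must sit on one of the two diagonals of the 2x2
    subdivision (Figures 13 and 14); the wavelet coefficient's sign is
    chosen so that the cheaper diagonal receives the mass.  cost_block
    is the 2x2 block [[a, b], [c, d]] of the level-j cost on the square."""
    (a, b), (c, d) = cost_block
    if a + d <= b + c:                             # main diagonal is cheaper
        return [[weight / 2, 0.0], [0.0, weight / 2]]
    return [[0.0, weight / 2], [weight / 2, 0.0]]  # anti-diagonal is cheaper

block = refine_square(0.25, [[1.0, 3.0], [2.0, 0.5]])
print(block)   # [[0.125, 0.0], [0.0, 0.125]] since 1.0 + 0.5 < 3.0 + 2.0
```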
This coincides with our geometric intuition. Hence, the resulting feasible solution μ ^ 6 has its support contained in support ( μ 5 * ) , and within each square S k 1 , k 2 its weight is placed on a diagonal of that block; see Figure 15. Finally, we can improve μ ^ 6 by observing how the filtering process acts. To do this, we apply the proximity criteria (41) and split the support of μ ^ 6 into points of Type I and Type II. In Figure 16, we identify the points of Type I and II of the solution μ 5 * , whereas in Figure 15 we do the same for μ ^ 6 .
Intuitively, the division of the support into points of Type I and Type II classifies the points into those that locally follow the graph of a function, and consequently come from the discretization of a continuous function (points of Type I), and those that come from the discretization of a discontinuous function (points of Type II). The points of Type I are located in such a way that they already generate a desirable solution, so it is not convenient to move them, whereas the points of Type II are free to be changed, since moving them does not destroy a continuous structure in the solution. As we mentioned in the previous section, each permutation of rows or columns of one weighted element S k 1 , k 2 with another constructs a feasible solution; see (44). Thus, as a heuristic technique to improve the solution, we check the values ⟨ c 6 , μ 6 σ ⟩ associated with each solution μ 6 σ obtained by permuting rows or columns of points of Type II of the solution μ ^ 6 . We denote by μ 6 σ * the solution whose permutation gives the best performance. See Figure 17.
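This permutation heuristic can be sketched as a greedy 2-swap search. The movable flags stand in for the Type II classification, which we take as given; the function name and loop structure are an illustrative sketch, not the paper's exact implementation:

```python
def improve_by_swaps(support, cost, movable):
    """Greedy heuristic: for each pair of movable (Type II) support
    points (r1, c1), (r2, c2), try the column swap to (r1, c2), (r2, c1)
    - a permutation as in (44) - and keep it whenever it lowers the
    total transport cost; repeat until no swap improves."""
    pts = list(support)
    improved = True
    while improved:
        improved = False
        for i in range(len(pts)):
            for k in range(i + 1, len(pts)):
                if not (movable[i] and movable[k]):
                    continue
                (r1, c1), (r2, c2) = pts[i], pts[k]
                if cost(r1, c2) + cost(r2, c1) < cost(r1, c1) + cost(r2, c2):
                    pts[i], pts[k] = (r1, c2), (r2, c1)
                    improved = True
    return pts

pts = improve_by_swaps([(0, 1), (1, 0)], lambda r, c: (r - c) ** 2, [True, True])
print(pts)   # [(0, 0), (1, 1)]: the swap lowers the cost from 2 to 0
```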
Figure 15. Step 5. Using the solution μ 5 * which is given by Figure 10, the information of the high-pass components (Figure 7, Figure 8 and Figure 9) and Lemma 1, we obtain a feasible solution for Level 6, which is denoted by μ ^ 6 .
Figure 16. Step 4. We classify the points of the support of the solution μ 5 * by proximity criteria as points of Type I or Type II (the measure μ 5 * corresponds to Figure 10).
Finally, we present Table 1, which compares the solutions of the MK problem: MK 5 is the value associated with the optimal solution μ 5 * at discretization level j − 1 = 5 ; MK 6 is the value associated with an optimal solution μ 6 * at discretization level j = 6 ; and MK 6 σ * is the value associated with the solution μ 6 σ * obtained by the heuristic method described in the previous paragraph.
Figure 17. Step 6. The classification of the points of μ 5 * induces a classification of the points of μ ^ 6 by containment of supports. The points of Type I of the solution μ ^ 6 are left fixed. For the points of Type II, we apply the permutation over pairs of points that improves the solution and repeat the process with the rest of the points.

5. Other Examples of This Methodology

We conclude this work with a series of examples in which we apply the proposed methodology. Each example follows the six-step algorithm introduced in the previous section and corresponds to a classical example from the literature.

5.1. Example with Cost Function c ( x , y ) = x 2 y − x y 2

Let MK be the MK problem with cost function c ( x , y ) = x 2 y − x y 2 and uniform restrictions η 1 and η 2 over the unit interval [ 0 , 1 ] . For clarity of exposition, we take discretization level j = 6 . Next, we present the reduced algorithm.
Step 1.
Discretize the cost function at level j—that is, over a grid of 2 j × 2 j . Figure 18.
Step 2.
Apply the filtering process to the cost function, obtaining the filtered cost function c 5 at level j − 1 , which is plotted on a grid of 2 j − 1 × 2 j − 1 . Figure 19.
Step 3.
Find an optimal simple solution μ j − 1 * for the discretized M K j − 1 problem. That is, solve for the cost function c 5 and obtain an optimal simple solution μ 5 * . Figure 20.
Step 4.
Apply the proximity criteria to the support of μ j − 1 * . Figure 21.
Step 5.
Refine the optimal simple solution μ j − 1 * to a feasible solution μ ^ j , dividing each weighted square S k 1 , k 2 at level j − 1 into four squares at level j (see (82)) and placing mass according to the criteria (83). Figure 22.
Step 6.
Permute the rows and columns of the points of Type II in support ( μ ^ j ) using (44) to construct feasible solutions μ j σ and choose the one with the best performance. Figure 23.
Table 2 compares the proposed methodology with the two adjacent levels of resolution.
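The six steps above can also be sketched end to end at a toy level (Step 4's classification and Step 6's swap search are omitted for brevity). As in the sketches of the previous section, brute-force search, midpoint sampling and the averaging low-pass normalization are our assumptions, not the paper's implementation:

```python
from itertools import permutations

def grid(c, j):                             # Step 1: discretize at level j
    n = 2 ** j
    m = [(k + 0.5) / n for k in range(n)]
    return [[c(x, y) for y in m] for x in m]

def lowpass(cj):                            # Step 2: Haar low-pass to level j-1
    m = len(cj) // 2
    return [[(cj[2*i][2*k] + cj[2*i][2*k+1]
              + cj[2*i+1][2*k] + cj[2*i+1][2*k+1]) / 4
             for k in range(m)] for i in range(m)]

def solve(C):                               # Step 3: optimal simple solution (toy search)
    n = len(C)
    return min(permutations(range(n)),
               key=lambda p: sum(C[i][p[i]] for i in range(n)))

def refine(perm, Cfine):                    # Step 5: split squares, keep cheaper diagonal
    pts = []
    for i, k in enumerate(perm):
        a, b = Cfine[2*i][2*k],   Cfine[2*i][2*k+1]
        c, d = Cfine[2*i+1][2*k], Cfine[2*i+1][2*k+1]
        if a + d <= b + c:
            pts += [(2*i, 2*k), (2*i+1, 2*k+1)]
        else:
            pts += [(2*i, 2*k+1), (2*i+1, 2*k)]
    return pts

j = 3
cost = lambda x, y: x * x * y - x * y * y   # the cost of this example
Cj = grid(cost, j)
perm = solve(lowpass(Cj))                   # level j-1 solution
pts = refine(perm, Cj)                      # feasible level-j refinement
value = sum(Cj[r][s] for r, s in pts) / len(pts)
```

Feasibility of the refinement can be checked directly: each fine row and each fine column carries exactly one support point, as a simple (permutation-type) plan requires.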

5.2. Example with Cost Function c ( x , y ) = x y

Let MK be the MK problem with cost function c ( x , y ) = x y and uniform restrictions η 1 and η 2 over the unit interval [ 0 , 1 ] . For clarity of exposition, we take discretization level j = 6 . Next, we present the reduced algorithm.
Step 1.
Discretize the cost function at level j—that is, over a grid of 2 j × 2 j . Figure 24.
Step 2.
Apply the filtering process to the cost function, obtaining the filtered cost function c 5 at level j − 1 , which is plotted on a grid of 2 j − 1 × 2 j − 1 . Figure 25.
Step 3.
Find an optimal simple solution μ j − 1 * for the discretized M K j − 1 problem. That is, solve for the cost function c 5 and obtain an optimal simple solution μ 5 * . Figure 26.
Step 4.
Apply the proximity criteria to the support of μ j − 1 * . Figure 27.
Step 5.
Refine the optimal simple solution μ j − 1 * to a feasible solution μ ^ j , dividing each weighted square S k 1 , k 2 at level j − 1 into four squares at level j (see (82)) and placing mass according to the criteria (83). Figure 28.
Step 6.
Permute the rows and columns of the points of Type II in support ( μ ^ j ) using (44) to construct feasible solutions μ j σ and choose the one with the best performance. Figure 29.
Table 3 compares the proposed methodology with the two adjacent levels of resolution.

5.3. Example with Cost Function c ( x , y ) = ( 2 y − x − 1 ) 2 ( 2 y − x ) 2

We take the MK problem with cost function c ( x , y ) = ( 2 y − x − 1 ) 2 ( 2 y − x ) 2 and homogeneous restrictions over the unit interval. Again, we consider discretization level j = 6 .
Step 1.
Discretize the cost function at level j. Figure 30.
Step 2.
Apply the filtering process at level j − 1 to the cost function. Figure 31.
Step 3.
Find an optimal simple solution μ j − 1 * for the MK j − 1 problem. Figure 32.
Step 4.
Apply the proximity criteria to the support of μ j − 1 * . Figure 33.
Step 5.
Refine the solution μ j − 1 * to a feasible solution μ ^ 6 , applying the criteria (82) and (83). Figure 34.
Step 6.
Permute the points of Type II of μ ^ j using (44) to construct feasible solutions μ j σ and choose the one with the best performance. Figure 35.
Table 4 summarizes the results obtained.

6. Conclusions and Future Work

Note that with the methodology of [3,4,9], the authors obtain a solution of M K j by solving a transport problem with 4 j variables; we call this methodology the exhaustive method. In our methodology, Step 3 requires solving a transport problem with only 4 j − 1 variables, while the remaining steps are classification, ordering and filtering procedures, applied to 2 j data points for classification and ordering and to 4 j data points for filtering. Hence, it is clear that our method spends far fewer operations on solving transport problems.
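The variable counts behind this comparison are easy to tabulate; the loop below simply evaluates 4 j against 4 j − 1 for the levels used in the experiments:

```python
# Exhaustive method: transport problem in 4^j variables at level j.
# Proposed method: transport problem in 4^(j-1) variables (Step 3), plus
# linear-size filtering on 4^j data and classification/ordering on 2^j data.
for j in (6, 7, 8):
    print(j, 4 ** j, 4 ** (j - 1))
# j=6: 4096 vs 1024; j=7: 16384 vs 4096; j=8: 65536 vs 16384
```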
In summary, we have the following tables comparing the results of solving the examples most often used in the literature with our methodology versus the exhaustive method (using all variables).
Cost Function | MK 6 | MK 6 σ | Error
x y | 0.166687 | 0.166687 | —
x 2 y − x y 2 | 0.035141 | 0.035141 | 1.52588 × 10 − 5
4 x 2 y − x y 2 | 0.247726 | 0.247738 | 5.31762 × 10 − 5
( 2 y − x − 1 ) 2 ( 2 y − x ) 2 | 0.0000600947 | 0.0000600947 | —
Cost Function | MK 7 | MK 7 σ | Error
x y | 0.166672 | 0.166672 | —
x 2 y − x y 2 | 0.0351524 | 0.0351524 | 3.8147 × 10 − 6
4 x 2 y − x y 2 | 0.247695 | 0.247698 | 1.24066 × 10 − 5
( 2 y − x − 1 ) 2 ( 2 y − x ) 2 | 0.0000600852 | 0.0000600852 | —
Cost Function | MK 8 | MK 8 σ | Error
x y | 0.166668 | 0.166668 | —
x 2 y − x y 2 | 0.0351553 | 0.0351553 | 9.53674 × 10 − 7
4 x 2 y − x y 2 | 0.247688 | 0.247688 | 3.21631 × 10 − 6
( 2 y − x − 1 ) 2 ( 2 y − x ) 2 | — | — | —
Note that our method always improves the solution at level j − 1 and, for some examples, gives an exact solution. We used Mathematica© and basic computer equipment to program this methodology, and the results could likely be improved with software focused on numerical computation and better hardware.

It is also important to mention that the methodology presented in this work has some weaknesses. In our computational experiments, we noticed that if we did not start with a sufficient amount of information, the methodology tended to give very distant results. In other words, if the initial level of discretization is not fine enough, then, because the algorithm lowers the resolution level when executed, the loss of information produces poor performance. However, when starting with an adequate level of discretization, it can be observed experimentally that the distributions of the solutions of the discretized problems, as well as the respective optimal values, have stable behavior with a clear trend.

The question that arises naturally is: in practice, what parameters determine good or bad behavior of the algorithm? Clearly, if the cost function is fixed and we rule out possible technical problems associated with programming and computing power, the only remaining parameter is the initial refinement level j at which the algorithm works. Reflecting more deeply on why there is a practical threshold beyond which the algorithm behaves stably, the only plausible cause is the amount of information about the cost function captured by the MR analysis. In other words, if the oscillation of the cost function at a certain resolution level is well determined by the MR analysis, then the algorithm will perform well.
The approach presented in this paper is far from exhaustive and, on the contrary, opens the possibility of a number of new proposals for approximating solutions to the MK problem. This is because it was proven in [9] that the discretization of the MK problem can be performed with any MR analysis over R 2 , so the possibility of implementing other types of discretizations remains open. In principle, as mentioned in the previous paragraph, the most natural expectation is better performance when the nature of the cost function, and the types of symmetric geometric structures it induces in space, are studied in order to choose an MR analysis that fits this information and therefore performs more efficiently.

Author Contributions

Conceptualization, A.S.-N. and C.G.-F.; methodology, A.S.-N., R.R.L.-M. and C.G.-F.; software, C.G.-F.; investigation, A.S.-N. and J.R.A.-P.; writing—original draft preparation, A.S.-N., J.R.A.-P. and C.G.-F.; writing—review and editing, A.S.-N. and R.R.L.-M.; project administration A.S.-N. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Hernández-Lerma, O.; Lasserre, J.B. Approximation schemes for infinite linear programs. SIAM J. Optim. 1998, 8, 973–988.
2. González-Hernández, J.; Gabriel-Argüelles, J.R.; Hernández-Lerma, O. On solutions to the mass transfer problem. SIAM J. Optim. 2006, 17, 485–499.
3. Gabriel-Argüelles, J.R.; González-Hernández, J.; López-Martínez, R.R. Numerical approximations to the mass transfer problem on compact spaces. IMA J. Numer. Anal. 2010, 30, 1121–1136.
4. Avendaño-Garrido, M.L.; Gabriel-Argüelles, J.R.; Quintana-Torres, L.; Mezura-Montes, E. A metaheuristic for a numerical approximation to the mass transfer problem. Int. J. Appl. Math. Comput. Sci. 2016, 26, 757–766.
5. Avendaño-Garrido, M.L.; Gabriel-Argüelles, J.R.; Quintana-Torres, L.; Mezura-Montes, E. An efficient numerical approximation for the Monge–Kantorovich mass transfer problem. In Machine Learning, Optimization, and Big Data. MOD 2015; Lecture Notes in Computer Science; Pardalos, P., Pavone, M., Farinella, G., Cutello, V., Eds.; Springer: Cham, Switzerland, 2016; Volume 9432, pp. 233–239.
6. Gröchenig, K.; Madych, W.R. Multiresolution analysis, Haar bases, and self-similar tilings of Rn. IEEE Trans. Inf. Theory 1992, 38, 556–568.
7. Liu, Y.; Xu, K.D.; Li, J.; Guo, Y.J.; Zhang, A.; Chen, Q. Millimeter-wave E-plane waveguide bandpass filters based on spoof surface plasmon polaritons. IEEE Trans. Microw. Theory Tech. 2022, 70, 4399–4409.
8. Liu, K.; Yang, Z.; Wei, W.; Gao, B.; Xin, D.; Sun, C.; Gao, G.; Wu, G. Novel detection approach for thermal defects: Study on its feasibility and application to vehicle cables. High Volt. 2023, 8, 358–367.
9. Sánchez-Nungaray, A.; González-Flores, C.; López-Martínez, R.R. Multiresolution analysis applied to the Monge–Kantorovich problem. Abstr. Appl. Anal. 2018, 2018, 1764175.
10. Haker, S.; Tannenbaum, A.; Kikinis, R. Mass preserving mappings and surface registration. In MICCAI 2001; Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2001; Volume 2208, pp. 120–127.
11. Zhu, L.; Yang, Y.; Haker, S.; Tannenbaum, A. Optimal mass transport for registration and warping. Int. J. Comput. Vis. 2004, 60, 225–240.
12. Haber, E.; Rehman, T.; Tannenbaum, A. An efficient numerical method for the solution of the L2 optimal mass transfer problem. SIAM J. Sci. Comput. 2010, 32, 197–211.
13. Xie, X.; Wang, T.; Zhang, W. Existence of solutions for the (p, q)-Laplacian equation with nonlocal Choquard reaction. Appl. Math. Lett. 2023, 135, 108418.
14. Peng, Z.; Hu, J.; Shi, K.; Luo, R.; Huang, R.; Ghosh, B.K.; Huang, J. A novel optimal bipartite consensus control scheme for unknown multi-agent systems via model-free reinforcement learning. Appl. Math. Comput. 2020, 369, 124821.
15. Walnut, D.F. An Introduction to Wavelet Analysis; Birkhäuser: Boston, MA, USA, 2004; pp. 115–138.
16. Guo, K.; Labate, D.; Lim, W.Q.; Weiss, G.; Wilson, E. Wavelets with composite dilations and their MRA properties. Appl. Comput. Harmon. Anal. 2006, 20, 202–236.
17. Krishtal, I.A.; Robinson, B.D.; Weiss, G.L.; Wilson, E.N. Some simple Haar-type wavelets in higher dimensions. J. Geom. Anal. 2007, 17, 87–96.
18. Bazaraa, M.S.; Jarvis, J.J.; Sherali, H.D. Linear Programming and Network Flows; John Wiley & Sons: Hoboken, NJ, USA, 2010; pp. 513–528.
19. Billingsley, P. Convergence of Probability Measures; John Wiley & Sons: New York, NY, USA, 1999; pp. 27–29.
Figure 1. Functions Φ j , k 1 , k 2 , Ψ j , k 1 , k 2 ( 1 ) , Ψ j , k 1 , k 2 ( 2 ) and Ψ j , k 1 , k 2 ( 3 ) .
Figure 10. Step 3. We obtain a solution μ 5 * for M K 5 associated with the cost function c 5 given in Figure 6.
Figure 11. Division of the components of support ( μ j 1 * ) into four parts.
Figure 12. Refinement of grid from level j 1 to j of discretization.
Figure 13. Supports for refinement of the element S k 1 , k 2 corresponding to the level j: Option I.
Figure 14. Supports for refinement of the element S k 1 , k 2 corresponding to the level j: Option II.
Figure 18. Discretization of the cost function c ( x , y ) = x 2 y − x y 2 at level j = 6 .
Figure 19. Filtering of the cost function c at level j 1 = 5 .
Figure 20. MK solution for the filtered function c at level j 1 = 5 .
Figure 21. Classification of points of support ( μ 5 * ) into Type I and Type II .
Figure 22. Solution μ ^ 6 from refinement of μ 5 * .
Figure 23. Final result μ 6 σ * .
Figure 24. Discretization of the cost function c ( x , y ) = x y at level j = 6 .
Figure 25. Apply the filter to the cost function c at level j 1 = 5 .
Figure 26. Solution of the MK for the function c at level j 1 = 5 .
Figure 27. In this example there are only Type I points .
Figure 28. Solution μ ^ 6 from refinement of μ 5 * .
Figure 29. Final result μ 6 σ * .
Figure 30. Discretization of cost function c ( x , y ) = ( 2 y − x − 1 ) 2 ( 2 y − x ) 2 for level j = 6 .
Figure 31. Filtered cost function for j = 5 .
Figure 32. Solution of the M K 5 problem.
Figure 33. In this example, there are only Type II points .
Figure 34. Feasible solution μ ^ 6 from refinement of μ 5 * .
Figure 35. Final result μ 6 σ * .
Table 1. Comparison of the values corresponding to MK 5 , MK 6 σ * and MK 6 .
MK 5 MK 6 σ * MK 6
0.24785 0.247738 0.247726
Table 2. Comparison of the values corresponding to MK 5 , MK 6 σ * and MK 6 .
MK 5 MK 6 σ * MK 6
0.0350952 0.035141 0.035141
Table 3. Comparison of the values corresponding to MK 5 , MK 6 σ * and MK 6 .
MK 5 MK 6 σ * MK 6
0.166748 0.166687 0.166687
Table 4. Comparison of values MK 5 , MK 6 σ * and MK 6 .
MK 5 MK 6 σ * MK 6
0.000236571 0.0000600852 0.0000597889