3. Algorithms
In this section we first recall Algorithm A for (1) from [28] and then enhance it using different well-known greedy and local search approaches. Let A be a set and f : A → ℝ a function. Denote by min_f(A) the set of minimal elements of A under the function f. Let M be an arbitrary assignment between the i-th and j-th partition and let M′ be a (k − 1)-assignment for the (k − 1)-partite quotient graph G/M. We denote by M ⊕ M′ the unique k-assignment for G reconstructed from M and M′, i.e., the k-assignment obtained from M′ by expanding every contracted vertex of G/M back into the pair of vertices of G matched by M. For an example see Figure 1. In the case when G is a bipartite graph, the quotient G/M contains only one partition and thus does not allow any assignment; for completeness we therefore define the assignment of a one-partition graph to be ∅. Recall that in the case k = 2 an optimal 2-assignment can be found using the Hungarian algorithm [11]. The result of this algorithm on a bipartite graph G will be denoted by Hungarian(G).
Algorithm 1, A. The following algorithm for finding a near-optimal solution of the minimal k-assignment problem on a k-partite graph G, which we denote by A, was proposed in [28].
Algorithm 1 (A) [28]. |
- 1: function A(G)
- 2: if G has only one partition then
- 3: return ∅
- 4: else
- 5: M ← Hungarian(G_{1,2})
- 6: return M ⊕ A(G/M)
|
In short, the algorithm finds an optimal 2-assignment M for the subgraph G_{1,2} induced by the first two partitions, takes the quotient by this assignment, and recursively repeats this process. The final result is a complete k-assignment reconstructed from the previously computed 2-assignments.
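The recursion can be sketched in Python (the paper's implementation language). The toy instance W, all helper names, and the brute-force stand-in for the Hungarian subroutine are our own illustration, not the paper's code; a real implementation would call a proper solver such as scipy.optimize.linear_sum_assignment. Vertices are (partition, index) pairs, so a vertex of a quotient graph is simply a tuple of merged original vertices.

```python
import itertools

# Toy complete 3-partite instance with partitions of size 2 (hypothetical
# data, not Example 1 of the paper): W[(p, q)][x][y] is the weight of the
# edge between vertex x of partition p and vertex y of partition q (p < q).
W = {(0, 1): [[4, 1], [2, 6]],
     (0, 2): [[3, 5], [1, 2]],
     (1, 2): [[2, 4], [3, 1]]}

def edge(w, p, x, q, y):
    # Weight of the edge between vertex x of partition p and y of partition q.
    return w[(p, q)][x][y] if p < q else w[(q, p)][y][x]

def hungarian(C):
    # Brute-force optimal 2-assignment for cost matrix C, a stand-in for
    # the Hungarian algorithm that is fine for tiny partitions.
    n = len(C)
    return min(itertools.permutations(range(n)),
               key=lambda p: sum(C[u][p[u]] for u in range(n)))

def pair_matrix(w, parts, a, b):
    # Cost matrix between partitions a and b of the (quotient) graph; a
    # merged vertex is a tuple of original (partition, index) pairs, and its
    # cost against another merged vertex is the sum of all crossing edges.
    return [[sum(edge(w, p, x, q, y) for (p, x) in su for (q, y) in sv)
             for sv in parts[b]] for su in parts[a]]

def weight(w, cliques):
    # Total weight of a k-assignment given as a list of cliques.
    return sum(edge(w, p, x, q, y)
               for cl in cliques
               for (p, x), (q, y) in itertools.combinations(sorted(cl), 2))

def alg_A(w, parts):
    # Algorithm A: merge the first two partitions along an optimal
    # 2-assignment and recurse on the quotient graph.
    if len(parts) == 1:
        return [tuple(sorted(su)) for su in parts[0]]
    p = hungarian(pair_matrix(w, parts, 0, 1))
    merged = [parts[0][u] + parts[1][p[u]] for u in range(len(p))]
    return alg_A(w, [merged] + list(parts[2:]))

parts = [[((t, v),) for v in range(2)] for t in range(3)]
S = alg_A(W, parts)
print(S, weight(W, S))
```

On this particular toy instance the recursion happens to return an optimal 3-assignment (weight 12); in general A is only a heuristic.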
Example 1. Let G be a complete 3-partite graph with partitions V1, V2, V3 and a given (weighted) adjacency matrix whose entries define the weight function w.
In Algorithm A we first compute the optimal assignment M between partitions V1 and V2 (Figure 1a), then we compute the quotient graph G/M, presented in Figure 1b, and finally we compute the optimal assignment for this graph (Figure 1c). At the end, we reconstruct the 3-assignment and obtain a solution of weight 44 (Figure 1d).
Algorithm 2, B. Algorithm A is greedy in the sense that it takes (lexicographically) the first pair of partitions and merges them according to the best assignment between them. If the order of the partitions is changed, A may provide a different result. The idea of B is to consider all pairs of partitions for the possible first merge and take the best result. Note that B is also greedy, as it always takes the minimal assignment between the two partitions that are merged. (In a sense, one may say that B is somehow more greedy than A because it looks a bit farther for the seemingly best option in sight.)
Algorithm 2 (B, improvement of A). |
- 1: function B(G)
- 2: if G has only one partition then
- 3: return ∅
- 4: else
- 5: M_{i,j} ← Hungarian(G_{i,j}) for every pair of partitions i < j
- 6: return a minimal-weight element of { M_{i,j} ⊕ B(G/M_{i,j}) : i < j }
|
Algorithm B searches through all possible pairs of partitions, recursively runs B on the quotient graph G/M_{i,j}, where M_{i,j} is the optimal 2-assignment on the subgraph G_{i,j} induced by the i-th and j-th partitions, and among these pairs chooses the one leading to the best k-assignment of G. If several choices attain the minimal weight, the algorithm picks one of them at random. Clearly, Algorithm B returns a k-assignment that is determined by the 2-assignments chosen in the recursive calls of B.
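The pair-scanning recursion can be sketched in Python as follows. The toy instance, the helper names, and the brute-force stand-in for the Hungarian subroutine are our own illustrative assumptions, repeated here so the snippet is self-contained.

```python
import itertools

# Toy complete 3-partite instance (hypothetical data): W[(p, q)][x][y] is
# the weight of the edge between vertex x of partition p and y of q (p < q).
W = {(0, 1): [[4, 1], [2, 6]],
     (0, 2): [[3, 5], [1, 2]],
     (1, 2): [[2, 4], [3, 1]]}

def edge(w, p, x, q, y):
    return w[(p, q)][x][y] if p < q else w[(q, p)][y][x]

def hungarian(C):
    # Brute-force optimal 2-assignment, a stand-in for the Hungarian method.
    n = len(C)
    return min(itertools.permutations(range(n)),
               key=lambda p: sum(C[u][p[u]] for u in range(n)))

def pair_matrix(w, parts, a, b):
    # Cost matrix between (possibly merged) partitions a and b.
    return [[sum(edge(w, p, x, q, y) for (p, x) in su for (q, y) in sv)
             for sv in parts[b]] for su in parts[a]]

def weight(w, cliques):
    return sum(edge(w, p, x, q, y)
               for cl in cliques
               for (p, x), (q, y) in itertools.combinations(sorted(cl), 2))

def alg_B(w, parts):
    # Algorithm B: try every pair of partitions for the next merge and keep
    # the candidate whose completed k-assignment is cheapest.
    if len(parts) == 1:
        return [tuple(sorted(su)) for su in parts[0]]
    best = None
    for i, j in itertools.combinations(range(len(parts)), 2):
        p = hungarian(pair_matrix(w, parts, i, j))
        merged = [parts[i][u] + parts[j][p[u]] for u in range(len(p))]
        rest = [parts[t] for t in range(len(parts)) if t not in (i, j)]
        cand = alg_B(w, [merged] + rest)
        if best is None or weight(w, cand) < weight(w, best):
            best = cand
    return best

parts = [[((t, v),) for v in range(2)] for t in range(3)]
S = alg_B(W, parts)
print(S, weight(W, S))
```

Since B always considers the merge that A would make, its result is never worse than A's on the same instance.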
Example 2. We take the same graph as in Example 1. In contrast to A, Algorithm B finds optimal assignments M_{1,2}, M_{1,3}, and M_{2,3} for the induced subgraphs G_{1,2}, G_{1,3}, and G_{2,3}, see Figure 2a–c, respectively. The algorithm continues its search recursively on the bipartite graphs G/M_{1,2}, G/M_{1,3}, and G/M_{2,3}. As Figure 2b shows, we obtain a k-assignment of weight 40.
Algorithm 3, C. Observe that Algorithm B is much more time consuming than Algorithm A, as it calls the Hungarian algorithm subroutine exponentially many times (once for every pair of partitions at every level of the recursion), as opposed to only k − 1 calls by Algorithm A.
However, note that Algorithm B is greedy because it always takes the minimal 2-assignment. As the k-assignment problem (for k ≥ 3) is intractable, a deterministic greedy algorithm cannot solve the problem to optimality unless P = NP. We therefore consider an iterative improvement of the solutions, taking a nearly optimal solution (which may be the result of B) as the initial solution. The neighbors of a given solution are the results of the following procedure: fix one of its 2-assignments, say M, and run Algorithm B on G/M. After repeating the process for all 2-assignments, we obtain a set of new solutions. If at least one of them improves on the previously best known solution, we continue the improvement process. The process stops when there is no better solution in the set of new solutions.
The third algorithm, denoted as Algorithm C, can be considered a steepest descent algorithm on the set of all k-assignments, where the next solution is chosen as a minimal solution in a suitably defined neighborhood.
Algorithm 3 (C, steepest descent based on B). |
- 1: function C(G)
- 2: S ← B(G)
- 3: repeat
- 4: S′ ← S
- 5: N ← { M ⊕ B(G/M) : M is a 2-assignment of S′ }
- 6: S ← a minimal-weight element of N ∪ {S′}
- 7: until w(S) = w(S′)
- 8: return S
|
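The descent loop can be sketched in Python. Everything below is our own illustrative reading, not the paper's code: the toy data, the helpers, the brute-force stand-in for the Hungarian subroutine, and the concrete neighborhood, which fixes the 2-assignment that the current solution induces between one pair of original partitions and re-optimises the rest with B.

```python
import itertools

# Toy complete 3-partite instance (hypothetical data).
W = {(0, 1): [[4, 1], [2, 6]],
     (0, 2): [[3, 5], [1, 2]],
     (1, 2): [[2, 4], [3, 1]]}

def edge(w, p, x, q, y):
    return w[(p, q)][x][y] if p < q else w[(q, p)][y][x]

def hungarian(C):
    # Brute-force optimal 2-assignment, a stand-in for the Hungarian method.
    n = len(C)
    return min(itertools.permutations(range(n)),
               key=lambda p: sum(C[u][p[u]] for u in range(n)))

def pair_matrix(w, parts, a, b):
    return [[sum(edge(w, p, x, q, y) for (p, x) in su for (q, y) in sv)
             for sv in parts[b]] for su in parts[a]]

def weight(w, cliques):
    return sum(edge(w, p, x, q, y)
               for cl in cliques
               for (p, x), (q, y) in itertools.combinations(sorted(cl), 2))

def alg_B(w, parts):
    if len(parts) == 1:
        return [tuple(sorted(su)) for su in parts[0]]
    best = None
    for i, j in itertools.combinations(range(len(parts)), 2):
        p = hungarian(pair_matrix(w, parts, i, j))
        merged = [parts[i][u] + parts[j][p[u]] for u in range(len(p))]
        rest = [parts[t] for t in range(len(parts)) if t not in (i, j)]
        cand = alg_B(w, [merged] + rest)
        if best is None or weight(w, cand) < weight(w, best):
            best = cand
    return best

def alg_C(w, parts):
    # Algorithm C: steepest descent.  A neighbor keeps the 2-assignment
    # induced by the current solution between one pair of partitions and
    # re-optimises everything else with B; move to the best neighbor while
    # it strictly improves the weight.
    S = alg_B(w, parts)
    k = len(parts)
    while True:
        neigh = []
        for i, j in itertools.combinations(range(k), 2):
            fixed = [tuple(v for v in cl if v[0] in (i, j)) for cl in S]
            rest = [parts[t] for t in range(k) if t not in (i, j)]
            neigh.append(alg_B(w, [fixed] + rest))
        best = min(neigh, key=lambda c: weight(w, c))
        if weight(w, best) >= weight(w, S):
            return S
        S = best

parts = [[((t, v),) for v in range(2)] for t in range(3)]
S = alg_C(W, parts)
print(S, weight(W, S))
```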
Example 3. Taking the same instance as in Examples 1 and 2, Algorithm C starts by finding a 3-assignment using B and continues by recursively searching the quotient graphs obtained by fixing each 2-assignment of this solution (one quotient can be skipped, since the initial 3-assignment was already obtained from it). As Figure 3 shows, the best solution, of weight 37, is obtained from one of these contractions. If we continue with the iteration, we see that the solution has stabilized and no further improvement can be made.
Algorithm 4, D. Algorithms B and C heavily rely on the Hungarian method and are very time-consuming in comparison to A. Therefore we define another, faster greedy algorithm based on the Hungarian method. We denote by D the greedy algorithm that takes the minimal 2-assignment M over all pairs of partitions of the k-partite graph G, and continues in the same way on the (k − 1)-partite graph G/M until only one partition is left.
Algorithm 4 (D, greedy iterative). |
- 1: function D(G)
- 2: if G has only one partition then
- 3: return ∅
- 4: else
- 5: M ← a minimal-weight 2-assignment among { Hungarian(G_{i,j}) : i < j }
- 6: return M ⊕ D(G/M)
|
Note that Algorithm D has the shortest average running time among Algorithms B, C, and D. On the other hand, it is also the most short-sighted of these greedy algorithms, since its choice is based only on the quality of the 2-assignment at hand, Hungarian(G_{i,j}), and not on the quality of the resulting k-assignment.
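A self-contained Python sketch of this cheapest-merge-first recursion follows; as before, the data and the brute-force stand-in for the Hungarian subroutine are our own illustration.

```python
import itertools

# Toy complete 3-partite instance (hypothetical data).
W = {(0, 1): [[4, 1], [2, 6]],
     (0, 2): [[3, 5], [1, 2]],
     (1, 2): [[2, 4], [3, 1]]}

def edge(w, p, x, q, y):
    return w[(p, q)][x][y] if p < q else w[(q, p)][y][x]

def hungarian(C):
    # Brute-force optimal 2-assignment, a stand-in for the Hungarian method.
    n = len(C)
    return min(itertools.permutations(range(n)),
               key=lambda p: sum(C[u][p[u]] for u in range(n)))

def pair_matrix(w, parts, a, b):
    return [[sum(edge(w, p, x, q, y) for (p, x) in su for (q, y) in sv)
             for sv in parts[b]] for su in parts[a]]

def weight(w, cliques):
    return sum(edge(w, p, x, q, y)
               for cl in cliques
               for (p, x), (q, y) in itertools.combinations(sorted(cl), 2))

def alg_D(w, parts):
    # Algorithm D: always contract along the globally cheapest 2-assignment,
    # judged only by its own weight (not by the final k-assignment).
    if len(parts) == 1:
        return [tuple(sorted(su)) for su in parts[0]]
    best = None
    for i, j in itertools.combinations(range(len(parts)), 2):
        C = pair_matrix(w, parts, i, j)
        p = hungarian(C)
        wt = sum(C[u][p[u]] for u in range(len(p)))
        if best is None or wt < best[0]:
            best = (wt, i, j, p)
    _, i, j, p = best
    merged = [parts[i][u] + parts[j][p[u]] for u in range(len(p))]
    rest = [parts[t] for t in range(len(parts)) if t not in (i, j)]
    return alg_D(w, [merged] + rest)

parts = [[((t, v),) for v in range(2)] for t in range(3)]
S = alg_D(W, parts)
print(S, weight(W, S))
```

Unlike B, this recursion calls the Hungarian subroutine only once per pair per level, which is where the speed-up comes from.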
Algorithm 5, E. The algorithms considered above are in principle deterministic: except for breaking ties randomly when there is more than one minimal choice, all the steps are precisely defined. In the final experiment, we are interested in the effect of randomization.
The next algorithm, E, is a randomized version of C. The main idea is to loop through all 2-assignments in random order and accept the first assignment M that yields a better solution, instead of searching for the minimal 2-assignment. Thus Algorithm E is obtained by changing the main loop of C. As Algorithm E accepts the first better solution in the neighborhood of the current solution, it is a kind of iterative improvement algorithm, as opposed to C, which is a steepest descent type local search algorithm because it always looks for the best solution in the neighborhood.
Algorithm 5 (E). |
- 1: function E(G)
- 2: S ← B(G)
- 3: repeat
- 4: S′ ← S
- 5: for M in Shuffle(2-assignments of S′) do
- 6: S″ ← M ⊕ B(G/M)
- 7: if w(S″) < w(S) then
- 8: S ← S″; break
- 9: until w(S) = w(S′)
- 10: return S
|
We denote by E(n) the algorithm that takes the best solution of E out of n trials (restarts).
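The first-improvement loop can be sketched in Python with a seeded random generator for reproducibility. The toy data, helpers, brute-force stand-in for the Hungarian subroutine, and the neighborhood (fixing the 2-assignment induced between one pair of original partitions) are our own illustrative assumptions.

```python
import itertools
import random

# Toy complete 3-partite instance (hypothetical data).
W = {(0, 1): [[4, 1], [2, 6]],
     (0, 2): [[3, 5], [1, 2]],
     (1, 2): [[2, 4], [3, 1]]}

def edge(w, p, x, q, y):
    return w[(p, q)][x][y] if p < q else w[(q, p)][y][x]

def hungarian(C):
    # Brute-force optimal 2-assignment, a stand-in for the Hungarian method.
    n = len(C)
    return min(itertools.permutations(range(n)),
               key=lambda p: sum(C[u][p[u]] for u in range(n)))

def pair_matrix(w, parts, a, b):
    return [[sum(edge(w, p, x, q, y) for (p, x) in su for (q, y) in sv)
             for sv in parts[b]] for su in parts[a]]

def weight(w, cliques):
    return sum(edge(w, p, x, q, y)
               for cl in cliques
               for (p, x), (q, y) in itertools.combinations(sorted(cl), 2))

def alg_B(w, parts):
    if len(parts) == 1:
        return [tuple(sorted(su)) for su in parts[0]]
    best = None
    for i, j in itertools.combinations(range(len(parts)), 2):
        p = hungarian(pair_matrix(w, parts, i, j))
        merged = [parts[i][u] + parts[j][p[u]] for u in range(len(p))]
        rest = [parts[t] for t in range(len(parts)) if t not in (i, j)]
        cand = alg_B(w, [merged] + rest)
        if best is None or weight(w, cand) < weight(w, best):
            best = cand
    return best

def alg_E(w, parts, seed=0):
    # Algorithm E: first-improvement local search.  Scan the induced
    # 2-assignments in random order and accept the first strictly better
    # re-optimisation; repeat until a full pass yields no improvement.
    rng = random.Random(seed)
    S = alg_B(w, parts)
    k = len(parts)
    improved = True
    while improved:
        improved = False
        pairs = list(itertools.combinations(range(k), 2))
        rng.shuffle(pairs)
        for i, j in pairs:
            fixed = [tuple(v for v in cl if v[0] in (i, j)) for cl in S]
            rest = [parts[t] for t in range(k) if t not in (i, j)]
            cand = alg_B(w, [fixed] + rest)
            if weight(w, cand) < weight(w, S):
                S, improved = cand, True
                break
    return S

parts = [[((t, v),) for v in range(2)] for t in range(3)]
S = alg_E(W, parts)
print(S, weight(W, S))
```

The restart variant E(n) would simply run alg_E with n different seeds and keep the cheapest result.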
Algorithm 6, F. Algorithm E stops when there is no better solution, even if there are solutions of the same quality (weight) in the neighbourhood of the current solution. These neighborhood solutions might later lead to an improvement, so we introduce another variant, called F(n), which in such a case randomly chooses one neighbor of equal weight and continues, but stops after at most n steps, since it does not necessarily terminate (e.g., it could cycle between two equally good solutions).
Algorithm 6 (F, multistart local search). |
- 1: function F(G, n)
- 2: S ← B(G)
- 3: S* ← S
- 4: for i ← 1 to n do
- 5: N ← { M ⊕ B(G/M) : M is a 2-assignment of S }
- 6: if S in N then
- 7: remove S from N
- 8: if w(min_w(N)) > w(S) then
- 9: break
- 10: S ← Choose_Random(min_w(N))
- 11: if w(S) < w(S*) then S* ← S
- 12: return S*
|
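The plateau-walking variant can be sketched in Python as follows. As in the other sketches, the data, helpers, brute-force stand-in for the Hungarian subroutine, and the concrete neighborhood are our own illustrative assumptions.

```python
import itertools
import random

# Toy complete 3-partite instance (hypothetical data).
W = {(0, 1): [[4, 1], [2, 6]],
     (0, 2): [[3, 5], [1, 2]],
     (1, 2): [[2, 4], [3, 1]]}

def edge(w, p, x, q, y):
    return w[(p, q)][x][y] if p < q else w[(q, p)][y][x]

def hungarian(C):
    # Brute-force optimal 2-assignment, a stand-in for the Hungarian method.
    n = len(C)
    return min(itertools.permutations(range(n)),
               key=lambda p: sum(C[u][p[u]] for u in range(n)))

def pair_matrix(w, parts, a, b):
    return [[sum(edge(w, p, x, q, y) for (p, x) in su for (q, y) in sv)
             for sv in parts[b]] for su in parts[a]]

def weight(w, cliques):
    return sum(edge(w, p, x, q, y)
               for cl in cliques
               for (p, x), (q, y) in itertools.combinations(sorted(cl), 2))

def alg_B(w, parts):
    if len(parts) == 1:
        return [tuple(sorted(su)) for su in parts[0]]
    best = None
    for i, j in itertools.combinations(range(len(parts)), 2):
        p = hungarian(pair_matrix(w, parts, i, j))
        merged = [parts[i][u] + parts[j][p[u]] for u in range(len(p))]
        rest = [parts[t] for t in range(len(parts)) if t not in (i, j)]
        cand = alg_B(w, [merged] + rest)
        if best is None or weight(w, cand) < weight(w, best):
            best = cand
    return best

def alg_F(w, parts, n, seed=0):
    # Algorithm F: local search that also accepts equal-weight neighbors
    # (chosen at random), capped at n steps; the best solution seen so far
    # is remembered and returned.
    rng = random.Random(seed)
    S = alg_B(w, parts)
    best = S
    k = len(parts)
    for _ in range(n):
        neigh = []
        for i, j in itertools.combinations(range(k), 2):
            fixed = [tuple(v for v in cl if v[0] in (i, j)) for cl in S]
            rest = [parts[t] for t in range(k) if t not in (i, j)]
            cand = alg_B(w, [fixed] + rest)
            if set(cand) != set(S):       # drop the current solution itself
                neigh.append(cand)
        if not neigh:
            break
        wmin = min(weight(w, c) for c in neigh)
        if wmin > weight(w, S):
            break                          # every neighbor is strictly worse
        S = rng.choice([c for c in neigh if weight(w, c) == wmin])
        if weight(w, S) < weight(w, best):
            best = S
    return best

parts = [[((t, v),) for v in range(2)] for t in range(3)]
S = alg_F(W, parts, 5)
print(S, weight(W, S))
```

The step cap n is what guarantees termination, since equal-weight moves alone could cycle between two solutions forever.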
All of the above algorithms can easily be adapted to solve maximization assignment problems, i.e., to solve (1) where the objective is maximized. We denote the maximization variants of the algorithms A, B, C, D, E, and F by A_max, B_max, C_max, D_max, E_max, and F_max, respectively.
Remark 1. Clearly, the algorithms C, D, E, and F always return a feasible assignment because any solution is obtained by a recursive call of B. However, many calls of B, and thus many runs of the Hungarian algorithm, are expensive in terms of computation time. Therefore, it is an interesting question whether the present local search heuristics may be sped up by considering other neighborhoods, for example by applying the idea of variable neighborhood search [29].
5. Summary and Conclusions
We have introduced Algorithms A, B, C, D, E, and F to approximately solve (1). The algorithms are all based on extensive use of the Hungarian algorithm and thus arise as natural improvements of Algorithm A from [28]. Algorithms A, B, C, and D are in principle deterministic, whereas Algorithms E and F incorporate randomization. We implemented the algorithms in Python and evaluated them on three benchmark datasets. Numerical tests show that, in terms of solution quality, the new algorithms outperform A on all of the chosen datasets, in both the minimization and the maximization variant. Summing up, our study shows that repeated use of the classic Hungarian method can provide very tight solutions for (1), in some cases even an optimal solution.
Another important aspect of the algorithms' performance is computation time. For smaller instances, E is relatively fast and on average misses the optimal solution by merely 0.1%; we therefore propose it as our method of choice. Among the deterministic algorithms, our study suggests using Algorithm C. However, we wish to note that for large instances of (1), both in the number of partitions and in the size of each partition, one must be very careful about how often the Hungarian method is actually run, because many repetitions of the Hungarian method substantially increase computation time. The main goal of the reported research was to explore the potential of the Hungarian algorithm for solving the k-assignment problem. We have designed several heuristics based on the Hungarian method that have proven to be competitive. While, on the one hand, some of our algorithms provide very good (near-optimal or even optimal) results in a short time, we also designed two heuristics based on local search [31,32,33]. Local search heuristics improve the quality of solutions over time and may converge to the optimal solution. This type of heuristic is very useful when the quality of solutions is more important than computation time. We believe that further development of multistart local search heuristics based on the Hungarian algorithm may lead to a very competitive heuristic for (1), hopefully with fast convergence to optimal solutions.
In the future, a more comprehensive experimental study of local search based on the Hungarian algorithm may be a very promising avenue of research.