Article

On Effective Fine Functions for Inspection—Corruption Games (Evolutionary Approach)

by Vassili N. Kolokoltsov 1,2,* and Dmitri V. Vetchinnikov 2
1 Faculty of Computational Mathematics and Cybernetics, Moscow State University, 119991 Moscow, Russia
2 Higher School of Economics University, 109028 Moscow, Russia
* Author to whom correspondence should be addressed.
Mathematics 2023, 11(15), 3429; https://doi.org/10.3390/math11153429
Submission received: 15 July 2023 / Revised: 28 July 2023 / Accepted: 2 August 2023 / Published: 7 August 2023
(This article belongs to the Special Issue Multi-Agent Systems of Competitive and Cooperative Interaction)

Abstract: In previous papers of the authors, a generalized evolutionary approach was developed for the analysis of popular inspection and corruption games. Namely, a two-level hierarchy was studied, where a local inspector I of a pool of agents (who may break the law) can be corrupted and is further controlled by the higher authority A. Here, we extend this two-level modeling by answering the following questions: (i) what levels of illegal profit r of violators and what levels of bribes α (the fraction of illegal profit asked as a bribe from a violator) of an inspector are feasible, that is, realizable in stable equilibria of generalized replicator dynamics; and (ii) what α can be optimal for a corrupted inspector aiming to maximize the total profit. Concrete settings that we have in mind are illegal logging, the sale of products of substandard quality, and tax evasion.

1. Introduction

The games of inspection and corruption are popular topics in game theory, with numerous applications including the problems of illegal logging, the sale of products of substandard quality, tax evasion, and arms control; see, e.g., monographs [1,2,3,4] and reviews [5,6].
Let us mention specifically the papers in which these problems were considered from an evolutionary point of view. In [7], very simple evolutionary stable strategies were applied to the setting of corruption games. In [8], an evolutionary approach was suggested specifically in the setting of illegal logging, where it was developed in the classical evolutionary setting of pairwise games. Namely, it was assumed that arbitrary pairs of players can hire inspectors that can control their pairwise agreement. In [9], inspection games were developed specifically in the framework of taxpayers, stressing the dynamics of information spreading.
In [10], a new generalized evolutionary approach to various classes of models was developed, the pressure and resistance framework, where a large number N of small players plays against a distinguished major player, or principal, I, who can influence the payoffs by a parameter b. Small players are supposed to have a finite collection of strategies $\{1, \dots, d\}$, so that the state of the system is given by a vector $x = (x_1, \dots, x_d)$, where $x_i$ is the fraction of players adhering to strategy i. A strategy i yields payoff $R_i(x,b)$ depending on the state x and the strategy b of I. The evolution of the distribution of strategies x develops in the direction of a better response to I via a certain Markov chain. It was shown in [10] that, as $N \to \infty$, (i) these Markov processes converge weakly to the deterministic process described by the kinetic equation (or generalized replicator dynamics, RD):
$\dot{x}_j = x_j \Bigl[ R_j(x,b) - \sum_i x_i R_i(x,b) \Bigr],$
and (ii) the rest points of this dynamics represent approximate Nash equilibria for the stationary game of N players with payoffs $R_i$. When the payoffs $R_i$ are quadratic in x, this kinetic equation reduces to the standard replicator dynamics of evolutionary game theory (see, e.g., [11]), which were used in [7,8] for corruption games. When $d = 2$, denoting by $R(x,b)$ and zero the payoffs of the first and second strategies, respectively (only the difference of payoffs matters for the dynamics, so fixing the second payoff at zero entails no loss of generality), system (1) clearly reduces to the single equation
$\dot{x} = x(1-x) R(x,b),$
where x is the fraction of players adhering to the first strategy.
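For readers who prefer a computational illustration, the following short Python sketch integrates an equation of the form (2) by forward Euler for a hypothetical payoff difference R(x, b); the function payoff_difference and all numerical values below are illustrative assumptions, not part of the model of this paper.

```python
# Minimal sketch (not from the paper): forward-Euler integration of the
# one-dimensional kinetic equation  x' = x (1 - x) R(x, b)  as in (2).
# The payoff difference R and all parameter values are illustrative
# assumptions, chosen only to show the typical convergence to a rest point.

def payoff_difference(x: float, b: float) -> float:
    """Hypothetical payoff advantage of strategy 1 over strategy 2."""
    return 1.0 - b - 2.0 * x   # decreasing in x, so an interior rest point exists


def integrate(x0: float, b: float, dt: float = 0.01, steps: int = 5000) -> float:
    """Forward-Euler integration of x' = x (1 - x) R(x, b)."""
    x = x0
    for _ in range(steps):
        x += dt * x * (1.0 - x) * payoff_difference(x, b)
    return x


if __name__ == "__main__":
    b = 0.2
    x_final = integrate(x0=0.05, b=b)
    # With R(x, b) = 1 - b - 2x the interior rest point is x* = (1 - b) / 2.
    print(f"x(T) = {x_final:.4f}, predicted rest point = {(1.0 - b) / 2:.4f}")
```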
In [12], this approach was developed for the mechanism design of governmental regulations in the setting of a two-level hierarchy, where a local inspector I can be corrupted and is further controlled by the higher authority A. The main tools of the mechanism design were the fine function for violations $f(r)$ (the amount of fine for a detected illegal profit of value r) and the fine function $F(r)$ for a corrupted inspector (the amount of fine for a detected amount r of bribes). It was shown that if the fine functions (both for criminal businesses and for corrupted inspectors) are proportional to the level of violations (which is common practice in certain regulations), that is, if $f(r) = f r$, $F(r) = F r$ with some constants f, F, which have the meaning of nominal fines per unit of illegal profit, then the stable rest points of the dynamics (or Nash equilibria of the related games) support the maximal possible level of both corruption and violation. The situation changes if a convex fine is introduced, namely the power-type functions $f(r) = f r^\gamma$, $F(r) = F r^\gamma$, with $\gamma > 1$ and positive constants f, F. In particular, it was shown that with quadratic fine functions one can effectively control the level of violations, which was referred to as the principle of quadratic fine. The main examples are illegal logging, the selling of substandard-quality products, and tax evasion. The necessity to analyze a two-level hierarchy arises from the question 'Who will guard the guardians?' of the Nobel Prize lecture of L. Hurwicz [13].
In [12], the levels of illegal profit r of violators and the level of bribes $\alpha \in (0,1)$ (fraction of illegal profit asked by a corrupted inspector as a bribe from a violator for not disclosing the violation) were considered as given exogenously, which was of course only a rough first step for the corresponding real-life modeling. In this paper, we aim at addressing the following crucial questions: (i) what levels of illegal profit r of violators and what level of bribes α of an inspector are feasible, that is, realizable in stable equilibria, and (ii) what α can be optimal for corrupted inspectors that aim at maximizing their total payoff. We present a detailed analysis for the simplest situation only. We also suggest several reasonable extensions, which we aim to analyze in subsequent publications.
The mathematical results obtained in this paper are not complicated. We specifically avoided lengthy expressions whenever possible. Our aim was to give a concise and very clear presentation of the starting point for the program of research on two-level corruption–inspection evolutionary games, leading to results that have a clear interpretation and a qualitative description that can be explained to a layman. Let us give a brief qualitative description of the results obtained here. We show that, for linear fines, corruption and violation are profitable under upper bounds for the coefficients f, F, which are of the order of the inverse of the fraction of agents investigated by the authority A. When these bounds hold, the feasible illegal profit r must be higher than a certain level (small theft is not effective), but has no upper bound (rob as much as possible). The possible profitable values of the bribe fraction α lie in a certain interval depending on the parameters of the model. The effective profit of corrupted inspectors turns out to be a unimodal function of α, and the maximal profit is attained at a unique value of α. This value usually lies somewhere in the middle of the feasible interval (a corrupted inspector should not be too greedy when aiming at sustainable bribe levels). In the case of convex fines, we have similar upper bounds for the coefficients f, F yielding necessary conditions for corruption. However, when these conditions hold, the possible illegal profit r of violators is bounded from above and from below. For quadratic fines, $\gamma = 2$, the possible r lie in a reasonably thin layer around the inverses of the nominal fine values F and f, which is quite intuitive. For $\gamma < 2$, these bounds increase with decreasing γ and tend to $\infty$ as $\gamma \to 1$. Optimal values of α usually lie in the middle of the feasible interval, as in the case of linear fines.
Remark 1. 
Of course, quadratic (or other power-type) fines represent only a mathematical idealization. In practice, they can be realized approximately by piecewise linear functions.
The further content of the paper is as follows. In the next section, we describe our model, also improving the presentation and the assumptions of paper [12] and indicating certain of its shortcomings. The next two sections give our main results on feasible pairs $(r, \alpha)$ and optimal α in the cases of linear and convex fine functions. In Section 5, we discuss several extensions of the simple model considered here. The main point is that, under the present assumptions, the domain of corruption is independent of the dynamic variable of the kinetic equations, and we obtain just two separate equations: in the domain of corruption and outside it. In more advanced modeling, the domain of corruption becomes dependent on the dynamic variable (it can be an interval below a certain effective frontier of corruption $v_c$, or above it, depending on the details of the model and the asymptotic behavior of the parameters), leading to switching dynamics with discontinuous coefficients. Only in these switching cases does the principle of quadratic fine (as stated in [12]) reveal its full meaning. Rough numerical illustrations are given in Section 6. In Section 7, some conclusions are drawn.

2. Main Model

We assume that there is a large number M of firms or agents, having two strategies: violate the law and refrain from violating. Violation leads to some illegal gain r.
Remark 2. 
In [12], we looked at a more complicated model with different levels of r given exogenously. Here, we are looking for levels r that are feasible in an equilibrium.
We denote by v the fraction of violating agents so that v M is their total number, and the average level of violation is v r .
A local inspector I is given a budget that is to be used to check the agents so that the violation of each violator is detected with some probability p that may depend on v. It is natural to assume that the budget for local crime detection must increase when the crime level increases. The simplest dependence of this kind is linear:
$p = p_0 + \beta v$
with some non-negative constants $p_0$ and β such that $p_0 + \beta \le 1$, which we shall adopt.
If an inspector is honest (strategy H), then a detected violator is deprived of the illegal gain r and is fined, the fine (paid to the government) being a fixed function $f(r)$, referred to as the fine function, which represents the main mechanism design of the government. It was suggested in [12] to use the power fine function $f r^\gamma$ with some $f > 0$ and $\gamma \ge 1$. It was found that using $\gamma > 1$ (and better still $\gamma = 2$) is essentially more effective than $\gamma = 1$, as it ultimately allows one to limit the scale of illegal gains in equilibrium. We shall make this observation more precise here. If an inspector is corrupted (strategy C), she just demands for herself the part $\alpha r$ of the illegal gain, with some constant $\alpha \in (0,1)$.
Additionally, there is a central authority A, which aims at fighting corruption and the illegal gains of violators by carrying out independent investigations. However, A invests a much smaller budget in local inquiries, so that the probability of detection of a violation by A is given by some δ, which is smaller than $p_0$.
Remark 3. 
One can interpret the “investigation power” p of inspectors in an intuitively similar way, denoting by p the fraction of all agents that are randomly chosen from the M agents and thoroughly checked (with a violation detected with probability 1). The same remark concerns the parameter δ. However, formally, this interpretation would lead to different distributions: one would have to work with hypergeometric distributions instead of the much simpler binomial distributions that suffice under our present interpretation of the parameters p, δ.
Remark 4. 
In [12], it was assumed that A makes just one random check (corresponding to $\delta = 1/M$ in our present notation), but the parameter p was misleadingly used with a different meaning in some places, leading to a slight distortion of the model.
If A detects a violating agent that was not detected by the local inspector I, then the agent just pays the fine $f r^\gamma$. If A detects a violator that was previously detected by I and paid the bribe $\alpha r$ to I, then the detected agent pays the prescribed fine $f r^\gamma$, but I is also punished. This punishment is assumed to take the standard form used in models of corruption games (see [5,6]): I has to return the illegal profit $\alpha r$ and pay a fine, given by some prescribed function of $\alpha r$, the fine function for corruption, which is the second major mechanism design of the government. This function can be the same as the fine function for usual violators, but it can also be different. We shall assume that it has the same power type, $F(\alpha r)^\gamma$, with a possibly different constant F. Moreover, I is supposed to have some standard salary w, but if detected in corruption, she obtains a lower-level job with some reserve salary $w_0$ and without further possibility for corruption.
Therefore, for a violator, the probability of being detected by the inspector is p, and the probability of being detected by the authority A is δ. These events are assumed independent. Thus, the average payoff for a violator under an honest inspector equals
$V_H = -\bigl[p + \delta(1-p)\bigr] f r^\gamma + (1-p)(1-\delta) r,$
and under a corrupted inspector
$V_C = p(1-\delta)(1-\alpha) r - p\delta\bigl(\alpha r + f r^\gamma\bigr) + (1-p)(1-\delta) r - (1-p)\delta f r^\gamma$
$= r(1-\delta - p\alpha) - \delta f r^\gamma.$
For these payoffs, we can write down the generalized replicator dynamics or kinetic Equation (2), which is quite intuitive: the fraction of violators increases when a violation is profitable. Namely, we have
$\dot{v} = v(1-v) V_C = v(1-v)\bigl[r(1-\delta - p\alpha) - \delta f r^\gamma\bigr] = v(1-v)\bigl[r\bigl(1-\delta - (p_0+\beta v)\alpha\bigr) - \delta f r^\gamma\bigr],$
under a corrupted inspector, and
$\dot{v} = v(1-v) V_H = v(1-v)\bigl[r(1-p)(1-\delta) - f r^\gamma\bigl(p + \delta(1-p)\bigr)\bigr]$
$= v(1-v)\bigl[r\bigl(1-(p_0+\beta v)\bigr)(1-\delta) - f r^\gamma\bigl((p_0+\beta v) + \delta(1-(p_0+\beta v))\bigr)\bigr]$
under an honest inspector.
Hence, the candidates for (nontrivial) rest points of the dynamics are obtained by solving the equations $V_C = 0$ and $V_H = 0$, respectively. They are equal to
$v_r^c = \frac{1 - \delta\bigl(1 + f r^{\gamma-1}\bigr) - p_0\alpha}{\beta\alpha}$
and
$v_r^h = \frac{1 - \bigl(1 + f r^{\gamma-1}\bigr)\bigl[\delta + (1-\delta)p_0\bigr]}{(1-\delta)\bigl(1 + f r^{\gamma-1}\bigr)\beta},$
respectively. These rest points are seen to be always stable. We call them candidates for rest points, as they have to belong to the interval (0, 1) in order to actually be such rest points.
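The following Python sketch (with purely illustrative parameter values, not taken from the paper) checks numerically that the closed-form candidates above are indeed zeros of $V_C$ and $V_H$.

```python
# Sketch (assumed illustrative parameter values): verify numerically that the
# closed-form candidates v_r^c and v_r^h are zeros of V_C and V_H.
from math import isclose

# Illustrative parameters (not taken from the paper's examples).
p0, beta, delta, f, gamma, r, alpha = 0.1, 0.5, 0.05, 1.0, 2.0, 3.0, 0.4

def p_of(v):                      # detection probability of the local inspector
    return p0 + beta * v

def V_C(v):                       # violator's payoff under a corrupted inspector
    p = p_of(v)
    return r * (1 - delta - p * alpha) - delta * f * r**gamma

def V_H(v):                       # violator's payoff under an honest inspector
    p = p_of(v)
    return r * (1 - p) * (1 - delta) - f * r**gamma * (p + delta * (1 - p))

# Closed-form candidates for the rest points, as in (7) and (8).
v_c = (1 - delta * (1 + f * r**(gamma - 1)) - p0 * alpha) / (beta * alpha)
v_h = (1 - (1 + f * r**(gamma - 1)) * (delta + (1 - delta) * p0)) \
      / ((1 - delta) * (1 + f * r**(gamma - 1)) * beta)

print(v_c, V_C(v_c))   # V_C(v_c) ~ 0; v_c may fall outside (0,1), in which case
print(v_h, V_H(v_h))   # it is only a candidate, not an actual rest point
assert isclose(V_C(v_c), 0.0, abs_tol=1e-9) and isclose(V_H(v_h), 0.0, abs_tol=1e-9)
```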
Remark 5. 
It is shown in [10] that the rest points of such kinetic equations represent ϵ-Nash equilibria for the corresponding game of M players, independently of any dynamics.
Let us look at the average payoffs of an inspector I. If she is honest, then it is just w. If she were corrupted, but there were no A, the average payoff would be $w + \alpha r\, p v M$.
With the presence of A, the additional (to w) payoff to I consists of three terms: the possible change of w to $w_0$, the possible fine, and the bribes that go undetected. Thus, it is equal to
$\alpha r \times (\text{average number of bribes} - \text{average number of detected cases})$
$-\, F(\alpha r)^\gamma \times (\text{average number of detected cases}) - (w - w_0) \times (\text{probability that a bribe is detected}).$
All these probabilities are obtained from the standard binomial distribution. The average number of bribes and the average number of detected bribes are $p v M$ and $p\delta v M$, respectively. The probability $q_d$ of the corruption being detected (at least one detected bribe) equals
$q_d = 1 - (1 - p\delta)^{vM}.$
One may use the first-order approximation $p\delta v M$ to this probability. This is an upper bound for $q_d$, which is reasonable whenever $p\delta v M$ is much less than 1. In this approximation, the additional payoff of a corrupted inspector equals
$P_{corr} = \alpha r\, p(1-\delta) v M - \bigl(w - w_0 + F(\alpha r)^\gamma\bigr) p\delta v M.$
The corruption is profitable for inspectors I if this additional payoff is positive, that is, inside the corruption domain $M_c$ (that does not depend on p and v) given by the condition
$\alpha r (1-\delta) \ge \delta\bigl(w - w_0 + F(\alpha r)^\gamma\bigr).$
It is seen directly that if $\gamma = 1$ (proportional fine), then the domain is empty whenever $\delta(1+F) \ge 1$. Otherwise, the corruption domain is given by the condition
$\alpha r \ge \frac{\delta(w - w_0)}{1 - \delta(1+F)},$
yielding no upper bound on the illegal payoff. If $\gamma > 1$, the corruption domain is bounded for any values of the other parameters!
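As a small illustration, the following Python sketch tests membership in the corruption domain (11) and evaluates the linear-fine threshold above; all numerical values are assumptions chosen only for demonstration.

```python
# Sketch: membership test for the corruption domain M_c in (11), with the
# closed-form linear-fine bound for comparison. Parameter values are assumptions.

def in_corruption_domain(alpha, r, delta, F, w, w0, gamma):
    """True if alpha*r*(1 - delta) >= delta*(w - w0 + F*(alpha*r)**gamma)."""
    x = alpha * r
    return x * (1 - delta) >= delta * (w - w0 + F * x**gamma)

# Linear fines (gamma = 1): the domain is empty when delta*(1 + F) >= 1, and
# otherwise it is the half-line alpha*r >= delta*(w - w0) / (1 - delta*(1 + F)).
delta, F, w, w0 = 0.05, 10.0, 1.0, 0.2
threshold = delta * (w - w0) / (1 - delta * (1 + F))
print("linear-fine threshold for alpha*r:", threshold)
print(in_corruption_domain(alpha=0.5, r=4 * threshold, delta=delta,
                           F=F, w=w, w0=w0, gamma=1.0))  # True: alpha*r = 2*threshold
```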

3. Results for Linear Fines

For $\gamma = 1$, the formulas above simplify considerably. We obtain
$v_r^c = \frac{1 - \delta(1+f) - p_0\alpha}{\beta\alpha},$
$v_r^h = \frac{1 - (1+f)\bigl[\delta + (1-\delta)p_0\bigr]}{(1-\delta)(1+f)\beta}.$
The key point of the linear fine is that these values do not depend on r. Consequently, whenever unbounded r are allowed in equilibrium, their average values $r v_r^c$ or $r v_r^h$ are also unbounded.
Theorem 1. 
The necessary conditions for corruption in a stable equilibrium are
$\delta(1+f) < 1, \qquad \delta(1+F) < 1.$
If they hold, the necessary and sufficient condition is written down as
$\frac{\delta(w - w_0)}{1 - \delta(1+F)}\,\frac{1}{r} < \alpha < \min\left(1,\ \frac{1 - \delta(1+f)}{p_0}\right).$
This interval of the values of α is not empty if
$r > \max\left(\frac{\delta(w - w_0)}{1 - \delta(1+F)},\ \frac{\delta(w - w_0)\, p_0}{\bigl[1 - \delta(1+F)\bigr]\bigl[1 - \delta(1+f)\bigr]}\right).$
Proof. 
For the existence of corruption in a stable equilibrium, the domain of corruption must be nonempty and $v_r^c$ must be positive for at least some α, yielding (15). Moreover, the fraction $\alpha \in (0,1)$ must belong to the domain of corruption and ensure that $v_r^c > 0$, implying (16). Evidently, interval (16) is not empty exactly when (17) holds. □
If the necessary conditions (15) for corruption are fulfilled (which holds for sufficiently small δ), then the average level of illegal profit in equilibrium (per violating agent) is $r \min(1, v_r^c)$, which is unbounded.
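A minimal Python sketch of the feasibility conditions of Theorem 1 is given below; the parameter values are illustrative assumptions.

```python
# Sketch (illustrative parameters): feasibility conditions of Theorem 1 for
# linear fines. Returns the interval of feasible bribe fractions alpha, or None.

def feasible_alpha_interval(r, delta, f, F, w, w0, p0):
    """Interval (16) of Theorem 1, or None if corruption is not feasible."""
    if delta * (1 + f) >= 1 or delta * (1 + F) >= 1:      # conditions (15)
        return None
    lower = delta * (w - w0) / ((1 - delta * (1 + F)) * r)
    upper = min(1.0, (1 - delta * (1 + f)) / p0)
    return (lower, upper) if lower < upper else None

print(feasible_alpha_interval(r=5.0, delta=0.05, f=1.0, F=10.0,
                              w=1.0, w0=0.2, p0=0.1))   # a nonempty interval
print(feasible_alpha_interval(r=0.05, delta=0.05, f=1.0, F=10.0,
                              w=1.0, w0=0.2, p0=0.1))   # None: r below the bound (17)
```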
If $\delta(1+f) > 1$, then $v_r^h < 0$, and therefore there is neither corruption nor illegal profit in equilibrium. If $\delta(1+f) < 1 < \delta(1+F)$, then there is no corruption, but illegal profit may exist in equilibrium whenever
$(1+f)\bigl[\delta + (1-\delta) p_0\bigr] < 1,$
in which case its average is $r \min(1, v_r^h)$.
Let us look at the dependence of the additional payoff of inspector I on α. By (10), this payoff in equilibrium is given by the expression
$P_{corr} = M\alpha r\, p(1-\delta) v_r^c - M\bigl(w - w_0 + F\alpha r\bigr) p\delta v_r^c$
$= M v_r^c\,(p_0 + \beta v_r^c)\bigl[\alpha r\bigl(1 - \delta(1+F)\bigr) - \delta(w - w_0)\bigr].$
The behavior of this payoff is the result of two opposite effects: it is increasing in α for every fixed v, but $v_r^c$ is a decreasing function of α.
Theorem 2. 
The maximum of payoff (18) on interval (16) is attained at the point $\alpha_0 = \min(1, 1/\xi_0)$, where
$\xi_0 = \frac{1}{2}\left(\frac{p_0}{1 - \delta(1+f)} + \frac{r\bigl(1 - \delta(1+F)\bigr)}{\delta(w - w_0)}\right).$
Proof. 
Substituting the value of $v_r^c$ yields $p_0 + \beta v_r^c = \bigl(1 - \delta(1+f)\bigr)/\alpha$, and therefore
$P_{corr} = \frac{M}{\beta}\bigl[1 - \delta(1+f)\bigr]\,\frac{1 - \delta(1+f) - p_0\alpha}{\alpha^2}\,\bigl[\alpha r\bigl(1 - \delta(1+F)\bigr) - \delta(w - w_0)\bigr].$
Payoff (20) is a quadratic function of $\xi = 1/\alpha$ with roots at the points
$\xi_1 = \frac{p_0}{1 - \delta(1+f)}, \qquad \xi_2 = \frac{r\bigl(1 - \delta(1+F)\bigr)}{\delta(w - w_0)}.$
Hence, its maximum is attained at the point $\xi_0 = (\xi_1 + \xi_2)/2$, as claimed. □
This result shows that, in order to maximize her payoff, inspector I must not aim at the highest bribe fraction α (unless 1 is the upper bound of interval (16)), but should choose it reasonably in the middle of the feasible interval.
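The following sketch computes this payoff-maximizing bribe fraction for illustrative (assumed) parameter values.

```python
# Sketch: the payoff-maximizing bribe fraction alpha_0 of Theorem 2 for linear
# fines, alpha_0 = min(1, 1/xi_0), with xi_0 the midpoint of the two roots in
# the variable xi = 1/alpha. All numbers are illustrative assumptions.

def optimal_alpha(r, delta, f, F, w, w0, p0):
    xi1 = p0 / (1 - delta * (1 + f))
    xi2 = r * (1 - delta * (1 + F)) / (delta * (w - w0))
    xi0 = 0.5 * (xi1 + xi2)
    return min(1.0, 1.0 / xi0)

alpha0 = optimal_alpha(r=5.0, delta=0.05, f=1.0, F=10.0, w=1.0, w0=0.2, p0=0.1)
print(f"optimal bribe fraction alpha_0 = {alpha0:.3f}")
```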

4. Results for Quadratic and More General Convex Fines

In the case $\gamma = 2$, Formulas (7) and (8) yield
$v_r^c = \frac{1 - \delta(1 + f r) - p_0\alpha}{\beta\alpha}$
and
$v_r^h = \frac{1 - (1 + f r)\bigl[\delta + (1-\delta)p_0\bigr]}{(1-\delta)(1 + f r)\beta}.$
The domain of corruption is given by the quadratic inequality
$\delta F(\alpha r)^2 - (1-\delta)\alpha r + \delta(w - w_0) \le 0.$
Hence, a necessary condition for the possibility of corruption is that the discriminant of the corresponding quadratic equation is positive:
$4F\delta^2(w - w_0) < (1-\delta)^2.$
If this holds, then (23) is fulfilled whenever
$R_1 < \alpha r < R_2$
with
$R_{1,2} = \frac{1-\delta \mp \sqrt{(1-\delta)^2 - 4F\delta^2(w - w_0)}}{2F\delta},$
the roots of the corresponding quadratic equation.
On the other hand, the condition $v_r^c > 0$ yields the necessary conditions
$r < \frac{1-\delta}{\delta f} \quad\text{and}\quad \alpha < \frac{1 - \delta(1 + f r)}{p_0}.$
Theorem 3. 
The necessary conditions for the existence of corruption in a stable equilibrium are the inequalities
$4F\delta^2(w - w_0) < (1-\delta)^2,$
and
$(1-\delta)^2 > 4\delta f p_0 R_1.$
If conditions (27) and (28) hold, then the corruption in equilibrium is possible for r satisfying
$R_1 < r < \frac{1-\delta}{\delta f} \quad\text{and}\quad \frac{R_1}{r} < \frac{1 - \delta(1 + f r)}{p_0},$
and α lying in the interval
$\frac{R_1}{r} < \alpha < \min\left(1,\ \frac{R_2}{r},\ \frac{1 - \delta(1 + f r)}{p_0}\right).$
Proof. 
Condition (30) follows from (25) and (26). Condition (29) is an evident necessary and sufficient condition for interval (30) to be nonempty.
The second condition in (29) can be rewritten as
$\delta f r^2 - (1-\delta) r + p_0 R_1 < 0.$
Numbers r satisfying the last inequality exist whenever there exist real roots $r_1 < r_2$ of the corresponding quadratic equation, which in turn occurs whenever the discriminant is positive, that is, when (28) holds. □
Remark 6. 
(i) In the theorem, we did not specify precisely the conditions for the set given by (29) to be nonempty. This is the condition for the interval given by the first condition of (29) and $(r_1, r_2)$ to have a nonempty intersection. Explicit formulas are not very revealing here. However, some simple sufficient conditions can be written. For instance, under the natural assumption $f(1-\delta) < 2F(1-\delta - p_0)$, interval (29) is not empty and contains the interval $(R_1, \min\{R_2, (1-\delta-p_0)/\delta f\})$. (ii) Condition (29) is a bit cumbersome and not intuitive. A detailed analysis allows one to write down some simple sufficient conditions. For instance, it holds if either $2 f p_0 < F$ (the punishment for bribes is not essentially lower than that for just the illegal profit), or
$8 f p_0 \delta^2 (w - w_0) < (1-\delta)^2.$
The main point of Theorem 3 is that conditions (27) and (28) yield very restrictive bounds on the parameters of the model that allow for corruption in a stable equilibrium. And even when they are fulfilled, the possible levels of illegal profit r are bounded (unlike in the case of linear fines) from above and from below. In fact, they belong to a reasonably thin layer around the values $1/\delta F$ and $1/\delta f$. Roughly speaking, the illegal profit r in equilibrium is of the order of the inverses of the nominal fines (per unit of illegal profit) scaled by the inverse of the fraction δ inspected by A, which is quite intuitive.
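The bounds of Theorem 3 are easy to tabulate numerically; a sketch with assumed parameter values follows.

```python
# Sketch (illustrative parameters): the bounds entering Theorem 3 for quadratic
# fines. Computes R_1, R_2 and the crude upper bound (1 - delta)/(delta f) on r;
# the full admissible range of r also requires the second condition in (29).
# Returns None if the discriminant condition (27) fails.
from math import sqrt

def quadratic_fine_bounds(delta, f, F, w, w0):
    disc = (1 - delta) ** 2 - 4 * F * delta ** 2 * (w - w0)
    if disc <= 0:                       # condition (27) violated: no corruption
        return None
    R1 = ((1 - delta) - sqrt(disc)) / (2 * F * delta)
    R2 = ((1 - delta) + sqrt(disc)) / (2 * F * delta)
    r_upper = (1 - delta) / (delta * f)           # from v_r^c > 0, see (26)
    return R1, R2, r_upper

print(quadratic_fine_bounds(delta=0.05, f=1.0, F=10.0, w=1.0, w0=0.2))
print(quadratic_fine_bounds(delta=0.3,  f=1.0, F=10.0, w=1.0, w0=0.2))  # None
```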
The analog of Theorem 2 can be obtained, but it involves more complicated (and not very revealing) expressions. The qualitative behavior is similar, but it depends on rather nontrivial conditions on the main parameters, which distinguish the cases in which it is profitable for inspector I to choose the highest possible value of α from those in which it is better to stay somewhere in the middle of interval (30).
The results for $\gamma \in (1, 2]$ are similar (with more cumbersome formulas). The domain of corruption is given by the inequality
$\delta F(\alpha r)^\gamma - (1-\delta)\alpha r + \delta(w - w_0) \le 0.$
The extremal point of the convex function on the l.h.s. of this inequality is
$(\alpha r)_c = \left(\frac{1-\delta}{\gamma\delta F}\right)^{1/(\gamma-1)}.$
Hence, the set of α satisfying (31) is not empty whenever
$\delta F (\alpha r)_c^\gamma - (1-\delta)(\alpha r)_c + \delta(w - w_0) \le 0,$
or
$F^{1/(\gamma-1)}(w - w_0) \le (\gamma-1)\left(\frac{1-\delta}{\gamma\delta}\right)^{\gamma/(\gamma-1)}.$
This is the necessary condition for the existence of corruption in a stable equilibrium that extends (24) to arbitrary γ.
The condition $v_r^c > 0$ yields
$r^{\gamma-1} < \frac{1-\delta}{\delta f}, \qquad \alpha < \frac{1 - \delta\bigl(1 + f r^{\gamma-1}\bigr)}{p_0}.$
All other conditions generalize accordingly. The qualitative result is that, under the necessary conditions for corruption, the illegal profit r in equilibrium is roughly around the inverses $1/\delta f$ and $1/\delta F$ of the nominal (per unit of illegal profit) fines inspected by A, but now (unlike in the case $\gamma = 2$) taken to the power $1/(\gamma-1)$, which is greater than 1 for $\gamma < 2$ and tends to $\infty$ as $\gamma \to 1$.
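The following sketch evaluates this necessary condition for general γ and checks that at $\gamma = 2$ it coincides with the quadratic-fine condition (24); the parameter values are assumptions.

```python
# Sketch: the necessary condition extending (24) to general gamma in (1, 2],
#   F^{1/(gamma-1)} (w - w0) <= (gamma - 1) ((1 - delta)/(gamma*delta))^{gamma/(gamma-1)}.
# For gamma = 2 it coincides with 4 F delta^2 (w - w0) <= (1 - delta)^2.
# Parameter values are illustrative assumptions.

def corruption_possible(gamma, delta, F, w, w0):
    lhs = F ** (1 / (gamma - 1)) * (w - w0)
    rhs = (gamma - 1) * ((1 - delta) / (gamma * delta)) ** (gamma / (gamma - 1))
    return lhs <= rhs

delta, F, w, w0 = 0.05, 10.0, 1.0, 0.2
print(corruption_possible(2.0, delta, F, w, w0),
      4 * F * delta ** 2 * (w - w0) <= (1 - delta) ** 2)   # same condition at gamma = 2
print(corruption_possible(1.2, delta, F, w, w0))           # the condition for gamma = 1.2
```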

5. Variants of the Model

The general statement about the ineffectiveness of linear fines seems to be sufficiently robust under variants of the model. However, details may differ essentially under various modifications of its assumptions. Above, we looked carefully at the simplest possible setting. Let us point out some quite reasonable modifications, giving brief comments on the essential differences and leaving details for further work.
1. Within our present assumptions, the corruption domain (11) does not depend on v, and thus we have just two separate generalized replicator dynamics (5) and (6). One can argue that if the “investigation power” p of inspectors is supposed to increase in v (as we assume in (3)), it would be natural to assume that the “investigation power” δ of authority A increases in the same way. Then the corruption domain (11) would depend on v: it would be given by the interval $v < v_c$ with some effective frontier of corruption $v_c$. Therefore, corruption would be effective for not very large v, reflecting the intuition that, for larger v, the probability of discovering corruption would be high enough to outweigh the benefits of bribes. In the case of a v-dependent domain of corruption, the analysis of stable equilibria becomes more involved, as the replicator dynamics are not separated, but become switching dynamics at the effective frontier of corruption $v_c$. Such switching was analyzed in [12], though with some minor errors in modeling. We expect that only for v-dependent δ is the special effectiveness of $\gamma = 2$ (as compared to other $\gamma > 1$) revealed (see more on this point below in variant 3).
2. When $p\delta v M$ is of order 1 or larger, the approximation $p\delta v M$ to $q_d$ is obviously not appropriate. For large $vM$, one can use the Poisson approximation, yielding
$q_d \approx 1 - e^{-p\delta v M}.$
Therefore, up to exponentially small terms, $q_d \approx 1$. This means that corruption would most probably be detected, but an inspector I may not bother much about it, due to the immense profit from the corruption. In this approximation, the corruption domain is given by the condition
$\alpha r (1-\delta) \ge \frac{w - w_0}{p v M} + \delta F(\alpha r)^\gamma.$
Here, the corruption domain is again v-dependent, but it is given by an interval of the upper values $v > v_c$, with some effective frontier of corruption $v_c$. In this case, corruption is profitable for larger v, unlike in the previous case. This behavior reflects the intuition that, if I does not care much about being detected and fined, I just wishes to extract as many bribes as possible from violating agents.
3. If $vM$ is large, but detection is not desirable for an inspector, I may introduce another tool of control to reduce this probability. Namely, I can choose to take bribes only from a fraction ω of detected violators. If ω is small enough to make $\omega p\delta v M$ much smaller than 1, the approximation $\omega p\delta v M$ is valid for the probability of detection, so that the additional illegal payoff to I equals
$\omega\alpha r\, p(1-\delta) v M - \bigl(w - w_0 + F(\alpha r)^\gamma\bigr)\,\omega p\delta v M,$
and the corruption domain is again given by (11). However, the introduction of ω changes $V_C$ and $v_r^c$. Namely, in this case, (4) changes to
$V_C = r\bigl[1 - \delta - p\bigl(\omega\alpha + (1-\delta)(1-\omega)\bigr)\bigr] - f r^\gamma\bigl[\delta + (1-\delta)(1-\omega)p\bigr].$
This leads to essential modifications of the analysis, because the rest point $v_r^c$ acquires a term of order $f r^\gamma$ in the denominator (unlike our (21)). This means that the average illegal payoff $r v_r^c$ in equilibrium would be of order $r^{2-\gamma}$ for large r, which is universally bounded for $\gamma = 2$ (but not for $\gamma < 2$), expressing the principle of quadratic fines, as declared in [12].
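A sketch of this modified violator payoff and of its rest point in v (found by bisection, with assumed parameter values) is given below.

```python
# Sketch of variant 3 (illustrative parameters): the modified violator payoff
# when a corrupted inspector takes bribes from only a fraction omega of the
# detected violators, and the corresponding rest point found by bisection.

def V_C_omega(v, r, alpha, omega, delta, f, gamma, p0, beta):
    p = p0 + beta * v
    return (r * (1 - delta - p * (omega * alpha + (1 - delta) * (1 - omega)))
            - f * r**gamma * (delta + (1 - delta) * (1 - omega) * p))

def rest_point(r, alpha, omega, delta, f, gamma, p0, beta, tol=1e-10):
    lo, hi = 0.0, 1.0
    if V_C_omega(lo, r, alpha, omega, delta, f, gamma, p0, beta) <= 0:
        return None                     # no interior rest point in (0, 1)
    if V_C_omega(hi, r, alpha, omega, delta, f, gamma, p0, beta) >= 0:
        return None
    while hi - lo > tol:                # V_C_omega is decreasing in v
        mid = 0.5 * (lo + hi)
        if V_C_omega(mid, r, alpha, omega, delta, f, gamma, p0, beta) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(rest_point(r=3.0, alpha=0.4, omega=0.3, delta=0.05, f=1.0, gamma=2.0,
                 p0=0.1, beta=0.5))
```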

6. Numeric Examples

In this section, we give some numeric examples with two objectives: to demonstrate (i) the domain of applicability of our main approximation, valid for $p\delta v M < 1$, and (ii) the linear versus quadratic fine-function dichotomy, that is, why a quadratic fine yields more effective control over the illegal profit of agents and the corruption of inspectors. The game parameters used in the examples are socially averaged and meant to be broadly applicable to real situations.
In our main model, see (10), we have taken the approximation $q_d \approx p\delta v M$ for the probability of a corrupted inspector being detected. This approximation is reasonable for small $p\delta v M$.
To illustrate this approximation further, let us take
$\delta = 0.1, \qquad p = 0.4, \qquad v = 0.2.$
If $M = 20$, then $p\delta v M = 0.16$ and $q_d = 0.15$, which seems to be a reasonable approximation (6% error). If $M = 100$, then the approximation $p\delta v M$ to $q_d$ equals 0.8. This is close to 1 and differs essentially from the exact value $q_d = 0.558$.
When M increases, we expect approximation (33) to become more appropriate. In fact, when $M = 200$, $q_d = 0.804$, while the Poisson approximation is 0.798. For this range of parameters, one can employ the second variant of the model, sketched in the previous section.
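The comparison above can be reproduced with the following short script (the values of δ, p, v are those used above).

```python
# Sketch reproducing the detection-probability comparison above: the exact
# q_d = 1 - (1 - p*delta)^{vM}, its first-order approximation p*delta*v*M,
# and the Poisson approximation 1 - exp(-p*delta*v*M).
from math import exp

delta, p, v = 0.1, 0.4, 0.2

for M in (20, 100, 200):
    lam = p * delta * v * M
    exact = 1 - (1 - p * delta) ** (v * M)
    first_order = lam
    poisson = 1 - exp(-lam)
    print(f"M={M:4d}  exact={exact:.3f}  first-order={first_order:.3f}  Poisson={poisson:.3f}")
```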
Let us now illustrate our results with some rough estimates. Let us start with linear fine functions, with f = 1 and F = 10 .
Remark 7. 
These numbers are not taken totally out of the blue. In Russia, tax avoidance is usually fined by linear fine functions with $f = 0.4$ or $f = 0.2$, and in the UK by linear fine functions with f of value up to 2 (with variations depending on various circumstances). The fines for corruption vary over much broader intervals. Under certain circumstances, Russian law uses linear fine functions with F between 10 and 20.
By Theorem 1, to avoid illegal profit and corruption with such tools of mechanism design, authority A has to have an efficiency δ higher than about 10%. If it is less than this value, equilibria will support arbitrarily large illegal profit r, as long as r is essentially higher than some multiple of $w - w_0$ (the salary loss of a detected inspector I). Even for δ close to 1/2 (an efficiency of about 50%), when stable equilibria no longer support corruption of inspectors, they still support unbounded illegal profit of agents.
On the other hand, if one were to use quadratic fine functions with, say, the same f and F, one would obtain, by Theorem 3, (roughly) the condition $40(w - w_0)\delta^2 > 1$ ensuring the absence of corruption in equilibria. But if this (restrictive) bound does not hold, corruption and illegal profit r will be possible, but in a controllable way. Namely, r will be bounded from above, roughly by $1/\delta$, and will actually belong to a rather restrictive domain. Exact intervals for r and the feasible values of the bribe fraction α can be obtained from Theorems 1 and 3, and the payoff-maximizing α (for linear fines) from Theorem 2.
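The rough estimates of this section can be summarized in the following sketch; the value of $w - w_0$ is an assumption made only for illustration.

```python
# Sketch of the rough estimates above (f = 1, F = 10): the efficiency delta
# needed to rule out corruption and violation under linear fines, and the
# quadratic-fine bound on feasible illegal profit. w - w0 is an assumption.
f, F = 1.0, 10.0
w_minus_w0 = 0.8                       # assumed salary loss of a detected inspector

delta_no_corruption = 1.0 / (1.0 + F)  # linear fines: need delta*(1+F) >= 1
delta_no_violation  = 1.0 / (1.0 + f)  # linear fines: need delta*(1+f) >= 1
print(f"delta >= {delta_no_corruption:.3f} rules out corruption (about 10%)")
print(f"delta >= {delta_no_violation:.3f} rules out illegal profit")

# Quadratic fines: corruption is impossible when 4 F delta^2 (w - w0) >= (1 - delta)^2,
# roughly 40 (w - w0) delta^2 >= 1 for small delta; otherwise r < (1 - delta)/(delta f).
delta = 0.1
print("corruption excluded:", 4 * F * delta**2 * w_minus_w0 >= (1 - delta)**2)
print("upper bound on r   :", (1 - delta) / (delta * f))
```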

7. Conclusions and Further Perspectives

In Section 5, we presented several natural (and less trivial) extensions of the model, indicating perspectives for meaningful further analysis and pointing out a rather rich variety of possible new quantitative results supported by clear real-life intuition. Also, a deeper analysis is required for a systematic discussion of heterogeneous violators that may choose different levels r of illegal profit.
As one of the limitations of our models, we can mention the assumption that a bribe is taken in an amount strictly less than the illegal profit itself. This restriction holds true in many cases, but it is not universal. One may well imagine that an agent, in order to avoid criminal liability, can be ready, under pressure from a corrupted inspector, to give away amounts greater than the discovered illegal income.
Another principal limitation of the model is the assumption of predetermined behavior of homogeneous small players, which is the hallmark of the evolutionary approach and its extensions. In the alternative framework of the popular mean field games, small players become rational optimizers; see, e.g., [3,14] and references therein for general background on mean field games. The application of these ideas to corruption games was developed in [3,15], though not yet for a two-level hierarchy.

Author Contributions

Methodology, V.N.K.; Investigation, V.N.K. and D.V.V.; Writing—original draft, V.N.K. and D.V.V.; Writing—review & editing, D.V.V. All authors have read and agreed to the published version of the manuscript.

Funding

The article was prepared in the framework of a research grant funded by the Ministry of Science and Higher Education of the Russian Federation (grant ID: 075-15-2020-928).

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Avenhaus, R.; Krieger, T. Inspection Games over Time. Fundamental Models and Approaches; Forschungszentrum Jülich GmbH Zentralbibliothek: Jülich, Germany, 2020.
  2. Rose-Ackerman, S. Corruption and Government: Causes, Consequences and Reforms; Cambridge University Press: Cambridge, UK, 1999.
  3. Kolokoltsov, V.N.; Malafeyev, O.A. Many Agent Games in Socio-Economic Systems: Corruption, Inspection, Coalition Building, Network Growth, Security; Springer Series in Operations Research and Financial Engineering; Springer Nature: Berlin/Heidelberg, Germany, 2019.
  4. Vasin, A.A. Noncooperative Games in Nature and Society; MAKS Press: Moscow, Russia, 2005. (In Russian)
  5. Aidt, T.S. Economic analysis of corruption: A survey. Econ. J. 2003, 113, F632–F652.
  6. Jain, A.K. Corruption: A review. J. Econ. Surv. 2001, 15, 71–121.
  7. Mishra, A. Persistence of Corruption: Some Theoretical Perspectives. World Dev. 2006, 34, 349–358.
  8. Lee, J.-H.; Sigmund, K.; Dieckmann, U.; Iwasa, Y. Games of corruption: How to suppress illegal logging. J. Theor. Biol. 2015, 367, 1–13.
  9. Gubar, E.; Kumacheva, S.; Zhitkova, E.; Kurnosykh, Z.; Skovorodina, T. Modelling of Information Spreading in the Population of Taxpayers: Evolutionary Approach. Contrib. Game Theory Manag. 2017, 10, 100–128.
  10. Kolokoltsov, V. The evolutionary game of pressure (or interference), resistance and collaboration. Math. Oper. Res. 2017, 42, 915–944.
  11. Webb, J. Game Theory. Decisions, Interactions and Evolutions; Springer: Berlin/Heidelberg, Germany, 2007.
  12. Kolokoltsov, V.N. Inspection—corruption game of illegal logging and other violations: Generalized evolutionary approach. Mathematics 2021, 9, 1619.
  13. Hurwicz, L. But Who Will Guard the Guardians? Nobel Prize Lecture 2007. Available online: www.nobelprize.org (accessed on 1 August 2023).
  14. Bensoussan, A.; Frehse, J.; Yam, P. Mean Field Games and Mean Field Type Control Theory; Springer: Berlin/Heidelberg, Germany, 2013.
  15. Katsikas, S.; Kolokoltsov, V.; Yang, W. Evolutionary inspection and corruption games. Games 2016, 7, 31.