Article

Expected Utility Optimization with Convolutional Stochastically Ordered Returns

by Romain Gauchon 1,† and Karim Barigou 2,*,†
1 ISFA, Université Lyon 1, UCBL, LSAF EA2429, F-69007 Lyon, France
2 École d’actuariat, Université Laval, 2425, Rue de l’Agriculture, Québec, QC G1V 0A6, Canada
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Risks 2024, 12(6), 95; https://doi.org/10.3390/risks12060095
Submission received: 7 May 2024 / Revised: 5 June 2024 / Accepted: 11 June 2024 / Published: 14 June 2024

Abstract:
Expected utility theory is critical for modeling rational decision making under uncertainty, guiding economic agents as they seek to optimize outcomes. Traditional methods often require restrictive assumptions about underlying stochastic processes, limiting their applicability. This paper expands the theoretical framework by considering investment returns modeled by a stochastically ordered family of random variables under the convolution order, including Poisson, Gamma, and exponential distributions. Utilizing fractional calculus, we derive explicit, closed-form expressions for the derivatives of expected utility for various utility functions, significantly broadening the potential for analytical and computational applications. We apply these theoretical advancements to a case study involving the optimal production strategies of competitive firms, demonstrating the practical implications of our findings in economic decision making.

1. Introduction

In numerous economic settings, agents face a fundamental challenge: optimizing expected utility within an uncertain environment by adjusting control parameters. This decision-making process demands a careful evaluation of potential outcomes and their impacts.
This issue is prevalent across various economic domains, including firms operating under uncertainty (Klemperer and Meyer 1989; Sandmo 1971), risk management (Courbage 2001; Ehrlich and Becker 1972; Lee 1998), labor market dynamics (Altonji 1993), and workforce development (Loewenstein and Spletzer 1998). Traditional models often assume that agents have complete foresight regarding the consequences of their decisions, an assumption that seldom holds true in practice.
In reality, decisions are frequently made under conditions of significant uncertainty. Agents must often rely on estimates with inherent inaccuracies, leading to potential discrepancies between expected and actual outcomes. For instance, investments in employee training intended to enhance productivity can result in variable returns due to unpredictable factors like equipment failures or health emergencies.
These diverse models can be unified under a broader framework. We consider an agent guided by a utility function u, whose decisions hinge on a control parameter λ. This parameter may represent factors like prevention investments or training expenditures. Selecting the parameter induces a deterministic effect C(λ), which can be perceived as the cost associated with λ. Often, C(λ) takes the form C̃(λ) − W, with W denoting initial wealth and C̃ a real function of λ. External events introduce randomness into the return on investment through a family of random variables (X_λ)_{λ>0}, indexed by the parameter λ. The agent is then concerned with evaluating the following:
u(X_λ − C(λ)). (1)
The manner in which the agent evaluates expression (1) has been explored in various theoretical frameworks. One of the simplest approaches is the expected utility theory introduced by Morgenstern and Von Neumann (1953). In this model, agents aim to find the following:
λ* = argmax_λ E[u(X_λ − C(λ))]. (2)
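To fix ideas, problem (2) can be sketched numerically under fully hypothetical assumptions of ours (Poisson returns X_λ, exponential utility u(x) = 1 − e^{−x}, quadratic cost C(λ) = cλ²); with these choices the expectation is available in closed form through the Poisson Laplace transform, so a simple grid search recovers the optimal λ*:

```python
import math

def expected_utility(lam: float, c: float = 0.1) -> float:
    # Hypothetical setup: u(x) = 1 - exp(-x), X_lam ~ Poisson(lam), C(lam) = c*lam**2.
    # E[u(X - C)] = 1 - exp(C(lam)) * E[exp(-X)] = 1 - exp(c*lam^2 + lam*(e^{-1} - 1)).
    return 1.0 - math.exp(c * lam**2 + lam * (math.exp(-1.0) - 1.0))

# grid search for lam_star = argmax_lam E[u(X_lam - C(lam))]
grid = [i / 1000 for i in range(0, 10001)]
lam_star = max(grid, key=expected_utility)

# analytic optimum: minimize c*lam^2 + lam*(e^{-1} - 1), i.e. lam = (1 - e^{-1}) / (2c)
lam_exact = (1.0 - math.exp(-1.0)) / (2 * 0.1)
```

The grid estimate agrees with the analytic optimum to the grid resolution; in general, of course, no closed form is available and the derivative formulas developed below become useful.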
To address this challenge, most previous works specify the precise relationship between X_λ and λ, even though such models may lack generality and flexibility. Bensalem et al. (2020) proposed a generalized model in which the variations of X_λ are controlled by assuming only that λ₁ < λ₂ implies that X_{λ₁} is stochastically dominated by X_{λ₂} in terms of the convolution order. However, the authors operated within the framework of the dual theory introduced by Yaari (1987), limiting the utility function to u(x) = x.
In this paper, we explore more complex utility functions that involve advanced mathematical tools. As demonstrated in Section 3, we utilize fractional calculus to derive the derivative of expected utility in the case of power utility functions. Among the various definitions of fractional calculus available (see Yang 2019), we adopt the Weyl (or Liouville–Weyl) approach, popularized by Cressie and Borkent (1986) to analyze moments of random variables of any order.
We assume that the family of random variables (X_λ) is ordered with respect to the convolution order. The main contribution of this paper lies in leveraging this assumption to derive two novel expressions for the derivative of E[u(X_λ − C(λ))] with respect to λ, applicable when u is either an entire function or a power utility function. To provide a practical example, we apply these formulas to concrete economic problems, elucidating the associated economic intuition. Furthermore, we present several new mathematical results that establish a close connection between the convolution order and cumulants.
The remainder of this article is structured as follows: In Section 2, we introduce the convolution order and its pertinent properties. Section 3.1 offers essential preliminary results, highlighting the natural relationship between cumulants and the convolution order. In Section 3.2, we derive two formulations for the derivative of expected utility. Section 4 applies these formulations to the problem faced by industrial firms seeking to maximize their profit (competitive firms under uncertainty, as formulated by Sandmo (1971)). Section 5 provides a discussion of the results.

2. The Convolution Order

In this section, we present the key results from the literature on the convolution order.
Let X and Y be two random variables. Their cumulative distribution functions are denoted by F_X and F_Y, respectively, and their Laplace transforms by L_X and L_Y (for all s > 0, we have L_X(s) = E(e^{−sX})). Given that we are dealing with random variables, the statement “variable Y is smaller than variable X” can have different interpretations. As a result, numerous partial orders have been introduced to facilitate comparisons between random variables.
The standard stochastic order is known as the first stochastic order. We say that the random variable Y is smaller than X in the first stochastic order, denoted as Y ≤_st X, if and only if, for all x ∈ ℝ, F_X(x) ≤ F_Y(x). An insightful way to understand this order is from an economic perspective: Y ≤_st X holds if, for all increasing functions u, E(u(Y)) ≤ E(u(X)). Consequently, according to expected utility theory, any rational agent will prefer the random variable X to Y, regardless of their level of risk aversion (Denuit et al. 2006).
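As a quick illustration of ours (not from the paper), two exponential distributions can be compared in the first stochastic order directly from their CDFs, and the expected-utility consequence can be sanity-checked with the increasing utility u(x) = √x, whose expectation under an exponential law has a closed form:

```python
import math

# Y ~ Exp(2) vs X ~ Exp(1): F_X(x) = 1 - exp(-x) <= F_Y(x) = 1 - exp(-2x) for x >= 0,
# hence Y <=_st X. Then E[u(Y)] <= E[u(X)] for any increasing u.
cdf_gap_ok = all(1 - math.exp(-x) <= 1 - math.exp(-2 * x)
                 for x in [0.1 * i for i in range(1, 100)])

# For u(x) = sqrt(x) and Exp(rate): E[sqrt(X)] = Gamma(3/2) / sqrt(rate).
eu_X = math.gamma(1.5) / math.sqrt(1.0)   # X ~ Exp(1)
eu_Y = math.gamma(1.5) / math.sqrt(2.0)   # Y ~ Exp(2)
```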
In a deterministic setting, another intuitive way to view y as smaller than x is by considering the difference between the two quantities. If x − y is positive, then y is smaller than x. In other words, there exists a non-negative value z such that x = y + z. In a stochastic context, such a difference-based approach can be captured with a new order: the stochastic convolution order.
Definition 1 
(Convolution order). A random variable Y is said to be smaller than X in the convolution order, denoted as Y ≤_conv X, if there exists a non-negative random variable Z, independent of Y, such that
X =_d Y + Z,
where =_d indicates that, for all x ∈ ℝ, F_X(x) = F_{Y+Z}(x).
This order was originally introduced by Shaked and Suarez-Llorens (2003). It implies that X is so much more preferable than Y that a positive payoff must be added to Y for an agent to choose Y over X. While this order seems intuitive, it has not received extensive attention in the literature. Examples of articles discussing the convolution order include Zhang (2018) and Castaño-Martínez et al. (2013).
Several common distributions can be ordered using the stochastic convolution order.
Lemma 1. 
The following properties hold:
  • If k ∈ ℝ₊, then X ≤_conv X + k.
  • If X follows an exponential distribution with parameter λ₁ and Y follows an exponential distribution with parameter λ₂ ≥ λ₁, then Y ≤_conv X.
  • If X follows a Gamma distribution with parameters (α₁, β) and Y follows a Gamma distribution with parameters (α₂, β), where α₂ ≤ α₁, then Y ≤_conv X.
  • If X follows a Poisson distribution with parameter λ₁ and Y follows a Poisson distribution with parameter λ₂ ≤ λ₁, then Y ≤_conv X.
Proof. 
The first point is immediate. The cases of the Poisson and Gamma distributions follow from the observation that an independent sum of Poisson (resp. Gamma) random variables follows the same distribution with the parameters summed. Therefore, we only prove the case of the exponential distribution, which will be relevant later. Let x ∈ ℝ₊. We have the following:
P(X ≤ x) = 1 − e^{−λ₁x} = P(Y + Z ≤ x),
which leads to the following:
1 − e^{−λ₁x} = ∫_0^x P(Z < x − t) λ₂ e^{−λ₂t} dt.
By a change of variable, we have the following:
1 − e^{−λ₁x} = ∫_0^x P(Z < z) λ₂ e^{−λ₂(x−z)} dz. (6)
Differentiating (6) with respect to x gives the following:
P(Z < x) = 1 − ((λ₂ − λ₁)/λ₂) e^{−λ₁x}.
Thus,
Z = 0 with probability 1 − (λ₂ − λ₁)/λ₂, and Z = K with probability (λ₂ − λ₁)/λ₂,
where K is a random variable exponentially distributed with parameter λ₁.  □
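The mixture obtained in the proof can be checked numerically (our own sketch): since Z is independent of Y, the identity X =_d Y + Z is equivalent to L_X(s) = L_Y(s) L_Z(s) for all s > 0, which holds to machine precision for the 0/Exp(λ₁) mixture above:

```python
import math

# X ~ Exp(l1), Y ~ Exp(l2), l2 >= l1. Claimed mixture for Z:
# Z = 0 w.p. 1 - (l2 - l1)/l2, Z ~ Exp(l1) w.p. (l2 - l1)/l2.
l1, l2 = 1.0, 3.0
p = (l2 - l1) / l2

def L_exp(rate, s):  # Laplace transform of an Exp(rate) variable
    return rate / (rate + s)

def L_Z(s):          # Laplace transform of the mixture (atom at 0 plus Exp(l1) part)
    return (1 - p) + p * L_exp(l1, s)

# Independence turns the sum Y + Z into the product of Laplace transforms.
max_gap = max(abs(L_exp(l1, s) - L_exp(l2, s) * L_Z(s))
              for s in [0.1 * i for i in range(1, 200)])
```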
The convolution order also possesses several valuable properties, two of which are presented in the following proposition. The proofs can be found in Shaked and Suarez-Llorens (2003).
Proposition 1. 
The following properties hold:
1. 
If X and Y are non-negative random variables, then Y ≤_conv X if and only if L_X(s)/L_Y(s) is a completely monotone function of s.
2. 
If Y ≤_conv X, then Y ≤_st X.

3. Optimizing the Expected Utility

As discussed in Section 1, it is imperative to elucidate how small variations of size h in the control parameter λ impact the resultant random variable X_{λ+h} to effectively address Problem (2). In this context, the convolution order emerges as a pertinent analytical tool. It furnishes an avenue to understand the ramifications of altering the control parameter by revealing the existence of an intermediate positive random variable that encapsulates the influence on the target random variable.
To this end, consider a family of random variables denoted as (X_λ)_{λ∈ℝ₊}. We assume that this family is increasing for the convolution order. In other words, for all λ, h ∈ ℝ₊, the relationship X_λ ≤_conv X_{λ+h} holds. In this expression, λ is the control parameter: for instance, augmenting the investment λ results in an amplified expected return on investment.
Given these assumptions, it naturally ensues that, for any λ and h in ℝ₊, there exists an intermediary variable Z_{λ,λ+h} such that X_{λ+h} =_d X_λ + Z_{λ,λ+h}. As previously mentioned, Z_{λ,λ+h} serves to capture the distinctions between X_{λ+h} and X_λ. Thus, our focus lies in comprehensively investigating the random variable Z_{λ,λ+h}, particularly its behavior as h tends towards zero (h → 0). This investigation aims to yield insights into the behavior of the derivative of the expected utility.

3.1. Some Preliminary Results

Utilizing the convolution order, for all λ, h ∈ ℝ₊, the random variable X_{λ+h} can be decomposed as the sum of two independent variables, X_λ and Z_{λ,λ+h}. Consequently, it is natural to consider risk measures ρ that satisfy ρ(X + Y) = ρ(X) + ρ(Y) for two independent variables X and Y. A well-known family of risk measures that adhere to this property is that of cumulants. We will show in Section 3.2 that cumulants can effectively capture the variations in the random variables (X_λ)_{λ>0}.
Definition 2 
(Cumulant-generating function). The cumulant-generating function of a random variable X is defined as follows:
g_X(t) = log(E(e^{tX})) = Σ_{n=1}^∞ κ_n(X) t^n / n!.
The n-th cumulant of X is denoted by κ_n(X) = g_X^{(n)}(0), where g_X^{(n)} represents the n-th derivative of the function g_X.
Cumulants possess several properties. Here, we present three of these properties without providing proofs (the first one is a direct consequence of the definition, the second one can be found in Smith (1995), and the third one is Theorem 12.1 in Gut (2013)). For a more comprehensive understanding, readers can refer to Gut (2013) for further details.
Proposition 2. 
Let n ∈ ℕ*. The cumulants satisfy the following properties:
1. 
If X and Y are independent random variables, then κ_n(X + Y) = κ_n(X) + κ_n(Y).
2. 
κ_n(X) = m_n(X) − Σ_{i=1}^{n−1} (n−1 choose i−1) κ_i(X) m_{n−i}(X), with m_n(X) = E(X^n) the n-th moment of X.
3. 
m_n(X) and κ_n(X) exist if and only if Σ_{m=1}^∞ m^{n−1} P(|X| ≥ m) < ∞.
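The moment-to-cumulant recursion in point 2 is straightforward to implement; as a sketch of ours (not the paper's code), we recover the well-known fact that every cumulant of a Poisson(λ) variable equals λ from its first three raw moments:

```python
import math

def cumulants_from_moments(moments):
    """Point 2 of Proposition 2: kappa_n = m_n - sum_{i=1}^{n-1} C(n-1, i-1) kappa_i m_{n-i}.
    `moments[n-1]` holds the n-th raw moment m_n."""
    kappas = []
    for n in range(1, len(moments) + 1):
        k_n = moments[n - 1] - sum(
            math.comb(n - 1, i - 1) * kappas[i - 1] * moments[n - i - 1]
            for i in range(1, n)
        )
        kappas.append(k_n)
    return kappas

# Poisson(lam): m_1 = lam, m_2 = lam + lam^2, m_3 = lam + 3 lam^2 + lam^3,
# and every cumulant kappa_n equals lam.
lam = 2.0
poisson_moments = [lam, lam + lam**2, lam + 3 * lam**2 + lam**3]
kappas = cumulants_from_moments(poisson_moments)
```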
From the second and third properties, it becomes evident that a necessary and sufficient condition for the existence of the n-th cumulant is the existence of the n-th moment. Therefore, for the rest of this paper, we will work under the following assumption:
Assumption 1. 
There exists Λ > 0 such that
Λ = min( sup{ λ ∈ ℝ₊* : for all n ∈ ℕ and t > 0, E(X_λ^n e^{tX_λ}) < ∞ }, λ_max ).
The quantity λ_max ∈ ℝ₊* represents the maximum investment, as it is impractical for anyone to invest an infinite amount of money. One illustrative example is when λ_max is the smallest λ for which C(λ) = 0. While Assumption 1 might appear slightly stronger than the mere assumption that, for all n ∈ ℕ, |E(X_λ^n)| < ∞, it proves to be useful in establishing Proposition 6.
Cumulants play a crucial role in our problem given that the derivative of the n-th cumulant of X_λ is intricately linked to the limit of the n-th cumulant of Z_{λ,λ+h}, as stated in the following proposition.
Proposition 3. 
If (X_λ) is a family of random variables increasing for the convolution order, then for all λ < Λ and all n ∈ ℕ*, we have the following:
∂κ_n(X_λ)/∂λ = lim_{h→0} κ_n(Z_{λ,λ+h})/h.
Proof. 
Let λ < Λ. From the first cumulant property, it follows that, for all h > 0,
κ_n(X_{λ+h}) = κ_n(X_λ + Z_{λ,λ+h}) = κ_n(X_λ) + κ_n(Z_{λ,λ+h}).
Thus,
lim_{h→0} [κ_n(X_{λ+h}) − κ_n(X_λ)]/h = lim_{h→0} κ_n(Z_{λ,λ+h})/h,
which ends the proof by the definition of the derivative.  □
More surprisingly, the next result shows that the derivative of the n-th cumulant of X_λ is linked with the limit of the n-th moment of Z_{λ,λ+h}.
Proposition 4. 
If (X_λ) is an increasing family for the convolution order, then for all λ < Λ and all n ∈ ℕ*, we have the following:
lim_{h→0} E(Z_{λ,λ+h}^n) = 0.
Moreover,
lim_{h→0} E(Z_{λ,λ+h}^n)/h = ∂κ_n(X_λ)/∂λ.
Proof. 
Let λ < Λ. Using Proposition 3, it remains to show the following:
lim_{h→0} κ_n(Z_{λ,λ+h})/h = lim_{h→0} E(Z_{λ,λ+h}^n)/h.
We proceed by induction. By the definition of (X_λ), for all h > 0, we have the following:
E(X_{λ+h}) − E(X_λ) = E(Z_{λ,λ+h}),
which proves both results for n = 1. Let N ∈ ℕ* with N > 1. We assume that the result is true for all n < N.
Using point 2 of Proposition 2, we have the following:
κ_N(Z_{λ,λ+h}) = m_N(Z_{λ,λ+h}) − Σ_{i=1}^{N−1} (N−1 choose i−1) κ_i(Z_{λ,λ+h}) m_{N−i}(Z_{λ,λ+h}). (16)
From the induction hypothesis, it follows that
lim_{h→0} κ_N(Z_{λ,λ+h}) = lim_{h→0} m_N(Z_{λ,λ+h}).
However,
lim_{h→0} κ_N(Z_{λ,λ+h}) = lim_{h→0} [κ_N(X_{λ+h}) − κ_N(X_λ)] = 0.
This demonstrates that lim_{h→0} m_N(Z_{λ,λ+h}) = 0. Furthermore, dividing Equation (16) by h gives, for all h > 0, the following:
κ_N(Z_{λ,λ+h})/h = m_N(Z_{λ,λ+h})/h − Σ_{i=1}^{N−1} (N−1 choose i−1) [κ_i(Z_{λ,λ+h})/h] m_{N−i}(Z_{λ,λ+h}).
Proposition 3 proves that, for all i < N, κ_i(Z_{λ,λ+h})/h → ∂κ_i(X_λ)/∂λ ∈ ℝ as h → 0.
Moreover, by the induction hypothesis, m_{N−i}(Z_{λ,λ+h}) → 0 as h → 0 for all i < N.
Finally,
lim_{h→0} κ_N(Z_{λ,λ+h})/h = lim_{h→0} E(Z_{λ,λ+h}^N)/h.  □
Propositions 3 and 4 collectively demonstrate that variations in X_λ are effectively represented by the variable Z_{λ,λ+h}. Consequently, examining Z_{λ,λ+h} offers a means of studying X_λ. These propositions find clear illustration in the case of exponential distributions.
Example 1. 
Let (X_λ)_{λ>0} be the family of exponentially distributed random variables with parameter λ, and let n ∈ ℕ*. For all λ ∈ [0, Λ[ and h ∈ ]0, Λ − λ], the random variable Z_{λ,λ+h} is given by the following:
Z_{λ,λ+h} = 0 with probability 1 − h/(λ + h), and Z_{λ,λ+h} = K with probability h/(λ + h), (21)
where K is a random variable exponentially distributed with parameter λ. We can notice that the probability of Z_{λ,λ+h} being positive approaches 0 as h decreases.
From Equation (21), it follows that
E(Z_{λ,λ+h}^n) = [h/(λ + h)] E(K^n).
Thus, lim_{h→0} E(Z_{λ,λ+h}^n) = 0. Moreover, since E(K^n) = n!/λ^n, we finally have the following:
∂κ_n(X_λ)/∂λ = −n!/λ^{n+1},
since (X_λ) is decreasing for the convolution order (see Proposition A1 in Appendix A).
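The two limits of Example 1 can be verified directly (a small numerical sketch of ours): E(Z_{λ,λ+h}^n)/h tends to n!/λ^{n+1}, which matches the magnitude of ∂κ_n(X_λ)/∂λ obtained by finite differences from κ_n(Exp(λ)) = (n−1)!/λ^n:

```python
import math

lam, n = 1.5, 3

def ratio(h):
    # E(Z^n) = (h/(lam+h)) * E(K^n) with K ~ Exp(lam) and E(K^n) = n!/lam^n
    return (h / (lam + h)) * math.factorial(n) / lam**n / h

limit = math.factorial(n) / lam ** (n + 1)        # claimed limit n!/lam^{n+1}
kappa = lambda l: math.factorial(n - 1) / l**n    # kappa_n of Exp(l)
fd = (kappa(lam + 1e-6) - kappa(lam)) / 1e-6      # finite-difference d kappa_n / d lam
```

The finite difference `fd` is negative, consistent with the family being decreasing for the convolution order, while `ratio(h)` converges to its absolute value.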
We conclude this section by presenting several corollaries derived from Proposition 4. The first one highlights that when a family of random variables (X_λ) increases in accordance with the convolution order, the level of “randomness” can only escalate as the parameter λ increases.
Corollary 1. 
Let (X_λ)_λ be a family of random variables increasing for the convolution order. Then, we have the following:
1. 
∂κ_n(X_λ)/∂λ ≥ 0 for all n ∈ ℕ*.
2. 
If, for all λ ∈ [0, Λ], X_λ takes its values in ℕ, then for all n ∈ ℕ*, ∂κ_n(X_λ)/∂λ ≤ ∂κ_{n+1}(X_λ)/∂λ.
3. 
If ∂κ₁(X_λ)/∂λ < ∂κ₂(X_λ)/∂λ, then for all n ∈ ℕ*, ∂κ_n(X_λ)/∂λ < ∂κ_{n+1}(X_λ)/∂λ. Moreover, lim_{n→∞} ∂κ_n(X_λ)/∂λ = ∞.
Proof. 
Point 1 is a direct consequence of Proposition 4 since Z_{λ,λ+h} is non-negative by definition.
Regarding point 2, if, for all λ ∈ [0, Λ], X_λ takes its values in ℕ, then necessarily, for all λ ∈ [0, Λ[ and h ∈ ]0, Λ − λ], Z_{λ,λ+h} takes its values in ℕ. Thus, for all n ∈ ℕ*, λ ∈ [0, Λ[, and h ∈ ]0, Λ − λ], we have the following:
E(Z_{λ,λ+h}^n) = Σ_{k=1}^∞ k^n P(Z_{λ,λ+h} = k) ≤ Σ_{k=1}^∞ k^{n+1} P(Z_{λ,λ+h} = k) = E(Z_{λ,λ+h}^{n+1}).
Proposition 4 finishes the demonstration.
As for point 3, the condition ∂κ₁(X_λ)/∂λ < ∂κ₂(X_λ)/∂λ together with Proposition 4 implies that there exists ε > 0 such that, for all h ∈ ]0, ε], E(Z_{λ,λ+h}) < E(Z_{λ,λ+h}²).
Newton’s binomial formula shows that since, for all n ∈ ℕ* and λ ∈ [0, Λ], E(X_λ^n) < ∞, then for all h ∈ [0, Λ − λ], E(Z_{λ,λ+h}^n) < ∞.
Moreover, E(ln(Z_{λ,λ+h})² Z_{λ,λ+h}^n) = ∫_0^1 ln(x)² x^n dP_{Z_{λ,λ+h}}(x) + ∫_1^∞ ln(x)² x^n dP_{Z_{λ,λ+h}}(x), with
lim_{x→0} ln(x)² x^n = 0.
Since, for all x > 1, ln(x)² < x², we have the following:
∫_1^∞ ln(x)² x^n dP_{Z_{λ,λ+h}}(x) < ∫_1^∞ x^{n+2} dP_{Z_{λ,λ+h}}(x) ≤ E(Z_{λ,λ+h}^{n+2}) < ∞.
Finally, both terms converge, and thus, E(ln(Z_{λ,λ+h})² Z_{λ,λ+h}^n) < ∞.
This shows that the function
u : ℝ₊ → ℝ, α ↦ E(Z_{λ,λ+h}^α)
can be differentiated twice, and its second derivative u″(α) = E(ln(Z_{λ,λ+h})² Z_{λ,λ+h}^α) is non-negative since Z_{λ,λ+h} is non-negative. Thus, the function u is convex. Because
E(Z_{λ,λ+h}) < E(Z_{λ,λ+h}²),
Rolle’s theorem shows that there exists c ∈ [1, 2] such that, for all α > c, u′(α) > 0. This and Proposition 4 show that, for all n ∈ ℕ*, ∂κ_n(X_λ)/∂λ < ∂κ_{n+1}(X_λ)/∂λ, and since u is convex with u′(α) > 0 for α > c, lim_{n→∞} ∂κ_n(X_λ)/∂λ = ∞.  □

3.2. Deriving the Expected Utility

In this section, we delve into the analysis of a rational agent characterized by a Von Neumann and Morgenstern utility function, denoted as u. The agent controls an investment parameter λ, which generates a corresponding profit X_λ. It is important to note that the family of random variables (X_λ)_{λ≥0} is assumed to be increasing with respect to the convolution order. Additionally, the agent takes into account a deterministic function C(λ), encompassing various deterministic parameters such as investment costs or initial wealth considerations. The primary objective of the agent is to address the following optimization problem:
max_{λ≥0} E[u(X_λ − C(λ))].
Initially, we consider the scenario in which the utility function u is an entire function. This assumption implies that u(x) coincides with its Taylor series for all x ∈ ℝ. Noteworthy cases, such as exponential and polynomial utility functions, fall under this category. Additionally, we assume the differentiability of the function C, signifying that a derivative of C exists at every point. The ensuing theorem furnishes an explicit expression for the derivative of E[u(X_λ − C(λ))].
Proposition 5. 
If u is entire, C is differentiable, and (X_λ)_{λ≥0} is a family of random variables increasing for the convolution order, then, for all λ ∈ [0, Λ], ∂E(u(X_λ − C(λ)))/∂λ exists and
∂E(u(X_λ − C(λ)))/∂λ = −C′(λ) E(u′(X_λ − C(λ))) + Σ_{n=1}^∞ [∂κ_n(X_λ)/∂λ] E(u^{(n)}(X_λ − C(λ)))/n!.
Proof. 
Let λ ∈ [0, Λ]. Since (X_λ) is increasing for the convolution order, for all h > 0 we have
X_{λ+h} − C(λ+h) =_d X_λ + Z_{λ,λ+h} − C(λ) − (C(λ+h) − C(λ)). (27)
Composing Equation (27) with u and using its Taylor expansion gives the following:
u(X_{λ+h} − C(λ+h)) =_d u(X_λ − C(λ)) + Σ_{n=1}^∞ [u^{(n)}(X_λ − C(λ))/n!] (Z_{λ,λ+h} + C(λ) − C(λ+h))^n. (28)
Taking the expectation in (28) and dividing by h gives the following:
[E(u(X_{λ+h} − C(λ+h))) − E(u(X_λ − C(λ)))]/h = E( Σ_{n=1}^∞ [u^{(n)}(X_λ − C(λ))/(n! h)] (Z_{λ,λ+h} + C(λ) − C(λ+h))^n ). (29)
According to Bourbaki (2007), Corollary 2, p. 144, a sufficient condition for swapping the expectation and the sum in Equation (29) is to show that
Σ_{n=1}^∞ [u^{(n)}(X_λ − C(λ))/(n! h)] (Z_{λ,λ+h} + C(λ) − C(λ+h))^n
is convergent, and that there exists an integrable function g such that, for all N ∈ ℕ, we have the following:
| Σ_{n=1}^N [u^{(n)}(X_λ − C(λ))/(n! h)] (Z_{λ,λ+h} + C(λ) − C(λ+h))^n | ≤ g.
The first condition comes directly from the fact that u is an entire function. Regarding the second point, we know the following:
| Σ_{n=1}^∞ [u^{(n)}(X_λ − C(λ))/n!] (Z_{λ,λ+h} + C(λ) − C(λ+h))^n | = |u(X_{λ+h} − C(λ+h)) − u(X_λ − C(λ))|.
Let us choose ε > 0. Since Σ_{n=1}^N [u^{(n)}(X_λ − C(λ))/n!] (Z_{λ,λ+h} + C(λ) − C(λ+h))^n converges as N → ∞, there exists Ñ ∈ ℕ such that, for all N > Ñ,
| Σ_{n=1}^N [u^{(n)}(X_λ − C(λ))/n!] (Z_{λ,λ+h} + C(λ) − C(λ+h))^n | ≤ |u(X_{λ+h} − C(λ+h)) − u(X_λ − C(λ))| + ε.
Let us define the following random variable:
g = max(g₁, g₂),
where
g₁ = |u(X_{λ+h} − C(λ+h)) − u(X_λ − C(λ))| + ε,
g₂ = max_{N ∈ {1,…,Ñ}} | Σ_{n=1}^N [u^{(n)}(X_λ − C(λ))/n!] (Z_{λ,λ+h} + C(λ) − C(λ+h))^n |.
By the definition of g, for all N ∈ ℕ, | Σ_{n=1}^N [u^{(n)}(X_λ − C(λ))/n!] (Z_{λ,λ+h} + C(λ) − C(λ+h))^n | ≤ g. Furthermore, since g is the maximum of a finite number of integrable random variables, it follows that g itself is integrable. This property enables us to interchange the summation and the expectation in Equation (29).
Using Newton’s binomial theorem and taking the limit in (29) leads to the following:
∂E(u(X_λ − C(λ)))/∂λ = lim_{h→0} Σ_{n=1}^∞ [E(u^{(n)}(X_λ − C(λ)))/n!] A_{h,n}(λ), (32)
where
A_{h,n}(λ) = Σ_{k=0}^n (n choose k) [E(Z_{λ,λ+h}^k)/h] (C(λ) − C(λ+h))^{n−k}.
We know from Proposition 4 that, for all k > 0, lim_{h→0} E(Z_{λ,λ+h}^k)/h = ∂κ_k(X_λ)/∂λ.
Thus, we have the following:
lim_{h→0} Σ_{k=1}^n (n choose k) [E(Z_{λ,λ+h}^k)/h] (C(λ) − C(λ+h))^{n−k} = lim_{h→0} E(Z_{λ,λ+h}^n)/h. (34)
Moreover, when k = 0, using L’Hôpital’s rule, we have the following:
lim_{h→0} (C(λ) − C(λ+h))^n / h = lim_{h→0} n (C(λ) − C(λ+h))^{n−1} (−C′(λ+h)), (35)
which is equal to −C′(λ) if n = 1, and 0 otherwise. Combining Equations (34) and (35) leads to the following:
lim_{h→0} A_{h,n}(λ) = ∂κ_n(X_λ)/∂λ − C′(λ) 1_{n=1}. (36)
Incorporating Equation (36) into Equation (32) gives the following:
∂E[u(X_λ − C(λ))]/∂λ = −C′(λ) E[u′(X_λ − C(λ))] + Σ_{n=1}^∞ [∂κ_n(X_λ)/∂λ] E[u^{(n)}(X_λ − C(λ))]/n!.  □
This result demonstrates that the first derivative of the expectation depends on all the cumulants of X_λ.
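Proposition 5 can be checked numerically in a case where everything is explicit (a sketch under hypothetical assumptions of ours): for the Poisson family, ∂κ_n(X_λ)/∂λ = 1 for every n, and with the entire utility u(x) = −e^{−x} and cost C(λ) = cλ² both sides of the formula are computable in closed form:

```python
import math

# Hypothetical setup: X_lam ~ Poisson(lam), u(x) = -exp(-x), C(lam) = c*lam^2.
c = 0.05
C = lambda l: c * l**2
Cp = lambda l: 2 * c * l
# E[u(X_lam - C(lam))] = -A(lam) with A below (Poisson Laplace transform):
A = lambda l: math.exp(C(l) + l * (math.exp(-1.0) - 1.0))

def series_derivative(l, terms=30):
    # Proposition 5: -C'(lam) E(u'(X-C)) + sum_n (d kappa_n/d lam) E(u^{(n)}(X-C))/n!
    # Here u^{(n)}(x) = (-1)**(n+1) * exp(-x), so E(u^{(n)}(X-C)) = (-1)**(n+1) * A(l).
    s = -Cp(l) * A(l)
    s += sum((-1) ** (n + 1) * A(l) / math.factorial(n) for n in range(1, terms))
    return s

l0 = 1.2
fd = (-A(l0 + 1e-6) + A(l0 - 1e-6)) / 2e-6   # central difference of E[u] = -A
```

The truncated cumulant series agrees with the finite-difference derivative of the exact expected utility.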
While the prior results encompass several typical utility functions, they do not cover functions such as CRRA (Constant Relative Risk Aversion). However, the proposition presented next demonstrates that deriving a formula remains feasible for power utility functions, expanding the applicability of our findings.
Proposition 6. 
If C is differentiable, (X_λ)_{λ≥0} is a family of random variables increasing for the convolution order, and the following infinite sum exists:
Σ_{n=k}^∞ (−1)^{n−k} [∂κ_n(X_λ)/∂λ] E((X_λ − C(λ))^{α−n}) Γ(n−α) Γ(α+1) / (Γ(δ) Γ(1−δ) n!), (38)
then, for all λ ∈ [0, Λ] and all α = k − δ, with k ∈ ℕ* and δ ∈ ]0, 1],
∂E((X_λ − C(λ))^α)/∂λ = −C′(λ) α E((X_λ − C(λ))^{α−1}) + Σ_{n=1}^k [∂κ_n(X_λ)/∂λ] (α choose n) E((X_λ − C(λ))^{α−n}) + Σ_{n=k+1}^∞ (−1)^{n−k} [∂κ_n(X_λ)/∂λ] E((X_λ − C(λ))^{α−n}) Γ(n−α) Γ(α+1) / (Γ(δ) Γ(1−δ) n!).
Proof. 
The expansion of (a + b)^α in a power series is only valid when |a| < |b| or |b| < |a|. As we lack additional assumptions ensuring the existence of h > 0 such that Z_{λ,λ+h} < X_λ (refer to Example 1 for a counterexample), the approach used in Proposition 5 is not applicable in this case.
For λ ∈ [0, Λ], the quantity m_α(X_λ) = E((X_λ − C(λ))^α) is the moment of order α of the random variable X_λ − C(λ). These non-integer moments have been studied by Cressie and Borkent (1986), and the proof of Proposition 6 heavily relies on their approach.
This approach relies on the moment-generating function of X_λ, denoted as M_{X_λ}(t) = E(e^{tX_λ}). It is well known that, for all n ∈ ℕ, the n-th derivative of M_{X_λ} at t = 0 is given by the following:
d^n M_{X_λ}/dt^n (0) = m_n(X_λ).
Cressie and Borkent (1986) introduced fractional calculus to extend this result to real α > 0. To achieve this, the authors employed a derivative operator D such that, for all c ∈ ℝ₊*, D^α e^{ct}/Dt^α = c^α e^{ct}. To achieve this property, the authors worked within the framework of Weyl fractional calculus, also known as Liouville–Weyl fractional calculus.
Let f be a C^∞(ℝ) function such that, for all p ∈ [1, ∞] and all t < ∞, ∫_{−∞}^t |f(x)|^p dx < ∞. Following Kilbas et al. (1993), we can then define the Weyl integral of order μ, μ > 0, as follows:
D^{−μ} f(t)/Dt^{−μ} ≡ (1/Γ(μ)) ∫_{−∞}^t (t − z)^{μ−1} f(z) dz.
Here, Γ denotes the Gamma function, i.e.,
Γ(x) = ∫_0^∞ t^{x−1} e^{−t} dt.
We also define the Weyl derivative of order α = k − δ, with k ∈ ℕ* and δ ∈ ]0, 1], as follows:
D^α f(t)/Dt^α ≡ (1/Γ(δ)) ∫_{−∞}^t (t − z)^{δ−1} (d^k f(z)/dz^k) dz.
When α is an integer (δ = 1), we retrieve the usual derivative. In the following, we are interested in computing the following derivative:
∂E((X_λ − C(λ))^α)/∂λ,
for a real α > 0 .
The proof of Proposition 6 involves the computation of D^α(t^n e^{ct})/Dt^α (0) using Weyl fractional derivatives. This outcome is a specific instance of a result presented in Raina (1986). However, both the original proof and formula provided by Raina (1986) are considerably intricate. The subsequent lemma furnishes a more straightforward proof tailored to our specific context and need.  □
Lemma 2. 
Let n, k ∈ ℕ*, δ ∈ ]0, 1], α = k − δ, and c ∈ ℝ₊. Then, if k < n,
D^α(t^n e^{ct})/Dt^α (0) = (−1)^{n−k} c^{α−n} Γ(n−α) Γ(α+1) / (Γ(δ) Γ(1−δ)) if 0 < δ < 1, and 0 if δ = 1. (43)
If k ≥ n,
D^α(t^n e^{ct})/Dt^α (0) = c^{α−n} Γ(α+1) / Γ(α−n+1). (44)
Proof. 
By definition,
D^α(t^n e^{ct})/Dt^α (0) = (1/Γ(δ)) ∫_{−∞}^0 (−z)^{δ−1} [D^k(z^n e^{cz})/Dz^k] dz.
Using the Leibniz formula, we have the following:
D^α(t^n e^{ct})/Dt^α (0) = (1/Γ(δ)) Σ_{j=0}^{min(n,k)} (k choose j) [n!/(n−j)!] c^{k−j} ∫_{−∞}^0 (−z)^{δ−1} z^{n−j} e^{cz} dz.
Using the change of variable u = −cz in the integral leads to the following:
D^α(t^n e^{ct})/Dt^α (0) = (1/Γ(δ)) Σ_{j=0}^{min(n,k)} (k choose j) c^{k−j} [n!/(n−j)!] (−1)^{n−j} ∫_0^∞ (u/c)^{n−j+δ−1} e^{−u} du/c.
The definition of the Γ function gives the following:
D^α(t^n e^{ct})/Dt^α (0) = c^{k−n−δ} (1/Γ(δ)) Σ_{j=0}^{min(n,k)} (k choose j) [n!/(n−j)!] (−1)^{n−j} Γ(n−j+δ). (48)
Let us suppose now that n ≤ k. Taking into account the following relation:
Γ(n−j+δ) / (Γ(δ) (n−j)!) = (n−j+δ−1 choose n−j) = (−1)^{n−j} (−δ choose n−j),
we find the following:
D^α(t^n e^{ct})/Dt^α (0) = c^{k−n−δ} n! Σ_{j=0}^n (k choose j) (−δ choose n−j).
Applying the Chu–Vandermonde identity shows the following:
D^α(t^n e^{ct})/Dt^α (0) = c^{k−n−δ} n! (k−δ choose n),
which finally leads to the following:
D^α(t^n e^{ct})/Dt^α (0) = c^{k−n−δ} Γ(α+1) / Γ(α−n+1),
which proves relation (44) of Lemma 2.
Similarly, if k ≤ n, we can rewrite Equation (48) as follows:
D^α(t^n e^{ct})/Dt^α (0) = (−1)^{n−k} c^{k−n−δ} [Γ(n−k+δ) Γ(k+1) / Γ(δ)] Σ_{j=0}^k (n choose j) (k−n−δ choose k−j).
Using the Chu–Vandermonde identity gives the following:
D^α(t^n e^{ct})/Dt^α (0) = (−1)^{n−k} c^{k−n−δ} [Γ(n−k+δ) Γ(k+1) / Γ(δ)] (k−δ choose k).
From this last expression, we can see that, for δ = 1, the derivative is zero since
(k−1 choose k) = 0,
by the definition of the binomial coefficient. In the case 0 < δ < 1, we find the following:
D^α(t^n e^{ct})/Dt^α (0) = (−1)^{n−k} c^{k−n−δ} Γ(n−α) Γ(α+1) / (Γ(δ) Γ(1−δ)).  □
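The closed form of Lemma 2 can be sanity-checked by direct quadrature of the defining Liouville–Weyl integral (our own sketch, for the k ≥ n case with n = 1, k = 2, δ = 1/2, so α = 3/2 and f″(z) = (c²z + 2c)e^{cz}):

```python
import math

c, delta, alpha = 0.7, 0.5, 1.5   # alpha = k - delta with k = 2, n = 1

def integrand(u):
    # After substituting z = -u (u >= 0) in (1/Gamma(delta)) * int_{-inf}^0 (-z)^{delta-1} f''(z) dz,
    # with f(z) = z*exp(c z) and f''(z) = (c^2 z + 2c) exp(c z):
    return u ** (delta - 1) * (-c * c * u + 2 * c) * math.exp(-c * u)

def smooth(v):
    # Tame the u**(delta-1) endpoint singularity with u = v**2 (du = 2 v dv).
    return 2 * v * integrand(v * v) if v > 0 else 0.0

N, V = 200000, 12.0               # midpoint rule on [0, V]; tail beyond V is negligible
h = V / N
numeric = sum(smooth((i + 0.5) * h) for i in range(N)) * h / math.gamma(delta)

# Lemma 2, k >= n: D^alpha (t^n e^{ct})(0) = c^{alpha-n} Gamma(alpha+1)/Gamma(alpha-n+1)
closed_form = c ** (alpha - 1) * math.gamma(alpha + 1) / math.gamma(alpha)
```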
With Lemma 2, we can now prove Proposition 6. We have the following:
D E((X_λ − C(λ))^α)/Dλ = D/Dλ [D^α M_{X_λ−C(λ)}/Dt^α] (λ, 0).
It is possible to swap the derivative operators because of the following:
  • There exists l > 0 such that, for all k ∈ ℕ*, 0 < t < l, and 0 ≤ λ ≤ Λ, the integral
    ∫_{−∞}^t (t − z)^{δ−1} E((X_λ − C(λ))^k e^{z(X_λ − C(λ))}) dz
    is convergent. This is a natural consequence of the convergence of a Weyl derivative, which is proven by Proposition 5 of Cressie and Borkent (1986) when Assumption 1 holds.
  • For all k ∈ ℕ* and t > 0, E((X_λ − C(λ))^k e^{t(X_λ − C(λ))}) can be differentiated with respect to λ since x ↦ x^k e^{tx} is an entire function (Proposition 5).
  • There exist l > 0 and a function g > 0 such that, for all k ∈ ℕ*, z > 0, t ∈ ]0, l[, and λ ∈ ]0, Λ[,
    |∂E((X_λ − C(λ))^k e^{z(X_λ − C(λ))})/∂λ| (t − z)^{δ−1} < g(z).
    Indeed, since [0, Λ] is compact and |∂E((X_λ − C(λ))^k e^{z(X_λ − C(λ))})/∂λ| (t − z)^{δ−1} is finite for all λ ∈ [0, Λ], it suffices to choose the following:
    g(z) = max_{λ ∈ [0, Λ]} |∂E((X_λ − C(λ))^k e^{z(X_λ − C(λ))})/∂λ| (t − z)^{δ−1}.
Thus, we have the following:
D E((X_λ − C(λ))^α)/Dλ = D^α/Dt^α [D E(e^{t(X_λ − C(λ))})/Dλ] (λ, 0).
Since the function x ↦ e^{tx} is an entire function, and the family (X_λ) is ordered in the sense of the convolution order, it is possible to apply Proposition 5 to compute D E(e^{t(X_λ − C(λ))})/Dλ. We have the following:
D E((X_λ − C(λ))^α)/Dλ = D^α/Dt^α [ −C′(λ) E(t e^{t(X_λ − C(λ))}) + Σ_{n=1}^∞ [∂κ_n(X_λ)/∂λ] E(t^n e^{t(X_λ − C(λ))})/n! ] (λ, 0).
Fubini’s theorem ensures that the expectation and the Weyl derivative operator can commute. Moreover, assuming that the sum (38) exists allows for distributing the Weyl derivative across the series, thanks to Fubini’s theorem and Lemma 2. Then, we have the following:
D E((X_λ − C(λ))^α)/Dλ = [ −C′(λ) E(D^α/Dt^α (t e^{t(X_λ − C(λ))})) + Σ_{n=1}^∞ [∂κ_n(X_λ)/∂λ] E(D^α/Dt^α (t^n e^{t(X_λ − C(λ))}))/n! ] (λ, 0). (59)
Since k ≥ 1, using Lemma 2 in Equation (59) provides the following:
D E((X_λ − C(λ))^α)/Dλ = −C′(λ) α E((X_λ − C(λ))^{α−1}) + Σ_{n=1}^k [∂κ_n(X_λ)/∂λ] (α choose n) E((X_λ − C(λ))^{α−n}) + Σ_{n=k+1}^∞ (−1)^{n−k} [∂κ_n(X_λ)/∂λ] E((X_λ − C(λ))^{α−n}) Γ(n−α) Γ(α+1) / (Γ(δ) Γ(1−δ) n!).  □
Remark 1. 
An example of a sufficient condition for
$$\sum_{n=k}^{\infty} (-1)^{n-k}\, \frac{\partial \kappa_n(X_\lambda)}{\partial \lambda}\, E\big((X_\lambda - C(\lambda))^{\alpha - n}\big)\, \frac{\Gamma(n - \alpha)\, \Gamma(\alpha + 1)}{\Gamma(\delta)\, \Gamma(1 - \delta)\, n!}$$
to exist is when the sequence $\left( \frac{\partial \kappa_n(X_\lambda)}{\partial \lambda} \right)_n$ is non-increasing and $C(\lambda) < -1$.
An interesting situation emerges when considering the family $(X_\lambda)$ of Poisson distributions parameterized by λ. This family is well documented to be increasing for the convolution order. Specifically, for an increment h, $Z_{\lambda, \lambda+h}$ follows a Poisson distribution with parameter h. Furthermore, for all $n \in \mathbb{N}^*$, the cumulant $\kappa_n(X_\lambda)$ equals λ, a relationship established in Patil (1963).
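This closure property is easy to check numerically. The following minimal Python sketch (our illustration, not part of the original derivation; the parameter values are arbitrary) verifies that the convolution of a Poisson($\lambda$) variable with an independent Poisson($h$) variable matches Poisson($\lambda + h$) pointwise, which is exactly the statement that $Z_{\lambda, \lambda+h}$ follows a Poisson distribution with parameter $h$:

```python
import math

def poisson_pmf(n, lam):
    """P(X = n) for X ~ Poisson(lam)."""
    return math.exp(-lam) * lam**n / math.factorial(n)

def convolved_pmf(n, lam, h):
    """P(X + Z = n) for independent X ~ Poisson(lam), Z ~ Poisson(h)."""
    return sum(poisson_pmf(k, lam) * poisson_pmf(n - k, h) for k in range(n + 1))

lam, h = 2.0, 0.7  # arbitrary parameters chosen for the check
# The convolution matches Poisson(lam + h) at every point of the support tested,
# so the Poisson family is increasing for the convolution order.
for n in range(20):
    assert abs(convolved_pmf(n, lam, h) - poisson_pmf(n, lam + h)) < 1e-12
```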
Propositions 5 and 6 lead to the following corollary.
Corollary 2. 
If u is entire, C is differentiable, and $(X_\lambda)_{\lambda \geq 0}$ is the family of Poisson distributions with parameter λ, then, for all $\lambda < \Lambda$,
$$\frac{\partial E\big(u(X_\lambda - C(\lambda))\big)}{\partial \lambda} = -C'(\lambda)\, E\big(u'(X_\lambda - C(\lambda))\big) + E\big(u(X_\lambda - C(\lambda) + 1)\big) - E\big(u(X_\lambda - C(\lambda))\big).$$
Moreover, if $C(\lambda) \leq -1$, for all $\alpha = k - \delta$, $k \geq 1$, $\delta \in \,]0, 1[$,
$$\frac{\partial E\big((X_\lambda - C(\lambda))^{\alpha}\big)}{\partial \lambda} = -C'(\lambda)\, \alpha\, E\big((X_\lambda - C(\lambda))^{\alpha - 1}\big) + \sum_{n=1}^{k-1} \binom{\alpha}{n} E\big((X_\lambda - C(\lambda))^{\alpha - n}\big) + E\big((X_\lambda - C(\lambda) + 1)^{\alpha}\big).$$
Proof. 
Since $(X_\lambda)$ is the Poisson distribution family, for all $n > 0$, $\frac{\partial \kappa_n(X_\lambda)}{\partial \lambda} = 1$.
Thus, Equation (61) follows directly from Proposition 5 and the fact that u is an entire function, because $u(X_\lambda - C(\lambda) + 1) = u(X_\lambda - C(\lambda)) + \sum_{n=1}^{\infty} \frac{u^{(n)}(X_\lambda - C(\lambda))}{n!}$.
Moreover, for all $x \in \,]-1, 1]$,
$$(1+x)^{-\delta} = \sum_{n=0}^{+\infty} \frac{(-\delta)(-\delta - 1)\cdots(-\delta - n + 1)\, x^n}{n!} = \sum_{n=0}^{+\infty} (-1)^n\, \frac{\Gamma(n + \delta)}{\Gamma(\delta)}\, \frac{x^n}{n!}.$$
Integrating Equation (63) k times gives, for all $x \in \,]-1, 1]$, the following:
$$\frac{(1+x)^{\alpha}\, \Gamma(1 - \delta)}{\Gamma(\alpha + 1)} = \sum_{n=0}^{+\infty} (-1)^n\, \frac{\Gamma(n + \delta)}{\Gamma(\delta)}\, \frac{x^{n+k}}{(n+k)!}.$$
Proposition 6 leads to the following:
$$\frac{\partial E\big((X_\lambda - C(\lambda))^{\alpha}\big)}{\partial \lambda} = -C'(\lambda)\, \alpha\, E\big((X_\lambda - C(\lambda))^{\alpha - 1}\big) + \sum_{n=1}^{k-1} \binom{\alpha}{n} E\big((X_\lambda - C(\lambda))^{\alpha - n}\big) + \sum_{n=k}^{\infty} (-1)^{n-k}\, E\big((X_\lambda - C(\lambda))^{\alpha - n}\big)\, \frac{\Gamma(n - \alpha)\, \Gamma(\alpha + 1)}{\Gamma(\delta)\, \Gamma(1 - \delta)\, n!}.$$
Moreover, we have the following:
$$\sum_{n=k}^{\infty} (-1)^{n-k}\, E\big((X_\lambda - C(\lambda))^{\alpha - n}\big)\, \frac{\Gamma(n - \alpha)\, \Gamma(\alpha + 1)}{\Gamma(\delta)\, \Gamma(1 - \delta)\, n!} = \frac{\Gamma(\alpha + 1)}{\Gamma(1 - \delta)}\, E\!\left( (X_\lambda - C(\lambda))^{\alpha} \sum_{n=0}^{\infty} (-1)^n\, \frac{(X_\lambda - C(\lambda))^{-n-k}\, \Gamma(n + \delta)}{\Gamma(\delta)\, (n+k)!} \right).$$
The assumption $C(\lambda) \leq -1$ ensures that $\frac{1}{X_\lambda - C(\lambda)} \leq 1$; thus, using Equation (64) gives the following:
$$\sum_{n=k}^{\infty} (-1)^{n-k}\, E\big((X_\lambda - C(\lambda))^{\alpha - n}\big)\, \frac{\Gamma(n - \alpha)\, \Gamma(\alpha + 1)}{\Gamma(\delta)\, \Gamma(1 - \delta)\, n!} = E\big((X_\lambda - C(\lambda) + 1)^{\alpha}\big),$$
which completes the proof.  □
It is worth noting that the condition $C(\lambda) \leq -1$ has significant economic implications, particularly when considering an agent with initial wealth. To elaborate, if an economic agent starts with an initial wealth or income of W, the function $C(\lambda)$ can be reformulated as $C(\lambda) = \tilde{C}(\lambda) - W$, where $\tilde{C}(\lambda)$ represents a monotonically increasing positive function. Consequently, the condition $C(\lambda) \leq -1$ is tantamount to asserting $\tilde{C}(\lambda) \leq W - 1$. Thus, this assumption effectively signifies that the agent cannot borrow money to finance the initial investment.
If the agent makes his choice according to a power utility function and is risk averse (i.e., $\alpha < 1$, so that $k = 1$), Equation (62) reduces to the following:
$$\frac{\partial E\big((X_\lambda - C(\lambda))^{\alpha}\big)}{\partial \lambda} = -C'(\lambda)\, \alpha\, E\big((X_\lambda - C(\lambda))^{\alpha - 1}\big) + E\big((X_\lambda - C(\lambda) + 1)^{\alpha}\big).$$
This formula can be useful, especially since, to the best of the authors’ knowledge, there exists no closed-form expression for $E\big((X_\lambda - C(\lambda))^{\alpha}\big)$ in the case where $X_\lambda$ follows a Poisson distribution.
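Equation (61), the entire-utility case of Corollary 2, can also be validated numerically. In the sketch below (our illustration; the exponential utility $u(x) = -e^{-x}$, the linear cost $C(\lambda) = c\lambda$, and all parameter values are assumptions chosen for the test), a finite-difference derivative of $E(u(X_\lambda - C(\lambda)))$ is compared with the closed form $-C'(\lambda)E(u'(\cdot)) + E(u(\cdot + 1)) - E(u(\cdot))$:

```python
import math

def e_u(lam, c, shift=0.0, deriv=False, N=120):
    """E(u(X_lam - c*lam + shift)) for u(x) = -exp(-x) and X_lam ~ Poisson(lam);
    deriv=True instead returns E(u'(X_lam - c*lam + shift)), u'(x) = exp(-x)."""
    pmf, total = math.exp(-lam), 0.0  # P(X = 0) and running sum
    for n in range(N):
        x = n - c * lam + shift
        total += pmf * (math.exp(-x) if deriv else -math.exp(-x))
        pmf *= lam / (n + 1)  # Poisson recursion: P(n+1) = P(n) * lam / (n + 1)
    return total

lam, c, eps = 3.0, 0.4, 1e-5  # assumed parameter values
fd = (e_u(lam + eps, c) - e_u(lam - eps, c)) / (2 * eps)  # numerical d/d(lam)
closed = -c * e_u(lam, c, deriv=True) + e_u(lam, c, shift=1.0) - e_u(lam, c)
assert abs(fd - closed) < 1e-6
```

The truncation level N = 120 is far beyond the effective support of a Poisson(3) variable, so the truncated sums are accurate to machine precision.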

4. Economic Example: Competitive Firm under Uncertainty

In this section, we explore the practical application of the derived formulas to a classical economic problem: the optimization of production quantity for a company seeking to determine the optimal output of goods. Our analysis extends a finding from Sandmo (1971), which asserts that, under uncertainty, companies tend to produce fewer goods than under certainty.
In this illustrative example, we consider a firm aiming to maximize its anticipated profit, as detailed in Sandmo (1971). The firm’s objective is to produce a quantity λ of a good, each unit of which has a selling price of p. The company faces two types of costs: fixed costs, denoted as B (covering expenditures like wages and rent), and variable operational costs, represented by the function $C(\lambda)$. We assume that C is twice differentiable and increasing. The firm’s preferences are represented by a Von Neumann–Morgenstern utility function u.
In the traditional deterministic model, the firm seeks to maximize the expression $u(p\lambda - C(\lambda) - B)$. Given the following:
$$\frac{\partial u(p\lambda - C(\lambda) - B)}{\partial \lambda} = \big(p - C'(\lambda)\big)\, u'(p\lambda - C(\lambda) - B),$$
the solution entails finding λ such that $C'(\lambda) = p$, subject to the fulfillment of the second-order condition.
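For concreteness, a hypothetical quadratic cost makes this first-order condition easy to check numerically; since u is increasing, maximizing $u(p\lambda - C(\lambda) - B)$ amounts to maximizing the profit itself. The cost function and parameter values below are assumptions chosen only for illustration:

```python
p, B = 4.0, 1.0                # assumed price and fixed cost
cost = lambda lam: lam**2 / 2  # hypothetical quadratic cost, so C'(lam) = lam
profit = lambda lam: p * lam - cost(lam) - B

# A grid search over candidate outputs recovers the first-order condition
# C'(lam) = p, i.e., lam* = p for this quadratic cost.
grid = [i / 1000 for i in range(10001)]  # lam in [0, 10]
lam_star = max(grid, key=profit)
assert abs(lam_star - p) < 1e-3
```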
The following two generalizations of this model are particularly notable:
  • The first approach, proposed by Sandmo (1971), introduces randomness to the price P. However, in Sandmo’s variation, the price P remains independent of λ . In our framework, an even broader generalization could encompass an imperfect market scenario, where the firm’s production influences the market price of the good: greater production could lead to lower prices.
  • The second approach to broadening the classical model entails treating the quantity of produced goods as uncertain. Numerous random factors, such as strikes, absenteeism, or mechanical failures, could impact production. Let the family of random variables $(X_\lambda)$ represent the total production, assumed to be increasing for the convolution order. Consequently, $(p X_\lambda)$ is also an increasing family of random variables for the convolution order. Notably, the deterministic family $X_\lambda = \lambda$ satisfies all our assumptions, so the classical model is a special instance of this generalized model.
In order to contribute a novel model to the existing literature rather than merely weakening the assumptions of an existing one, we opt to delve into the second approach. In this model, the firm seeks to maximize the following expression:
$$E\big(u(p X_\lambda - C(\lambda) - B)\big).$$
When u is an entire function, Proposition 5 provides the derivative of the expected utility:
$$\frac{\partial E\big(u(p X_\lambda - C(\lambda) - B)\big)}{\partial \lambda} = -C'(\lambda)\, E\big(u'(p X_\lambda - C(\lambda) - B)\big) + \sum_{n=1}^{\infty} p^n\, \frac{\partial \kappa_n(X_\lambda)}{\partial \lambda}\, \frac{E\big(u^{(n)}(p X_\lambda - C(\lambda) - B)\big)}{n!},$$
and thus,
$$\frac{\partial E\big(u(p X_\lambda - C(\lambda) - B)\big)}{\partial \lambda} = \left( p\, \frac{\partial E(X_\lambda)}{\partial \lambda} - C'(\lambda) \right) E\big(u'(p X_\lambda - C(\lambda) - B)\big) + \sum_{n=2}^{\infty} p^n\, \frac{\partial \kappa_n(X_\lambda)}{\partial \lambda}\, \frac{E\big(u^{(n)}(p X_\lambda - C(\lambda) - B)\big)}{n!}.$$
Remark 2. 
Equation (72) shows that if X λ can be expressed as X λ = X + f ( λ ) , where X is a random variable and f is a deterministic real function, then the model simplifies to the deterministic scenario.
In order to compare these results with the classical deterministic model, it is possible to impose $E(X_\lambda) = \lambda$. This assumption is akin to the one made by Sandmo when the author compared the case of a stochastic price with mean p with the classical model. It is satisfied, for example, when $X_\lambda = \lambda$; when $X_\lambda$ follows a Poisson distribution with parameter λ; when $X_\lambda$ follows a Gaussian distribution with mean λ; or when $X_\lambda$ follows a negative binomial distribution with parameters λ and $\frac{1}{2}$.
In this case, the derivative simplifies to the following:
$$\frac{\partial E\big(u(p X_\lambda - C(\lambda) - B)\big)}{\partial \lambda} = \big(p - C'(\lambda)\big)\, E\big(u'(p X_\lambda - C(\lambda) - B)\big) + \sum_{n=2}^{\infty} p^n\, \frac{\partial \kappa_n(X_\lambda)}{\partial \lambda}\, \frac{E\big(u^{(n)}(p X_\lambda - C(\lambda) - B)\big)}{n!}.$$
By comparing (69) and (73), we note that the general case (73) is similar to the classical case (69), but with the presence of the following additional term:
$$\sum_{n=2}^{\infty} p^n\, \frac{\partial \kappa_n(X_\lambda)}{\partial \lambda}\, \frac{E\big(u^{(n)}(p X_\lambda - C(\lambda) - B)\big)}{n!}.$$
This sum relies heavily on the cumulant derivatives $\frac{\partial \kappa_n(X_\lambda)}{\partial \lambda}$, which capture the randomness of the process $X_\lambda$.
Since u is entire, its second derivative is continuous; thus, using the integral form of the remainder in Taylor’s formula gives, for all $\lambda \in [0, \Lambda]$ and all $h \in [0, \Lambda - \lambda]$, the following:
$$\frac{1}{h} \sum_{n=2}^{\infty} \frac{(p Z_{\lambda, \lambda+h})^n\, E\big(u^{(n)}(p X_\lambda - C(\lambda) - B)\big)}{n!} = \frac{1}{h}\, E\!\left( \int_{p X_\lambda - C(\lambda) - B}^{p X_\lambda - C(\lambda) - B + p Z_{\lambda, \lambda+h}} u''(t)\, \big(p X_\lambda - C(\lambda) - B + p Z_{\lambda, \lambda+h} - t\big)\, dt \right).$$
If, as supposed by Sandmo, u is concave, then, since $u'' \leq 0$ and the weight $p X_\lambda - C(\lambda) - B + p Z_{\lambda, \lambda+h} - t$ is non-negative over the integration range, Equation (74) shows that, for all $\lambda, h > 0$:
$$\frac{1}{h} \sum_{n=2}^{\infty} \frac{(p Z_{\lambda, \lambda+h})^n\, E\big(u^{(n)}(p X_\lambda - C(\lambda) - B)\big)}{n!} \leq 0.$$
Taking the expectation and letting $h \to 0$ shows the following:
$$\sum_{n=2}^{\infty} p^n\, \frac{\partial \kappa_n(X_\lambda)}{\partial \lambda}\, \frac{E\big(u^{(n)}(p X_\lambda - C(\lambda) - B)\big)}{n!} \leq 0.$$
Consequently, if $\frac{\partial E(u(p X_\lambda - C(\lambda) - B))}{\partial \lambda}$ is decreasing (implying that $\frac{\partial^2 E(u(p X_\lambda - C(\lambda) - B))}{\partial \lambda^2} \leq 0$), then the solution of $\frac{\partial E(u(p X_\lambda - C(\lambda) - B))}{\partial \lambda} = 0$ is smaller than the solution of the following equation:
$$\big(p - C'(\lambda)\big)\, E\big(u'(p X_\lambda - C(\lambda) - B)\big) = 0.$$
Similar to Sandmo (1971), who stated that “under price uncertainty, the output is smaller than the certainty output”, it is thus possible to state the following proposition:
Proposition 7. 
If the random output is increasing for the convolution order, and if the utility function u is entire and concave, then, under production uncertainty, the output is smaller than the certainty output.
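Proposition 7 can be illustrated numerically. In the sketch below (our example; the CARA utility $u(x) = -e^{-ax}$, Poisson-distributed production, quadratic cost, and all parameter values are assumptions), the optimal output under production uncertainty is compared with the deterministic optimum solving $C'(\lambda) = p$:

```python
import math

p, B, a = 2.0, 1.0, 0.5        # assumed price, fixed cost, and risk aversion
cost = lambda lam: lam**2 / 2  # convex cost, C'(lam) = lam, so lam_certain = p

def expected_utility(lam, N=150):
    """E(-exp(-a*(p*X - C(lam) - B))) with X ~ Poisson(lam), truncated pmf sum."""
    pmf, total = math.exp(-lam), 0.0
    for n in range(N):
        total += pmf * -math.exp(-a * (p * n - cost(lam) - B))
        pmf *= lam / (n + 1)
    return total

grid = [i / 100 for i in range(1, 301)]  # candidate outputs in (0, 3]
lam_uncertain = max(grid, key=expected_utility)
lam_certain = p
# The risk-averse optimum stays strictly below the certainty output.
assert lam_uncertain < lam_certain
```

For this particular CARA/Poisson combination the optimum is also available analytically, $C'(\lambda) = (1 - e^{-ap})/a < p$, which the grid search reproduces; the proposition itself requires only concavity and the convolution order.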

5. Discussion

The situation where an agent must make a decision concerning a parameter λ with an uncertain return $X_\lambda$ is common. Given such a situation, expected utility theory suggests that the agent will try to maximize $E(u(X_\lambda - C(\lambda)))$, where the function C represents the cost associated with the choice of the parameter λ. However, since there were no mathematical tools to compute the derivative $\frac{\partial E(u(X_\lambda - C(\lambda)))}{\partial \lambda}$, economists studying this kind of problem have had to impose very strong constraints on $X_\lambda$ in order to obtain results (e.g., assuming that $X_\lambda$ follows a Bernoulli distribution). The goal of this paper is to provide mathematical formulas that allow for the computation of $\frac{\partial E(u(X_\lambda - C(\lambda)))}{\partial \lambda}$ in a much more general framework.
This has been achieved by supposing that the returns $X_\lambda$ are ordered with respect to the convolution order. Even though this condition implies that the returns are also ordered with respect to the first stochastic order, it remains a reasonable assumption for two reasons. First, it offers a natural economic interpretation: saying that $X_{\lambda_1} \leq_{\mathrm{conv}} X_{\lambda_2}$ means that an agent will be indifferent between $X_{\lambda_2}$ and $X_{\lambda_1}$ once a suitable non-negative quantity is added to $X_{\lambda_1}$ (if this quantity is stochastic, it must be independent of $X_{\lambda_1}$). Second, most common distributions, such as the normal, Gamma, Poisson, or exponential distributions, are ordered with respect to the convolution order.
Given this hypothesis, theoretical formulas have been provided to compute $\frac{\partial E(u(X_\lambda - C(\lambda)))}{\partial \lambda}$ in cases where the utility function u is either entire or a power function. As shown using Sandmo’s model, which addresses the production of goods by firms, these formulas can be used to question or extend some results from the literature. In particular, the formula simplifies dramatically for a Poisson distribution, which is of practical interest because the Poisson distribution is widely used, notably in insurance, and theoretically significant because there is no general formula to compute the moment of order $\alpha \in \mathbb{R}$ of a Poisson distribution and, by extension, the derivative of such a moment.
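Although no closed form is available, such fractional Poisson moments are straightforward to evaluate numerically, which is all the derivative formulas require in practice. A small sketch (ours, with an arbitrarily chosen parameter) illustrates this and sanity-checks the integer cases:

```python
import math

def poisson_frac_moment(lam, alpha, N=120):
    """E(X**alpha) for X ~ Poisson(lam), by a truncated series over the pmf."""
    pmf, total = math.exp(-lam), 0.0
    for n in range(N):
        total += pmf * n**alpha
        pmf *= lam / (n + 1)  # Poisson recursion: P(n+1) = P(n) * lam / (n + 1)
    return total

lam = 2.5  # arbitrary parameter
assert abs(poisson_frac_moment(lam, 1.0) - lam) < 1e-9             # E(X) = lam
assert abs(poisson_frac_moment(lam, 2.0) - (lam + lam**2)) < 1e-9  # E(X^2) = lam + lam^2
# Fractional case: Jensen's inequality gives 0 < E(sqrt(X)) < sqrt(E(X)).
assert 0 < poisson_frac_moment(lam, 0.5) < math.sqrt(lam)
```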
Further research could involve applying this model to practical situations. For example, it could be used by an insurance company implementing a prevention plan to reduce the occurrence of claims, which follows a Poisson process. Another potential application could be a financial institution attempting to determine an optimal risk level. A third possible case could involve modeling the trade-off between carbon emissions and profitability.

Author Contributions

Conceptualization, R.G. and K.B.; methodology, R.G. and K.B.; writing—original draft preparation, R.G. and K.B.; writing—review and editing, R.G. and K.B. All authors have read and agreed to the published version of the manuscript.

Funding

Karim Barigou was funded by the AXA Research Fund. Romain Gauchon was supported by the Prevent’Horizon Chair, which is sponsored by the Risk Foundation Louis Bachelier and is in collaboration with Claude Bernard Lyon 1 University, Actuaris, AG2R La Mondiale, G2S, Covea, Groupama Gan Vie, Groupe Pasteur Mutualité, Harmonie Mutuelle, Humanis Prévoyance, and La Mutuelle Générale.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Acknowledgments

The authors express their gratitude to Sarah Bensalem, Clément Deslandes, and Pierre Montesinos for providing valuable and detailed feedback that significantly enhanced the quality of this current manuscript.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

Since the proofs are similar to the ones in the increasing case, only the main results are presented here.
Proposition A1. 
If ( X λ ) is a family of random variables decreasing for the convolution order, for all λ < Λ and for all n N * , we have the following:
1. 
$$\frac{\partial \kappa_n(X_\lambda)}{\partial \lambda} = -\lim_{h \to 0} \frac{\kappa_n(Z_{\lambda, \lambda+h})}{h}$$
2. 
$$\lim_{h \to 0} E\big(Z_{\lambda, \lambda+h}^{\,n}\big) = 0$$
3. 
$$\frac{\partial \kappa_n(X_\lambda)}{\partial \lambda} = -\lim_{h \to 0} \frac{E\big(Z_{\lambda, \lambda+h}^{\,n}\big)}{h}$$
Proposition A2. 
If u is entire, C is differentiable, and $(X_\lambda)_{\lambda \geq 0}$ is a family of random variables decreasing for the convolution order, then, for all $\lambda < \Lambda$, $\frac{\partial E(u(X_\lambda - C(\lambda)))}{\partial \lambda}$ exists and
$$\frac{\partial E\big(u(X_\lambda - C(\lambda))\big)}{\partial \lambda} = -C'(\lambda)\, E\big(u'(X_\lambda - C(\lambda))\big) + \sum_{n=1}^{\infty} \frac{\partial \kappa_n(X_\lambda)}{\partial \lambda}\, \frac{E\big(u^{(n)}(X_\lambda - C(\lambda))\big)}{n!}.$$
Proposition A3. 
If C is differentiable, $(X_\lambda)_{\lambda \geq 0}$ is a family of random variables decreasing for the convolution order, and $\sum_{n=k}^{\infty} (-1)^{n-k} \frac{\partial \kappa_n(X_\lambda)}{\partial \lambda} E\big((X_\lambda - C(\lambda))^{\alpha - n}\big) \frac{\Gamma(n - \alpha)\, \Gamma(\alpha + 1)}{\Gamma(\delta)\, \Gamma(1 - \delta)\, n!}$ exists, then, for all λ and all $\alpha = k - \delta > 0$, $k \in \mathbb{N}^*$, $\delta \in \,]0, 1]$,
$$\frac{\partial E\big((X_\lambda - C(\lambda))^{\alpha}\big)}{\partial \lambda} = -C'(\lambda)\, \alpha\, E\big((X_\lambda - C(\lambda))^{\alpha - 1}\big) + \sum_{n=1}^{k} \frac{\partial \kappa_n(X_\lambda)}{\partial \lambda} \binom{\alpha}{n} E\big((X_\lambda - C(\lambda))^{\alpha - n}\big) + \sum_{n=k+1}^{\infty} (-1)^{n-k}\, \frac{\partial \kappa_n(X_\lambda)}{\partial \lambda}\, E\big((X_\lambda - C(\lambda))^{\alpha - n}\big)\, \frac{\Gamma(n - \alpha)\, \Gamma(\alpha + 1)}{\Gamma(\delta)\, \Gamma(1 - \delta)\, n!}.$$

Notes

1
For a comprehensive overview, we refer to Quiggin (2012).
2
The scenario of a decreasing family is also relevant to model risk reduction strategies for an agent. This case closely resembles the increasing case and is expanded upon solely in Appendix A.
3
The term entire is typically associated with complex functions, but it can be readily adapted to real functions without any complications.
4
We recall the Chu–Vandermonde identity: if $s, t \in \mathbb{R}$ and $n \in \mathbb{N}$, then
$$\binom{s+t}{n} = \sum_{k=0}^{n} \binom{s}{k} \binom{t}{n-k}.$$
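For real arguments, the binomial coefficient is $\binom{s}{k} = \frac{s(s-1)\cdots(s-k+1)}{k!}$, and the identity can be checked numerically; the short sketch below (ours, with arbitrarily chosen real s and t) does so:

```python
def binom(s, k):
    """Generalized binomial coefficient C(s, k) for real s and integer k >= 0."""
    out = 1.0
    for i in range(k):
        out *= (s - i) / (i + 1)
    return out

s, t, n = 0.3, -1.7, 6  # arbitrary real parameters
lhs = binom(s + t, n)
rhs = sum(binom(s, k) * binom(t, n - k) for k in range(n + 1))
assert abs(lhs - rhs) < 1e-10  # Chu-Vandermonde holds for real s and t
```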

References

  1. Altonji, Joseph G. 1993. The demand for and return to education when education outcomes are uncertain. Journal of Labor Economics 11, Pt 1: 48–83. [Google Scholar] [CrossRef]
  2. Bensalem, Sarah, Nicolás Hernández Santibáñez, and Nabil Kazi-Tani. 2020. Prevention efforts, insurance demand and price incentives under coherent risk measures. Insurance: Mathematics and Economics 93: 369–86. [Google Scholar] [CrossRef]
  3. Bourbaki, Nicolas. 2007. Intégration: Chapitres 1 à 4. Berlin/Heidelberg: Springer Science & Business Media. [Google Scholar]
  4. Castaño-Martínez, A., F. López-Blázquez, and B. Salamanca-Miño. 2013. On the convolution order of weak records. Journal of Statistical Planning and Inference 143: 107–15. [Google Scholar] [CrossRef]
  5. Courbage, Christophe. 2001. Self-insurance, self-protection and market insurance within the dual theory of choice. The GENEVA Papers on Risk and Insurance-Theory 26: 43–56. [Google Scholar] [CrossRef]
  6. Cressie, Noel, and Marinus Borkent. 1986. The moment generating function has its moments. Journal of Statistical Planning and Inference 13: 337–44. [Google Scholar] [CrossRef]
  7. Denuit, Michel, Jan Dhaene, Marc Goovaerts, and Rob Kaas. 2006. Actuarial Theory for Dependent Risks: Measures, Orders and Models. Hoboken: John Wiley & Sons. [Google Scholar]
  8. Ehrlich, Isaac, and Gary S. Becker. 1972. Market insurance, self-insurance, and self-protection. Journal of Political Economy 80: 623–48. [Google Scholar] [CrossRef]
  9. Gut, Allan. 2013. Probability: A Graduate Course. Berlin/Heidelberg: Springer Science & Business Media, vol. 75. [Google Scholar]
  10. Kilbas, Anatoly A., Oleg I. Marichev, and Stefan G. Samko. 1993. Fractional Integrals and Derivatives (Theory and Applications). Montreux: Gordon and Breach. [Google Scholar]
  11. Klemperer, Paul D., and Margaret A. Meyer. 1989. Supply function equilibria in oligopoly under uncertainty. Econometrica: Journal of the Econometric Society 57: 1243. [Google Scholar] [CrossRef]
  12. Lee, Kangoh. 1998. Risk aversion and self-insurance-cum-protection. Journal of Risk and Uncertainty 17: 139–51. [Google Scholar] [CrossRef]
  13. Loewenstein, Mark A., and James R. Spletzer. 1998. Dividing the costs and returns to general training. Journal of Labor Economics 16: 142–71. [Google Scholar] [CrossRef]
  14. Morgenstern, Oskar, and John Von Neumann. 1953. Theory of Games and Economic Behavior. Princeton: Princeton University Press. [Google Scholar]
  15. Patil, G. P. 1963. A characterization of the exponential-type distribution. Biometrika 50: 205–7. [Google Scholar]
  16. Quiggin, John. 2012. Generalized Expected Utility Theory: The Rank-Dependent Model. Berlin/Heidelberg: Springer Science & Business Media. [Google Scholar]
  17. Raina, R. K. 1986. The Weyl fractional operator of a system of polynomials. Rendiconti del Seminario Matematico della Università di Padova 76: 171–76. [Google Scholar]
  18. Sandmo, Agnar. 1971. On the theory of the competitive firm under price uncertainty. The American Economic Review 61: 65–73. [Google Scholar]
  19. Shaked, Moshe, and Alfonso Suarez-Llorens. 2003. On the comparison of reliability experiments based on the convolution order. Journal of the American Statistical Association 98: 693–702. [Google Scholar] [CrossRef]
  20. Smith, Peter J. 1995. A recursive formulation of the old problem of obtaining moments from cumulants and vice versa. The American Statistician 49: 217–18. [Google Scholar] [CrossRef]
  21. Yaari, Menahem E. 1987. The dual theory of choice under risk. Econometrica: Journal of the Econometric Society 55: 95. [Google Scholar] [CrossRef]
  22. Yang, Xiao-Jun. 2019. General Fractional Derivatives: Theory, Methods and Applications. Boca Raton: CRC Press. [Google Scholar]
  23. Zhang, Ying. 2018. Stochastic orders for convolution of heterogeneous gamma and negative binomial random variables. arXiv arXiv:1811.11360. [Google Scholar]