Article

Optimal Risk Sharing in Society

Department of Business and Management Science, Norwegian School of Economics, 5045 Bergen, Norway
Mathematics 2022, 10(1), 161; https://doi.org/10.3390/math10010161
Submission received: 17 November 2021 / Revised: 16 December 2021 / Accepted: 19 December 2021 / Published: 5 January 2022
(This article belongs to the Special Issue The Mathematics of Pandemics: Applications for Insurance)

Abstract

We consider risk sharing among individuals in a one-period setting under uncertainty that will result in payoffs to be shared among the members. We start with optimal risk sharing in an Arrow–Debreu economy, or equivalently, in a Borch-style reinsurance market. From the results of this model we can infer how risk is optimally distributed between individuals according to their preferences and initial endowments, under some idealized conditions. A main message in this theory is the mutuality principle, which is of interest in relation to the economic effects of pandemics. From this we point out some elements of a more general theory of syndicates, where, in addition, a group of people must make a common decision under uncertainty. We extend to a competitive market as a special case of such a syndicate.
JEL Classification:
G10; G12; D9; D51; D53; D90; E21

1. Introduction

We analyze optimal risk sharing in society at large. We consider a one-period model of uncertainty where Pareto optimal risk sharing, or equivalently, optimal consumption is characterized. The article is primarily a review paper, where the possible originality is in the presentation and composition of the various subject matters, in some of the proofs, in extensions, and within various applications that are pointed out.
We discuss the no-arbitrage question and the associated state prices, both issues important in the financial literature, and also in optimal risk sharing. Here we use the most basic framework, and this material can be found in an appendix. We characterize Pareto optimality in Section 3, with examples. A competitive equilibrium is a special Pareto optimal sharing rule, which we treat in Section 4, also with examples.
In Section 5, we work with syndicates, where a group of individuals must make a common decision under uncertainty that will result in a payoff to be shared jointly among the members. Here we present conditions under which the syndicate behaves as a Savage rational decision maker. This was first treated in [1] as an extension of the general model by [2].
In this section we review conditions under which any member of the syndicate can be delegated the task of making decisions under uncertainty on behalf of the group. This problem is considered both when the members of the group have homogeneous beliefs, and when the probabilities are heterogeneous. Interestingly enough, even in the latter case it is possible to find some common ground.
An illustration is given of unanimity within a group via an example of optimal diversification, where some classical results are recovered. Existence of an evaluation measure of a syndicate is of importance in this theory.
The framework may be used to study risk sharing at various levels in society, like the risk sharing problem in mutual insurance companies, reinsurance markets, or at the state level of a given nation, and also between nations via international organizations.
The paper ends with an application of the theory of syndicates to financial markets and general equilibrium.

2. The Basic Risk-Exchange Model

In this section we study the following basic one period model having two time points, 0 and 1. Let $\mathcal{N} = \{1, 2, \ldots, N\}$ be a group of $N$ agents having preferences $\succeq_i$ over a suitable set of random variables, or gambles, with realizations (outcomes) in some subset $A \subseteq \mathbb{R}$, $i = 1, 2, \ldots, N$. Here $\mathbb{R} = (-\infty, \infty)$. This preference relation is a binary relation on the relevant set of probability distributions that is transitive and complete, and that satisfies the von Neumann–Morgenstern axioms (the substitution axiom and the continuity axiom).
The model is rather general, and the agents could be insurers, reinsurers, consumers, or even countries or regions. For convenience, we shall phrase the model in terms of individuals, consumers, or insurers.
The preferences are represented by expected utility, meaning that there is a set of continuous utility functions (indices) $u_i: \mathbb{R} \to \mathbb{R}$, such that $x \succeq_i y$ if and only if $E u_i(x) \geq E u_i(y)$, where x and y are random variables signifying final consumption, or final portfolios. The reader may recognize this as a hybrid of the Savage approach and the von Neumann–Morgenstern theory. Viewed in light of interpretations in a business world, however, it is in the spirit of the Savage theory provided all the agents have the same (subjective) probability measure P. This is illustrated later.
We assume monotonic preferences, and risk aversion, so that, granted enough smoothness, we have $u_i'(w) > 0$, $u_i''(w) \leq 0$ for all w in the relevant domains (note that the concepts of monotonicity and risk aversion make perfect sense without assuming the existence of these derivatives). Each agent is endowed with a random payoff $x_i$, a random variable at time 0, called an initial endowment (or portfolio). More precisely, there exists a probability space $(\Omega, \mathcal{F}, P)$ such that agent i is entitled to the payoff $x_i(\omega)$ at time 1 if $\omega \in \Omega$ occurs, $i \in \mathcal{N}$. This means that uncertainty is objective and external, and all the uncertainty is revealed at time 1. There is no informational asymmetry. All parties agree upon $(\Omega, \mathcal{F}, P)$ as the probabilistic description of the stochastic environment, the latter being unaffected by their actions. It will be convenient to posit that both expected values and variances exist for all these initial portfolios, which means that all $x_i \in L^2(\Omega, \mathcal{F}, P)$, or just $x_i \in L^2$ for short.
We suppose the agents can negotiate any affordable contracts among themselves, resulting in a new set of random variables $y_i$, $i \in \mathcal{N}$, representing the possible final consumption to the different members of the group, or final portfolios (in a one-period model final consumption equals final wealth).
In the equilibrium version of our model, transactions are carried out right away at “market prices”, where $\Pi(c)$ represents the market price for any $c \in L^2$, i.e., it signifies the group’s valuation of the random variable c relative to the other random variables in $L^2$ at time 0. Notice that this trade, or risk exchange, takes place at time 0, and results in the random final consumption bundles $y_i$, $i \in \mathcal{N}$, also at time 0. Then if $\omega \in \Omega$ happens, the consumption of agent i is $y_i(\omega)$, here a real number at time 1. The essential objective is then to determine:
(a) Pareto optimal sharing rules.
(b) The market price $\Pi(c)$ of any consumption bundle $c \in L^2$ from the set of preferences of the agents and the joint cumulative probability distribution function $F(x_1, x_2, \ldots, x_N)$ of the random vector $x = (x_1, x_2, \ldots, x_N)$.
(c) For each $i \in \mathcal{N}$, the final consumption bundle $y_i$ most preferred by agent i among those satisfying their budget constraint $\Pi(y_i) \leq \Pi(x_i)$.
In case (a), market prices as in Equation (1) below are not part of the program.

2.1. Some Basic References

References to this type of problem are plentiful, both in the economics literature, in the actuarial literature, and scattered across various other journals. The model related to optimal consumption in a world of uncertainty is treated in [3,4,5] in the economics literature. In the actuarial literature we have, for example, [6] as well as the basic treatment by [7,8,9], where the focus is on insurance and reinsurance. In the economics literature, we also have [2,10,11,12], here directed at the economics of insurance. These models have been reviewed and extended in, for example, [13,14,15,16,17]. This type of model has also been extended to a dynamic setting in [18] related to insurance, and there is a large literature in economic dynamics, where elements of this model are central, several of them treated in [19,20].

2.2. The No Arbitrage Requirement

Before we start, some basic facts are in order. First, observe that the set of possible events $\mathcal{F} = \mathcal{F}_x := \sigma(x_1, x_2, \ldots, x_N)$ is the sigma-field generated by the initial random variables x, so that any random variable can be written in the form $y = f(x_1, x_2, \ldots, x_N)$ for f a suitable Borel-measurable function (this is a result known from measure theory, e.g., [21]). This means that the optimal final portfolios satisfy $y_i = f_i(x_1, x_2, \ldots, x_N)$ for some appropriate functions $f_i$. In order to avoid trivialities, we assume that $\mathcal{F}_x$ is complete, i.e., augmented with all the sets of P-measure zero.
Second, for (b) and (c) above we require that there is no arbitrage. By an arbitrage, or an arbitrage possibility, we mean the possibility of receiving a strictly positive amount at time 1 in some set of states $A$ of positive probability ($P(A) > 0$), without paying anything net at time 0. Alternatively, an arbitrage would also exist if an agent obtains a strictly positive amount at time 0 with no further payments at time 1. In other words, an arbitrage possibility would yield a strictly positive amount with positive probability, either at time 0, or at time 1, or possibly at both time points, and with no payout at any time, i.e., the possibility of receiving something net with no risk. It is hardly surprising that this possibility cannot be allowed in a simple and rational model of an insurance or financial market.
Let us assume that market prices exist, denote the market price of any risk y by $\Pi(y)$, and consider risks in the space $L^2$. Then we can show that unless the functional $\Pi(\cdot)$ is linear and strictly positive, there would be arbitrage possibilities.
The following theorem can be shown:
Theorem 1.
There is no arbitrage if and only if there exists a strictly positive random variable π, the state price deflator, representing prices through the relation:
$$\Pi(y) = E(y\pi) \quad \text{for all } y \in L^2.$$
Proof: See Appendix A.
An analogue of the above theorem in financial markets, where only common stocks can be traded, is known as “The Fundamental Theorem of Asset Pricing”. In the finite case, the theorem is due to Steven Ross [22] in the one period framework, and the same reasoning carries over to the dynamic case having a finite number of time periods. There is an extensive literature on the infinite dimensional case, some of which is reviewed in [19].
Readers familiar with the economics of uncertainty will typically be acquainted with the concept of state prices; here π is the Arrow–Debreu state price in units of probability.
My experience is that while this concept is well known to economists, it is less well known or not quite appreciated by mathematicians. In order to explain its importance, we provide a basic exposition in Appendix B.
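To make the role of the state price deflator concrete, the following minimal numerical sketch (not part of the original exposition; the state probabilities and deflator values are illustrative choices made here) prices claims on a three-state space via $\Pi(y) = E(y\pi)$ and checks the linearity and strict positivity that Theorem 1 requires:

```python
# A minimal numerical sketch of Theorem 1 on a three-state space. The state
# probabilities and the deflator values are illustrative, not from the paper.
import numpy as np

P = np.array([0.2, 0.5, 0.3])          # objective probabilities of the states
pi = np.array([1.4, 1.0, 0.8])         # a strictly positive state-price deflator

def price(y):
    """Market price Pi(y) = E[y * pi]."""
    return float(np.sum(P * pi * y))

y1 = np.array([100.0, 50.0, 0.0])      # a claim paying off in states 1 and 2
y2 = np.array([0.0, 0.0, 10.0])        # a claim paying off only in state 3

# The pricing functional is linear ...
assert abs(price(2 * y1 + 3 * y2) - (2 * price(y1) + 3 * price(y2))) < 1e-12
# ... and strictly positive: a non-negative claim with a strictly positive
# payoff on a set of positive probability has a strictly positive price,
# which is exactly what rules out arbitrage.
assert price(y2) > 0

# The Arrow-Debreu state prices are P[k] * pi[k]; pi is the state price
# "in units of probability".
print("state prices:", P * pi, " price of y1:", price(y1))
```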

3. Pareto Optimality

Next we introduce the concept of (strong) Pareto optimality of an allocation. This is a criterion on the outcome of a negotiation process between individuals that does not depend on the probability distribution of the initial portfolios.
We need the following definition:
Definition 1.
An allocation $z = (z_1, z_2, \ldots, z_N)$ is called feasible if:
$$\sum_{i=1}^N z_i \leq \sum_{i=1}^N x_i := x_M.$$
The concept of Pareto optimality offers a minimal and uncontroversial test that any socially optimal economic outcome should pass. In words, an economic outcome is Pareto optimal if it is impossible to make some individuals better off without making some other individuals worse off. Formally we have:
Definition 2.
A feasible allocation $y = (y_1, y_2, \ldots, y_N)$ is called Pareto optimal if there is no feasible allocation $z = (z_1, z_2, \ldots, z_N)$ with $E u_i(z_i) \geq E u_i(y_i)$ for all i and with $E u_j(z_j) > E u_j(y_j)$ for some j.
Next we give a few useful results in establishing Pareto optimality.

3.1. The Characterization of a Pareto Optimum

First we focus on the following “representative agent” result:
Theorem 2.
Suppose $u_i$ are concave and increasing for all i. Then the allocation $y = (y_1, y_2, \ldots, y_N)$ is Pareto optimal if and only if there exists a nonzero vector of agent weights $\lambda \in \mathbb{R}_+^N$ such that $(y_1, y_2, \ldots, y_N)$ solves the problem:
$$E\big(u(x_M \,|\, \lambda)\big) := \sup_{(z_1, \ldots, z_N)} \sum_{i=1}^N \lambda_i E u_i(z_i) \quad \text{subject to} \quad \sum_{i=1}^N z_i \leq x_M.$$
In the above, $u(\cdot \,|\, \lambda)$, or equivalently $u_\lambda(\cdot)$, is defined as the function, possibly depending on the agent weights λ, satisfying the real optimization problem $u: \mathbb{R} \to \mathbb{R}$ defined by:
$$u(x \,|\, \lambda) = \sup_{z \in \mathbb{R}^N} \sum_{i=1}^N \lambda_i u_i(z_i) \quad \text{subject to} \quad z_1 + z_2 + \cdots + z_N \leq x.$$
This problem is referred to as the sup convolution problem, and the function u ( · | λ ) is called the utility function of the representative agent in economics and finance, or the evaluation measure of the group of individuals in the reinsurance literature and in the theory of syndicates. Below we shall show how it is linked to the Lagrange multiplier associated with problem (3).
Let us first explain the connection between the solutions of problems (2) and (3). In that regard we assume all the random variables are defined on the same probability space $(\Omega, \mathcal{F}, P)$ with generic element $\omega \in \Omega$. A subset $A \subseteq \Omega$, $A \in \mathcal{F}$, with $P(A) = 0$ we call a P-null set for short. Problem (3) can be interpreted as a real-valued problem for any given state $\omega \in \Omega$.
Problem (3) can be interpreted as a decision problem in which the group must share a “cake” of size $x(\omega)$ in order to maximize a weighted sum of the members’ utilities. The proof that the optimal solution $(y_1, y_2, \ldots, y_N)$ generated by the sequence of “cake-sharing” programs (3) is the optimal solution of problem (2) can be obtained by contradiction. The reason is the additive nature of utility. Suppose that this is not the case. Then there is a feasible allocation $\hat{y}$, other than the one generated by the programs (3), that satisfies $E\big(\sum_{i=1}^N \lambda_i u_i(\hat{y}_i)\big) > E\big(\sum_{i=1}^N \lambda_i u_i(y_i)\big)$. However, since $y(\omega)$ solves (3) state by state, $\sum_{i=1}^N \lambda_i u_i(y_i(\omega)) \geq \sum_{i=1}^N \lambda_i u_i(\hat{y}_i(\omega))$ for all $\omega \in \Omega$ except for a P-null set, and taking expectations yields $E\big(\sum_{i=1}^N \lambda_i u_i(y_i)\big) \geq E\big(\sum_{i=1}^N \lambda_i u_i(\hat{y}_i)\big)$, a contradiction.
We can now return to the proof of the theorem itself, which can be found in Appendix C.
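As a complement, the following small numerical sketch (illustrative only; the weights, risk aversions, and cake size are assumptions made here) solves one state-by-state "cake-sharing" problem (3) with a generic optimizer and checks that the weighted marginal utilities are equalized at the optimum, as the next theorem asserts:

```python
# A small sketch of the "cake-sharing" problem (3) for one fixed aggregate
# outcome x, solved numerically; utilities and weights are illustrative.
import numpy as np
from scipy.optimize import minimize

lam = np.array([1.0, 2.0, 0.5])                 # agent weights lambda_i
gamma = np.array([0.5, 1.0, 2.0])               # relative risk aversions

def u(z, g):
    return np.log(z) if g == 1.0 else (z**(1.0 - g) - 1.0) / (1.0 - g)

def u_prime(z, g):
    return z**(-g)

def neg_weighted_utility(z):
    return -sum(l * u(zi, g) for l, zi, g in zip(lam, z, gamma))

x = 10.0                                        # size of the "cake" x(omega)
cons = {"type": "eq", "fun": lambda z: np.sum(z) - x}
res = minimize(neg_weighted_utility, x0=np.full(3, x / 3),
               constraints=[cons], bounds=[(1e-8, None)] * 3)

z_star = res.x
# Borch's condition: the weighted marginal utilities lambda_i * u_i'(z_i) are
# equalized, and the common value is the Lagrange multiplier u'(x | lambda).
print("optimal shares:", z_star, " sum:", z_star.sum())
print("lambda_i * u_i'(z_i):", lam * u_prime(z_star, gamma))
```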
This basic result gives rise to the following characterization of a Pareto optimum. This result is known as Borch’s Theorem:
Theorem 3.
A Pareto optimum y is characterized by the existence of non-negative agent weights $\lambda_1, \lambda_2, \ldots, \lambda_N$ and the real function $u: \mathbb{R} \to \mathbb{R}$ such that:
$$\lambda_1 u_1'(y_1) = \lambda_2 u_2'(y_2) = \cdots = \lambda_N u_N'(y_N) := u'(x_M \,|\, \lambda) \quad a.s.$$
The proof of Theorem 3 can be found in Appendix C.
First, to be noticed here is that the $y_i$'s only depend on the marginal utility functions, not on the probability distribution of the random vector $(x_1, x_2, \ldots, x_N)$ of the agents' initial holdings. Thus it is not necessary to know this probability distribution in order to characterize Pareto optimal allocations.
Second, as a by-product we have a characterization of the state price deflator π of the last sections, and this key quantity is connected to the economy via the identity:
$$\pi = c\, u'(x_M \,|\, \lambda) \quad a.s.,$$
where c > 0 is some normalizing constant. We return to this later when we consider a competitive equilibrium.
Alternatively, the above step can be further formalized by a theorem of Zahl [23], who analyzed the part of infinite dimensional analysis where the Lagrange multiplier must be a function of the state variable. In our case, it means that the Lagrange multiplier computed at x M is stochastic, since x M is a random variable.
Next we illustrate by an example. For this we need the following definition: The relative risk aversion is defined by the function $R_u(x) = -\frac{x u''(x)}{u'(x)}$ for any utility function u. First we consider the case of constant relative risk aversion.
Example 1.
Consider the case of power utility, where $u_i(x) = (x^{1-\gamma_i} - 1)/(1 - \gamma_i)$ for $x > 0$, $\gamma_i \neq 1$, and $u_i(x) = \ln(x)$ for $x > 0$ when $\gamma_i = 1$, where the natural logarithm results as a limit when $\gamma_i \to 1$. This example only makes sense in the no-bankruptcy case where $x_i > 0$ P-a.s. for all i. The parameters $\gamma_i > 0$ are then the relative risk aversions of the agents, which are given by positive constants for this class of preferences.
Consider first the case where $\gamma_1 = \gamma_2 = \cdots = \gamma_N = \gamma$. Here all the marginal utilities are given by $u_i'(x) = x^{-\gamma}$, and using Theorem 3, the first order condition for Pareto optimality is:
$$\lambda_i u_i'(y_i) = u'(x_M), \quad a.s. \ \text{for all } i,$$
which implies that $y_i = \lambda_i^{1/\gamma}\, (u'(x_M))^{-1/\gamma}$, a.s. Using the market clearing condition $x_M = \sum_{i \in \mathcal{N}} y_i$, a.s., which only says that no risk disappears during the process of risk sharing, we obtain:
$$u'(x_M) = \Big(\sum_{i \in \mathcal{N}} \lambda_i^{1/\gamma}\Big)^{\gamma} x_M^{-\gamma} \quad a.s.,$$
showing that the marginal utility of the representative agent is of the same type as that of the individual agents. The optimal sharing rules are linear, and given by:
$$y_i = \frac{\lambda_i^{1/\gamma}}{\sum_{j \in \mathcal{N}} \lambda_j^{1/\gamma}}\, x_M \quad a.s. \ \text{for all } i \in \mathcal{N}.$$
From this example, we notice that the optimal allocations depend on the initial ones only through the aggregate $x_M$. Thus we may write $y_i = y_i(x_M)$ for all $i \in \mathcal{N}$. This is quite general, and follows from (2) of Theorem 2, as well as Equation (3) and Equation (4) of Theorem 3.
The linearity of the optimal allocations is lost when the relative risk aversions are allowed to be different for the various agents. This leads to non-linear contracts.
In addition, notice that we have dropped the λ-dependence in the marginal utility function of the representative agent. The explanation for this will come later, but notice that prices are determined modulo a normalization, which can here be taken to be $\big(\sum_{i \in \mathcal{N}} \lambda_i^{1/\gamma}\big)^{\gamma}$. This means that the evaluation measure u does not depend on the sharing rule λ.
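A short numerical sketch of Example 1 may be useful (with parameter values chosen purely for illustration): with a common $\gamma$ the optimal rules are linear in $x_M$, while with different relative risk aversions the rules, obtained here by solving Borch's condition numerically, are no longer linear.

```python
# Sketch of Example 1: with a common gamma the Pareto optimal rules are linear
# in x_M; with different gammas they are not. Parameter values are illustrative.
import numpy as np
from scipy.optimize import brentq

lam = np.array([1.0, 3.0])                      # agent weights lambda_i

def shares_equal_gamma(x_M, gamma):
    w = lam**(1.0 / gamma)
    return w / w.sum() * x_M                    # linear sharing rule of Example 1

def shares_hetero(x_M, gammas):
    # Solve Borch's condition lambda_i y_i^(-gamma_i) = m together with market
    # clearing sum_i y_i = x_M for the common marginal utility m.
    def excess(m):
        return ((lam / m)**(1.0 / gammas)).sum() - x_M
    m = brentq(excess, 1e-10, 1e10)
    return (lam / m)**(1.0 / gammas)

for x_M in (1.0, 5.0, 10.0):
    print(x_M, shares_equal_gamma(x_M, 2.0),
          shares_hetero(x_M, np.array([0.5, 3.0])))
# The first rule allocates a fixed fraction of x_M to each agent; in the second,
# the fractions themselves change with x_M, i.e., the contracts are non-linear.
```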

3.2. Risk Tolerance and Aggregation

Consider a group of individuals, let us call such a group a syndicate, where the sharing rules are Pareto optimal. Of interest now are two basic and useful results. For this we first define what we mean by the absolute risk aversion of an agent with utility function u. It is given by $A(x) = -\frac{u''(x)}{u'(x)}$, and is strictly positive under our assumptions. The risk tolerance of an agent is defined by $\rho(x) = 1/A(x)$.
The first result relates the Pareto optimal allocations to a solution of a system of ordinary differential equations. It says the following:
The Pareto optimal contracts $y_i(x)$, as real functions of the real variable $x \in B$, satisfy the non-linear, first-order ordinary differential equation:
$$y_i'(x) = \frac{A_\lambda(x)}{A_i(y_i(x))}, \quad y_i(x_0) = b_i, \quad x, x_0 \in B, \ i \in \mathcal{N},$$
where $A_\lambda(x) = -\frac{u_\lambda''(x)}{u_\lambda'(x)}$ is the absolute risk aversion function of the representative agent, and $A_i(y_i(x)) = -\frac{u_i''(y_i(x))}{u_i'(y_i(x))}$ is the absolute risk aversion of agent i at the Pareto optimal allocation function $y_i(x)$, $i \in \mathcal{N}$. The notations $u_\lambda$ and $A_\lambda$ have the same meaning as $u(\cdot \,|\, \lambda)$ in that the functions u and A may depend on the sharing rule λ. Here $y_i(x_0) = b_i$ represent the initial conditions of these differential equations, where $\sum_{i=1}^N b_i = x_0$.
The second result says that the risk tolerance of the representative agent is the sum of the risk tolerances of the individual members at the Pareto optimal allocations. More precisely,
$$\rho_\lambda(x_M) = \sum_{i \in \mathcal{N}} \rho_i(y_i(x_M)) \quad a.s.$$
as an equality between random variables. This allows us to rewrite the differential Equations (5) as follows:
$$\frac{d y_i(x)}{dx} = \frac{\rho_i(y_i(x))}{\rho_\lambda(x)}, \quad y_i(x_0) = b_i, \quad x, x_0 \in B.$$
We summarize these two results as:
Theorem 4.
(a) The risk tolerance of the syndicate ρ λ ( x M ) equals the sum of the risk tolerances of the individual agents in a Pareto optimum.
(b) The real, Pareto optimal allocation functions $y_i(x): B \to \mathbb{R}$, $i \in \mathcal{N}$, satisfy the first order ordinary, nonlinear differential Equations (5), or equivalently, (7).
Since these two results are central in the theory, in Appendix C we present a simple proof.
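The differential equations (7) also lend themselves to direct numerical integration. The following sketch (the CRRA utilities and initial conditions are illustrative choices made here) integrates the system for two agents and tracks both feasibility and the sum of the individual risk tolerances, which by (6) equals the risk tolerance of the syndicate:

```python
# Sketch of Theorem 4: integrate the system (7) numerically for two agents
# and illustrate the aggregation property (6) along the solution.
# The CRRA utilities and initial conditions are illustrative choices.
import numpy as np
from scipy.integrate import solve_ivp

gammas = np.array([0.8, 2.5])          # CRRA coefficients; rho_i(y) = y / gamma_i

def rhs(x, y):
    rho = y / gammas                   # individual risk tolerances at y_i(x)
    return rho / rho.sum()             # dy_i/dx = rho_i(y_i)/rho_lambda(x), Eq. (7)

x0, b = 2.0, np.array([0.5, 1.5])      # initial conditions, b_1 + b_2 = x0
sol = solve_ivp(rhs, (x0, 20.0), b, dense_output=True, rtol=1e-10, atol=1e-12)

xs = np.linspace(x0, 20.0, 5)
ys = sol.sol(xs)
# Feasibility y_1(x) + y_2(x) = x is preserved because the slopes sum to one,
# and by (6) the sum of the individual risk tolerances below is the syndicate's
# risk tolerance rho_lambda(x) at each x.
print("x:                ", xs)
print("y_1 + y_2:        ", ys.sum(axis=0))
print("sum of rho_i(y_i):", (ys / gammas[:, None]).sum(axis=0))
```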
The result (6) has several interesting interpretations. One is that the syndicate can better carry risk than the individuals can in autarky. For example, a mutual insurance company can be interpreted as a syndicate, where the syndicate members are the customers. This means that the members smooth their individual risks in a pool, and in a Pareto optimum, their individual liability is a real function of the aggregate risk.
By (7) it follows that contracts y i ( x ) are all increasing in x. This yields the mutuality principle. It means that an aggregate wealth increase will affect all members in a positive direction, and a wealth decrease will affect all the members negatively. The direction is the same for all, but how much is individual.
The economic consequences of pandemics may be observed to have these features: It affects most people negatively, but to a varying degree.
A nation, or any international organization like the EU, UN, or Red Cross, can be considered as a syndicate, as can the whole world for that matter, where the members are the inhabitants of the country, or nations of the organization, or in the world, respectively. Since a nation is, by the above result, less risk averse than the individual people that make it up, some projects are better undertaken by the state; they may simply be too grand for individual citizens. Typical infrastructural projects like roads, tunnels, bridges, railways, harbors, airports, museums, etc. are often undertaken by the state. Similar interpretations are valid for organizations, or the world. For example, the climate problems facing earth are too big for any nation to solve alone, which calls for international cooperation.
Pareto optimal contracts are characterized by a continuum of contracts along a Pareto optimal frontier. Consider for example the case N = 2 . The feasible contracts can then be thought of as a bounded section of the first quadrant by a concave curve, the Pareto frontier, from one axis to the other (see Figure 1).
A tangent to the Pareto frontier is characterized by two numbers, the agent weights $(\lambda_1, \lambda_2)$ determining the slope of the tangent, which is $-\lambda_1/\lambda_2$. Imagine the line with this slope, $\lambda_1 E u_1(z_1) + \lambda_2 E u_2(z_2) = c$, that cuts through the feasible region. It corresponds to a particular sharing rule. Then move the line with this slope, by varying the constant c, until it is tangent to the Pareto frontier. The point at which this happens corresponds to the Pareto optimum for this particular sharing rule. By varying the weights, and repeating this process, the Pareto optimal frontier is spanned out.
The Pareto optimal frontier includes contracts that will not be likely outcomes of real negotiations between the parties, since they may not satisfy individual rationality. For N individuals this means that only the section of the Pareto optimal frontier that satisfies:
$$E u_i(y_i(x_M)) \geq E u_i(x_i), \quad i \in \mathcal{N},$$
will be rational for all parties. This section is called the core. For N = 2 it is indicated in Figure 1.

3.3. HARA-Utility Functions

The class of utility functions with affine risk tolerances is called the Hyperbolic Absolute Risk Aversion (HARA) class, and plays a special role in this theory. The utility functions in this class can be given by analytic expressions. Recall that with expected utility, if the utility function u ( x ) represents the preference relation, then a u ( x ) + b represents the same preference relation, where a and b are two scalars with a > 0 .
The utility function $u(\cdot): \mathbb{R} \to \mathbb{R}$ such that $\rho(x) = \alpha + \beta x > 0$ is HARA with coefficients $(\alpha, \beta)$ if and only if there exist scalars a and b such that:
$$a\, u(x) + b = \begin{cases} \dfrac{1}{\beta - 1}\, (\alpha + \beta x)^{(\beta - 1)/\beta}, & \text{if } \beta \neq 0 \text{ and } \beta \neq 1; \\[4pt] \ln(\alpha + x), & \text{if } \beta = 1; \\[4pt] -\alpha\, e^{-x/\alpha}, & \text{if } \beta = 0 \text{ and } \alpha > 0. \end{cases}$$
When the parameter $\beta = -1$, we have quadratic utility, which is also a member of the HARA class.
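As a quick sanity check (not part of the paper, with parameter values chosen here for illustration), the risk tolerance of each branch above can be verified numerically by finite differences:

```python
# A small numerical check that each branch of the HARA family above has
# risk tolerance rho(x) = alpha + beta * x. Parameter values are illustrative.
import numpy as np

def hara(alpha, beta):
    if beta == 0.0:
        return lambda x: -alpha * np.exp(-x / alpha)
    if beta == 1.0:
        return lambda x: np.log(alpha + x)
    return lambda x: (alpha + beta * x)**((beta - 1.0) / beta) / (beta - 1.0)

def risk_tolerance(u, x, h=1e-4):
    up = (u(x + h) - u(x - h)) / (2 * h)            # u'(x), central difference
    upp = (u(x + h) - 2 * u(x) + u(x - h)) / h**2   # u''(x)
    return -up / upp                                 # rho(x) = -u'(x)/u''(x)

x = 1.3
for alpha, beta in [(2.0, 0.0), (1.0, 1.0), (1.0, 0.5), (0.0, 2.0)]:
    print(f"alpha={alpha}, beta={beta}: "
          f"rho(x)={risk_tolerance(hara(alpha, beta), x):.4f}, "
          f"alpha+beta*x={alpha + beta * x:.4f}")
```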
In the paper we will be interested in optimal risk sharing, and in order to gain some basic insight, it is an advantage to consider a class of utility functions where agents are allowed to have different preferences, and where optimal sharing rules are affine. Our next example satisfies both these criteria, and is included in the above list.
Example 2.
Consider the case with negative exponential utility functions, with marginal utilities $u_i'(z) = e^{-z/\rho_i}$, $i \in \mathcal{N}$, where $\rho_i^{-1}$ is the absolute risk aversion of agent i, or $\rho_i$ is the corresponding risk tolerance. Using the characterization (4), the first order conditions for Pareto optimal sharing rules are:
$$\lambda_i\, e^{-y_i/\rho_i} = u'(x_M), \quad a.s., \ i \in \mathcal{N}.$$
After taking logarithms in this relation, and summing over i, market clearing implies:
$$u'(x_M) = e^{(K - x_M)/\rho}, \quad a.s., \quad \text{where } K := \sum_{i=1}^N \rho_i \ln \lambda_i \ \text{ and } \ \rho := \sum_{i=1}^N \rho_i.$$
Furthermore, from the same first-order conditions we also obtain that the optimal sharing rules (or portfolios) can be written:
$$y_i(x_M) = \frac{\rho_i}{\rho}\, x_M + b_i, \quad \text{where } b_i = \rho_i \ln \lambda_i - \rho_i \frac{K}{\rho}, \quad i \in \mathcal{N}.$$
The “reinsurance contracts”, if we for the moment use this interpretation, involve optimal sharing rules that are affine in $x_M$. Market clearing holds here, since $\sum_{i \in \mathcal{N}} y_i'(x_M) = 1$ and $\sum_{i \in \mathcal{N}} b_i = 0$.
As we will show in the next section, the result of this example is consistent with theory, since our utility functions belong to the HARA class with identical cautiousness, or slope $\rho'(x) = 0$.
Contracts of this type are termed proportional reinsurance (the more correct term, affine, is not in industry use). The constants of proportionality $\rho_i/\rho$ are simply equal to each agent's risk tolerance, measured relative to the group. The more risk tolerant a member is, the larger the fraction of the aggregate risk that is held.
In order to compensate for the fact that the least risk-averse reinsurer will hold the larger proportion of the market, zero-sum side payments occur between the reinsurers, here represented by the terms b i .
Without these side payments, an agent, with a “small” initial endowment but with a large risk tolerance, would end up with a “large” final endowment, but this would not be consistent with individual rationality, or as we will see below, with the agents’ budget constraints.
This kind of treaty seems common in reinsurance practice, and is, moreover, easy to interpret and understand.
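The affine rules of Example 2 are straightforward to compute. The sketch below (the risk tolerances and weights are illustrative assumptions made here) computes $K$, $\rho$, and the side payments $b_i$, and verifies that the side payments sum to zero and that Borch's condition holds:

```python
# Sketch of Example 2: the affine rules y_i(x_M) = (rho_i/rho) x_M + b_i for
# given agent weights; the tolerances and weights are illustrative assumptions.
import numpy as np

rho_i = np.array([1.0, 2.0, 4.0])               # individual risk tolerances
lam = np.array([0.5, 1.0, 2.0])                 # agent weights lambda_i

rho = rho_i.sum()
K = np.sum(rho_i * np.log(lam))
b = rho_i * np.log(lam) - rho_i * K / rho       # zero-sum side payments

def y(x_M):
    return rho_i / rho * x_M + b

x_M = 3.0
print("side payments b_i:", b, " sum:", b.sum())
print("allocations y_i(x_M):", y(x_M), " sum:", y(x_M).sum())

# Borch's first-order condition: lambda_i * exp(-y_i/rho_i) is the same for
# every agent and equals u'(x_M) = exp((K - x_M)/rho).
print(lam * np.exp(-y(x_M) / rho_i), np.exp((K - x_M) / rho))
```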
An example with the more general form of HARA utility class is the following:
Example 3.
Consider the class $u_i(x) = \frac{1}{\beta - 1}(\alpha_i + \beta x)^{(\beta - 1)/\beta}$, $i \in \mathcal{N}$, when $\beta \neq 1$ and $\beta \neq 0$. Using the first-order characterization (4), we obtain the following:
$$\lambda_i\, (\alpha_i + \beta y_i(x_M))^{-1/\beta} = u'(x_M), \quad a.s., \ i \in \mathcal{N}.$$
Some routine calculations show that the optimal sharing rules are given by:
$$y_i(x_M) = A_i + B_i x_M$$
where
$$B_i = \frac{\lambda_i^{\beta}}{\sum_j \lambda_j^{\beta}} \quad \text{and} \quad A_i = \frac{\lambda_i^{\beta}}{\beta \sum_j \lambda_j^{\beta}}\, \alpha - \frac{\alpha_i}{\beta}, \quad \text{where } \alpha := \sum_j \alpha_j.$$
That is, the sharing rules are affine. Market clearing is seen to hold.
Our general risk sharing model with N agents can also be specialized to two agents, one insurance customer and one insurer. The first to have used the model for this basic problem seems to have been [23].

3.4. Affine Contracts

In this section we formalize what we have demonstrated so far when it comes to optimal risk sharing. Effectively we then rule out non-linear contracts.
Denote as above the absolute risk aversion function of an agent by $A(x) = -\frac{u''(x)}{u'(x)}$, $x \in \mathbb{R}$, and the risk tolerance function by $\rho(x) = \frac{1}{A(x)}$. The relative risk aversion function is defined by $R(x) = x A(x)$. Recall, HARA utility means that the risk tolerance functions of the agents are affine, i.e., $\rho_i(x) = \alpha_i + \beta_i x$, $i \in \mathcal{N}$, where the $\alpha_i$ and $\beta_i$ are all constants. Then affine sharing rules obtain provided the agents have HARA utility functions with the same cautiousness parameters, i.e., when $\rho_i'(x) = \beta_i = \beta$ for all $i \in \mathcal{N}$. More precisely:
Theorem 5.
The Pareto optimal sharing rules are affine if and only if the risk tolerances are affine with identical cautiousness, i.e., $y_i(x) = A_i + B_i x$ for some constants $A_i, B_i$, $i \in \mathcal{N}$, with $\sum_j A_j = 0$, $\sum_j B_j = 1$, $\Leftrightarrow$ $\rho_i(x) = \alpha_i + \beta x$ for some constants $\alpha_i$ and β, $i \in \mathcal{N}$.
A short, and we claim original, proof of this theorem can be found in Appendix C based on the system of differential equations in (7).
There are several important insights when members belong to the class of utility functions considered above, which we return to later.
We next consider the pricing issue in a competitive market, where the concept of an equilibrium is central.

4. Equilibrium

The problem each agent $i \in \mathcal{N}$ is supposed to solve is the following:
$$\sup_{z_i \in L^2} E u_i(z_i) \quad \text{subject to} \quad \pi(z_i) \leq \pi(x_i).$$
An important issue is, of course, the existence (and uniqueness) of solutions to (9). We shall not elaborate on this here; suffice it to note the following. If:
$$\{z_i \in L^2 : E u_i(z_i) < \infty,\ \pi(z_i) \leq \pi(x_i)\}$$
is bounded (in $L^2$-norm), then existence is guaranteed (by, i.a., the Banach–Alaoglu theorem). In addition, a strictly concave $u_i$ suffices for uniqueness.
See [5,6,14,19,24,25,26,27,28,29] for existence and uniqueness of equilibrium (existence of Arrow–Debreu equilibria in infinite-dimensional settings seems to have been first treated in [25]).
Definition 3.
A competitive equilibrium is a collection $(\Pi; y_1, y_2, \ldots, y_N)$ consisting of a price functional Π and a feasible allocation $y = (y_1, y_2, \ldots, y_N)$ such that for each i, $y_i$ solves the problem (9) and markets clear: $\sum_{i=1}^N y_i = \sum_{i=1}^N x_i$ (market clearing is usually defined by $\sum_{i=1}^N y_i \leq \sum_{i=1}^N x_i$. Since we have strictly monotonic preferences, equality will result in equilibrium).
We close the system by assuming rational expectations. This means that the market clearing price Π implied by agent behavior is assumed to be the same as the price functional π on which agent decisions are based. The main analytic issue is then the determination of equilibrium price behavior.
In this section we characterize a competitive equilibrium (CE) assuming that it exists. We take it that the initial portfolios are not identically equal to zero, and that a unique equilibrium exists. We also assume quite naturally that $\Pi(x_i) > 0$ for each i. In fact, it seems reasonable that each agent is required to bring to the market an initial “endowment” of positive value (this is of course a weaker requirement than the positivity assumption $x_i \geq 0$ P-a.s. for all i found in consumer theory).
The computation of an equilibrium requires that the joint probability distribution of the initial endowments $(x_1, x_2, \ldots, x_N)$ is known. Only relative prices can be determined in equilibrium, modulo a normalizing constant.
In this case we have the following:
Theorem 6.
Suppose the preferences of the agents are strictly monotonic and convex, i.e., $u_i' > 0$ and $u_i'' \leq 0$ for all $i \in \mathcal{N}$, and assume that a competitive equilibrium exists, where $\Pi(x_i) > 0$ for each i. The equilibrium is then characterized by the existence of positive constants $\alpha_i$, $i \in \mathcal{N}$, such that for the equilibrium allocation $(y_1, y_2, \ldots, y_N)$:
$$u_i'(y_i) = \alpha_i \pi, \quad a.s. \ \text{for all } i \in \mathcal{N},$$
where π is the Riesz representation of the pricing functional Π.
Comparing this result to the first order conditions of a Pareto optimum, Equation (4) of Theorem 3, we notice that the Lagrange multipliers α i are just the reciprocals of the agent weights λ i , α i = 1 / λ i , implying that a competitive equilibrium is, if it exists, Pareto optimal.
We next explain the basics of the proof of this result, which can be found in Appendix C.
In order to illustrate the new feature here, the pricing question, we present a simple example.
Example 4.
Let us return to Example 2, where we now demonstrate how a single Pareto optimal point is picked out by a competitive equilibrium. This is done by determining the ray λ of agent weights.
In order to determine this vector $\lambda = (\lambda_1, \ldots, \lambda_N)$, we employ the budget constraints:
$$E\big(y_i\, e^{(K - x_M)/\rho}\big) = E\big(x_i\, e^{(K - x_M)/\rho}\big), \quad i \in \mathcal{N},$$
which give that:
$$b_i = \frac{E\big\{x_i\, e^{-x_M/\rho}\big\} - \frac{\rho_i}{\rho}\, E\big\{x_M\, e^{-x_M/\rho}\big\}}{E\big\{e^{-x_M/\rho}\big\}}, \quad i \in \mathcal{N}.$$
Hence, the optimal equilibrium allocations $y_i$ are completely determined in terms of the given primitives of the model. The ray λ can also be determined modulo a normalization. Letting $K = \sum_{i=1}^N \rho_i \ln \lambda_i$ denote this normalization, then:
$$\lambda_i = e^{b_i/\rho_i}\, e^{K/\rho}, \quad i \in \mathcal{N}.$$
If we impose the normalization $E\{\pi\} = 1$ of the state price deflator, we obtain $e^{-K/\rho} = E\{e^{-x_M/\rho}\}$, in which case the constants λ are given by:
$$\lambda_i = \frac{e^{b_i/\rho_i}}{E\{e^{-x_M/\rho}\}}, \quad i \in \mathcal{N}.$$
Through this example, we also discover a “premium principle” in insurance, since market prices are given by:
$$\Pi(z) = \frac{E\{z \cdot e^{-x_M/\rho}\}}{E\{e^{-x_M/\rho}\}}, \quad \text{for any } z \in L^2.$$
The pricing rule given by expression (11) is referred to as the “Esscher principle” in actuarial mathematics, but then with the important distinction that the aggregate market index x M in (11) is substituted by the risk z itself. For this latter “principle”, the pricing rule is of course no longer a linear functional, which will, unfortunately, lead to arbitrage possibilities and other anomalies.
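A small Monte Carlo sketch may illustrate Example 4 and the pricing rule (11); the joint lognormal distribution of the initial endowments used below is an assumption made purely for illustration, not part of the paper.

```python
# Sketch of Example 4: Monte Carlo approximation of the equilibrium side
# payments b_i and of the pricing rule (11) with exponential utilities.
# The joint lognormal distribution of the endowments is an illustrative choice.
import numpy as np

rng = np.random.default_rng(0)
n, N = 200_000, 3
rho_i = np.array([1.0, 2.0, 4.0])
rho = rho_i.sum()

cov = 0.04 + 0.06 * np.eye(N)                   # positively correlated shocks
x = np.exp(rng.multivariate_normal(np.zeros(N), cov, size=n))  # endowments
x_M = x.sum(axis=1)                             # aggregate market portfolio

deflator = np.exp(-x_M / rho)
def price(c):
    """Pricing rule (11): Pi(c) = E[c e^(-x_M/rho)] / E[e^(-x_M/rho)]."""
    return np.mean(c * deflator) / np.mean(deflator)

# Budget constraints pin down the side payments b_i, as in Example 4.
b = np.array([price(x[:, i]) - rho_i[i] / rho * price(x_M) for i in range(N)])
y = rho_i / rho * x_M[:, None] + b              # equilibrium allocations y_i(x_M)

print("b_i:", b, " sum:", b.sum())              # side payments are zero sum
print("markets clear:", np.allclose(y.sum(axis=1), x_M))
print("price of a stop-loss type claim:",
      price(np.maximum(x_M - np.median(x_M), 0.0)))
```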
A CE picks out a point on the Pareto frontier that satisfies individual rationality, so the competitive equilibrium allocation is located in the core. In the case of the two-agent problem, this point is located in the individual rationality section of the Pareto frontier in the NE-quadrant (see Figure 1).
Recall that the first order conditions for optimality do not depend on probabilities, but when we employ the budget constraints, probabilities enter.
In [30], risk sharing and contingent premiums are discussed in relation to the UK COVID-19 economic losses.
We next include some results on Pareto optimal risk sharing in groups, where there is a decision to be undertaken by the group. Here we also take a look at the situation where agents can have different probability beliefs.

5. Syndicates I

A syndicate is defined to be a group of individuals who must make a common decision under uncertainty that will result in a payoff to be shared jointly among the members. Let us call the common decision $a \in \mathcal{A}$, where $\mathcal{A}$ is the decision space. We limit ourselves to $\mathcal{A}$, a subset of the real numbers.
The first result we have in mind belongs to group decision theory, or the theory of syndicates. It specifies a payoff to the group $g(a, Z)$, where Z is a random variable with realization z, and g is a real function $g: \mathbb{R} \times \mathbb{R} \to \mathbb{R}$, assumed to be a smooth $C^{2,2}$-function.
The syndicate is faced with an investment project with payoff function g. In this section we assume that before action a is taken, the syndicate comes together and negotiates a sharing rule, as we have described in the above. Here we only consider sharing rules that do not depend on the decision itself. This means that the members of the syndicate are motivated by income and not by the decision itself.
Individual sharing rules y i ( x , Z ) will accordingly all depend on the payoff x = g ( a , Z ) and the random variable Z, but not on the decision a. Otherwise Pareto optimal sharing rules are found by the same principles as before.
We shall here allow the various members to have individual probability beliefs regarding the random variable Z, represented by the probability density functions $f_i(z)$, $i = 1, 2, \ldots, N$. We assume the individuals to be Savage rational decision makers satisfying Savage's 7 axioms ([31]). The probability distributions are defined on the same support, and we assume them to be mutually absolutely continuous with respect to each other. Suppose $h_1$ and $h_2$ represent two arbitrary random prospects, members of some set F of “acts”, facing agent i, defined on the same probability space. Savage's expected utility theorem then says:
There exist a utility index $u_i$ and a probability distribution $p_i$ such that $h_1 \succeq h_2$ if and only if $\int_\Omega u_i(h_1(z))\, dp_i(z) \geq \int_\Omega u_i(h_2(z))\, dp_i(z)$, $i \in \mathcal{N}$.
In other words, the preference relation ⪰ defined on the set of random prospects F has a numerical representation not only given by a utility index $u_i(x)$, but also by a probability distribution $p_i(z)$. In our case, the $p_i(z)$ corresponds to the cumulative probability distribution function $F_i(z)$, where the derivative $F_i'(z) = f_i(z)$. The strict interpretation of this representation is that all uncertainty is subjective, while in the von Neumann–Morgenstern framework, all uncertainty is objective.
It has been claimed that the Savage interpretation works better in a business world. The following simple example illustrates:
Example 5.
Consider the two following lotteries, called acts by Savage:
$h_1$: You win 1000 Euro if the football (soccer) team Barcelona ends among the top three teams in its division, Ecuadorian Serie A, next year, otherwise you get 0.
h 2 : You win 1000 Euro if a fair coin lands heads in 4 consecutive trials, otherwise you get 0.
Suppose you get the choice between h 1 and h 2 , and your utility function is u. You would then calculate:
$$E u(h_1) = p\, u(1000) + (1 - p)\, u(0) = p.$$
Here we have normalized so that u ( 1000 ) = 1 and u ( 0 ) = 0 , which we can do in both frameworks. Also, p is the probability of success in lottery h 1 .
Similarly,
$$E u(h_2) = \Big(\tfrac{1}{2}\Big)^4 u(1000) + \Big(1 - \Big(\tfrac{1}{2}\Big)^4\Big) u(0) = \tfrac{1}{16}.$$
Notice that your choice does not depend on the utility function u. In the von Neumann–Morgenstern framework, this is not a bona fide decision problem, since the choice only depends on probabilities, which are objective and hence known to the decision maker. In the Savage approach, however, probabilities are subjective and part of the preference representation, so this is accordingly a genuine decision problem.

5.1. Homogeneous Probability Beliefs

When probabilities are the same across the agents, the numerical representation of preferences looks the same as with the von Neumann–Morgenstern interpretation, where the probabilities are considered to be objective, so it can in principle be interpreted either way. However, recall Example 5.
If one happens to be a Bayesian however, one will argue a bit differently. According to [32], if two people have the same priors, and their posteriors for an event A are common knowledge, then these posteriors are equal.
The normal situation is that the weights $\lambda = (\lambda_1, \lambda_2, \ldots, \lambda_N)$ will affect the sharing rules $y_i(x_M)$, and by this the risk tolerance of the syndicate, via the inequalities within the allocation $(y_1, y_2, \ldots, y_N)$. This is true with or without homogeneous beliefs. However, if the risk tolerances of the individual members are of the HARA type with equal cautiousness parameter, that is, given by $\rho_i(x) = \alpha_i + \beta x$, we obtain from the result (6) that the risk tolerance of the syndicate is given by:
$$\rho(x) = \sum_{i=1}^N \big(\alpha_i + \beta\, y_i(x, z)\big) = \alpha + \beta x, \quad \forall z,$$
where $\alpha = \sum_{i=1}^N \alpha_i$ is a constant, and where $\sum_{i=1}^N y_i(x, z) = x$ for all z by market clearing. (Recall, no risk disappears after risk sharing; the total risk is presumably now better distributed among the members, so that the ones who can carry more risk do so at the optimum.) Notice that the risk tolerance of the group does not depend on the weights λ, so we have dropped the superscript on ρ. This also means that the evaluation measure of the syndicate, what we earlier denoted by $u(x \,|\, \lambda)$, does not depend on λ either, so we refer to it as $u(x)$ when this is the case. This means that the inequalities in the wealth distribution do not affect the syndicate's willingness to take risk.
Because of this fact, we limit ourselves for the moment to the case of HARA-utility functions $u_i(x)$ for the members of the group, where the risk tolerances are all affine with the same cautiousness $\beta = \rho_i'(x)$, $i = 1, 2, \ldots, N$. Here $y_i(x, z) = y_i(x)$ for all $i \in \mathcal{N}$ when probability beliefs are homogeneous.
In this situation, we define the ‘derived’ utility v i of member i as follows:
$$v_i(x) := u_i(y_i(x)), \quad i = 1, 2, \ldots, N.$$
This means that once the group has been established and the members have agreed upon a Pareto optimal sharing rule y = y ( x ) , we consider the individual’s utility function after this sharing rule has been implemented, as a function of the aggregate wealth x. Since there is an element of optimization behind this construction, we call this the derived utility functions of the members (or, perhaps, the indirect utility functions). We can then show the following:
Theorem 7.
Let the members of the group all have affine risk tolerances with the same cautiousness. Then the risk tolerance of any member’s derived utility functions v i , i N , is the same as the risk tolerance of the syndicate.
As a consequence of this result, any member of such a group can be given the task of making decisions under uncertainty on behalf of the group. There is unanimity about the management of risk chosen by the planner. Such a group is called a unanimous syndicate, which we formalize later.
Since the result is rather central, in Appendix C we present a proof.
The attitude towards the aggregate risk of each member of the pool is identical, and equal to the one of the central planner, despite the fact that the members have different preferences to start with. These important properties hold only in the case of HARA-utility functions with identical cautiousness β , as will be pointed out later.
This theorem is of course a bit special, but is nevertheless a remarkable result. If the conditions were true in practical life, there would not be much disagreement among us. However recall that the individuals are assumed to be equally well informed and, moreover, have homogeneous probability beliefs.
In contexts other than the purely financial, ‘group thinking’ may imply more agreement than is desirable, in that flexibility and critical thinking may be lost.
With differential information and/or heterogeneous probabilities, results might be different, and, perhaps, more realistic. We shall return to this below.
To illustrate, we consider an example, where the individual members have different preferences of the negative exponential type described earlier, a HARA class of utilities with equal cautiousness across the population.

5.2. Example: Optimal Diversification

Suppose that a partnership (syndicate) has the opportunity to invest its capital of 1 USD in a project with an uncertain return of Z per dollar invested. The partnership can borrow, or lend, capital in any amount a at interest rate r, so a payoff $g(a, Z) = (1 + a)Z - ar$ is available, and the decision problem is to choose an optimal amount of debt, or equivalently, an optimal amount $(1 + a)$ invested in the risky project.
Suppose that all members agree that Z is normally distributed with probability density $f_Z(z)$ having mean m and variance v, and moreover the individuals all have negative exponential utility functions $u_i(x) = -\frac{1}{\rho_i}\, e^{-x/\rho_i}$, $i \in \mathcal{N}$, where $\rho_i$ are the risk tolerances of the individuals.
Then we know that the syndicate has risk tolerance $\rho = \sum_i \rho_i$ and evaluation measure $u(x) = -\frac{1}{\rho}\, e^{-x/\rho}$, which does not depend on λ.
Before the decision is made, the members have decided on a Pareto optimal sharing rule, which under our assumptions is given by $y_i(x, z) = y_i(x) = \frac{\rho_i}{\rho}\, x + b_i$ for some zero-sum constants $b_i$, as explained in Example 2.
The syndicate's decision problem consists in finding the value of a which maximizes:
$$E u(g(a, Z)) = \int_{\mathbb{R}} u(g(a, z))\, f_Z(z)\, dz = \int_{\mathbb{R}} \Big(-\frac{1}{\rho}\, e^{-\frac{1}{\rho}((1+a)z - ar)}\Big) \frac{1}{\sqrt{2\pi v}}\, e^{-\frac{(z - m)^2}{2v}}\, dz.$$
The first order condition is both necessary and sufficient for a maximum here, so we solve $\frac{d}{da}\big(E u(g(a, Z))\big) = 0$, which has solution:
$$1 + a^* = \frac{\rho}{v}\,(m - r).$$
We show the calculation below.
The interpretation of this expression is:
(i) More is invested in the risky project the larger the risk tolerance ρ of the syndicate.
(ii) More is invested in the risky project the larger the “risk premium” $(m - r)$.
(iii) Less is invested in the risky project the larger the variance v.
For those familiar with optimal portfolio selection theory in finance, the above solution has the same basic features. See [33,34,35,36]. The first two consider intertemporal models in discrete time, the next uses continuous time with continuous dynamics, while the last one uses continuous time and continuous dynamics with jumps included.
Let us look at the calculations:
$$\frac{d}{da}\big(E u(g(a, Z))\big) = \frac{1}{\rho^2}\int_{\mathbb{R}} (z - r)\, e^{-\frac{1}{\rho}((1+a)z - ar)}\, \frac{1}{\sqrt{2\pi v}}\, e^{-\frac{1}{2v}(z - m)^2}\, dz = 0.$$
Utilizing the form of the normal distribution and completing the square, this is equivalent to:
$$\frac{1}{\sqrt{2\pi v}}\int_{\mathbb{R}} z\, e^{-\frac{1}{2v}\left(z - \left(m - \frac{v(1+a)}{\rho}\right)\right)^2} e^{\left(\frac{ar}{\rho} - \frac{m^2}{2v}\right) + \frac{1}{2v}\left(m - \frac{v(1+a)}{\rho}\right)^2}\, dz =$$
$$r\, \frac{1}{\sqrt{2\pi v}}\int_{\mathbb{R}} e^{-\frac{1}{2v}\left(z - \left(m - \frac{v(1+a)}{\rho}\right)\right)^2} e^{\left(\frac{ar}{\rho} - \frac{m^2}{2v}\right) + \frac{1}{2v}\left(m - \frac{v(1+a)}{\rho}\right)^2}\, dz.$$
Using that the integral of a probability distribution equals 1, and the definition of the expected value of a normal variate, we now have, after canceling the constants:
$$m - \frac{v(1+a)}{\rho} = r,$$
which proves our result (14).
Now we come to the more interesting point, the one of agreement in the syndicate. Consider any member of the group, let us say member i. This agent’s decision problem after the syndicate has been formed and a Pareto optimal sharing rule y i ( x ) = ρ i ρ x + b i has been established, is the following:
$$\max_a \int u_i(y_i(x))\, f_X(x)\, dx = \max_a \int \Big(-\frac{1}{\rho_i}\, e^{-\frac{1}{\rho_i} y_i(x)}\Big)\, f_X(x)\, dx =$$
$$\max_a \int \Big(-\frac{1}{\rho_i}\, e^{-\frac{1}{\rho_i}\left(\frac{\rho_i}{\rho}[(1+a)z - ar] + b_i\right)}\Big)\, f_Z(z)\, dz =$$
$$\max_a \int \Big(-\frac{1}{\rho_i}\, e^{-\frac{b_i}{\rho_i}}\, e^{-\frac{1}{\rho}((1+a)z - ar)}\Big)\, \frac{1}{\sqrt{2\pi v}}\, e^{-\frac{(z - m)^2}{2v}}\, dz.$$
The latter optimization problem is seen to give the same result as the problem of the central planner, since the two different positive constants, $\frac{1}{\rho}$ versus $\frac{1}{\rho_i}\, e^{-b_i/\rho_i}$, multiplying the exponential function simply cancel in both cases after differentiating and equating the result to 0, and thus do not affect the optimal solution $a^*$.
Hence we have an application of Theorem 7.
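The unanimity can also be checked numerically. The sketch below (parameter values and side payments are illustrative assumptions made here) compares the closed form $1 + a^* = \frac{\rho}{v}(m - r)$ with a direct maximization of the syndicate's expected utility, and with the derived problem of one member:

```python
# Sketch of Section 5.2: compare the closed form 1 + a* = (rho/v)(m - r) with a
# direct numerical maximization, for the syndicate and for one member's derived
# problem. Parameter values and side payments are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize_scalar

m, v, r = 0.08, 0.04, 0.02
rho_i = np.array([1.0, 2.0, 4.0])
rho = rho_i.sum()
b_i = np.array([0.3, -0.1, -0.2])               # any zero-sum side payments

def expected_util(a, tol, scale):
    # E[-scale * exp(-((1+a)Z - a r)/tol)] with Z ~ N(m, v), in closed form.
    mean = (1 + a) * m - a * r
    var = (1 + a)**2 * v
    return -scale * np.exp(-mean / tol + var / (2 * tol**2))

# The syndicate's problem and member 0's derived problem share the exponent;
# only the positive multiplicative constant differs, so the maximizer agrees.
a_synd = minimize_scalar(lambda a: -expected_util(a, rho, 1 / rho)).x
a_memb = minimize_scalar(
    lambda a: -expected_util(a, rho, np.exp(-b_i[0] / rho_i[0]) / rho_i[0])).x

print("closed form 1 + a*:", rho * (m - r) / v)
print("numerical, syndicate:", 1 + a_synd, " numerical, member 0:", 1 + a_memb)
```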

5.3. Heterogeneous Probability Beliefs

With different probability functions in the Savage representations, we choose to interpret these distributions as posterior probability distributions, which may result from different priors, where information may, or may not, be different. Below we shall also make use of the dispersion functions $\varphi_i(z) := f_i'(z)/f_i(z)$, assuming $f_i(z) > 0$ for all z and for all i.
In a syndicate, the dispersions of the individuals result in a syndicate dispersion at a Pareto optimum, which is a mixture of the individual dispersions and the derivatives of the sharing rules with respect to x. Since the latter depend on the utility functions of the members, the resulting probability distribution $f(z)$ of the central planner is a mixture of the individual probability distributions and the corresponding utility functions via the sharing rules.
We do not, in general, obtain a simple separation between some probability distribution of the syndicate that depends solely on the members' probability distributions, and a utility function related to a sup-convolution problem. However, we characterize conditions under which we obtain a separation between a resulting probability distribution of the syndicate, $f(z)$, and its utility function $u(x \,|\, \lambda)$, which together comprise the evaluation measure, call it $u(x, z \,|\, \lambda) = u(x \,|\, \lambda)\, f(z)$, of the syndicate. Since the resulting probability density f will also depend on the members' utility functions through the sharing rules, this brings the theory well into the framework of [31]. Below we follow [1,37,38], where many of the proofs that we omit can be found.
Unlike our treatment in Section 3 and Section 4, we now have both a decision a and a random variable Z that affect the payoff x = g ( a , Z ) . In order to be precise, this calls for some definitions.
By a sharing rule, we mean a set of functions:
$$y = \{y_i(x, z \,|\, \lambda);\ i \in \mathcal{N}\} \quad \text{such that} \quad \sum_{i=1}^N y_i(x, z \,|\, \lambda) = x, \quad \forall z.$$
A contract is an ordered pair, ( y , a ) , of a sharing rule y and a decision a. A contract ( y , a ) Pareto dominates another contract ( y ^ , a ^ ) if, for all i:
$$E_i\big\{u_i\big(y_i(g(a, Z), Z \,|\, \lambda)\big)\big\} \geq E_i\big\{u_i\big(\hat{y}_i(g(\hat{a}, Z), Z \,|\, \lambda)\big)\big\}$$
with strong inequality for at least one i. The operator E i denotes expectation with respect to the density f i ( z ) . A contract is Pareto optimal if there does not exist any Pareto dominating contract. This generates a partial order over contracts. The associated preference order is called the Pareto order of contracts.
A decision a is Pareto-preferred to a decision a ^ if there exists a sharing rule y such that:
$$E_i\big\{u_i\big(y_i(g(a, Z), Z)\big)\big\} \geq E_i\big\{u_i\big(\hat{y}_i(g(\hat{a}, Z), Z)\big)\big\} \quad \forall \hat{y} \ \text{and} \ i \in \mathcal{N},$$
where we have dropped the possible λ-dependence for notational convenience. When we wish to take account of the dependence of the sharing rule on decision a, we write $y_i(x, z \,|\, a)$.
This defines a partial order on A and is called the Pareto order on A .
A sharing rule y is Pareto optimal for some $a \in \mathcal{A}$ provided there does not exist a $\hat{y}$ such that:
$$E_i\big\{u_i\big(y_i(g(a, Z), Z)\big)\big\} \leq E_i\big\{u_i\big(\hat{y}_i(g(a, Z), Z)\big)\big\} \quad \forall i \in \mathcal{N},$$
with strong inequality for at least one agent j.
Generalizing the results of Section 3, let y be a Pareto optimal sharing rule for $a \in \mathcal{A}$. Then there exists a vector of agent weights $\lambda(a) = (\lambda_1(a), \lambda_2(a), \ldots, \lambda_N(a))$ such that $y = (y_1, y_2, \ldots, y_N)$ maximizes:
$$\sum_{i=1}^N \lambda_i(a)\, E_i\big\{u_i\big(y_i(g(a, Z), Z \,|\, a)\big)\big\} \quad \text{subject to} \quad \sum_{i=1}^N y_i(x, z \,|\, a) = x \quad \forall (x, z).$$
Assume that f i ( z ) > 0 for all z and that λ i ( a ) > 0 for all i N . By optimizing the Lagrange function of the above problem, we can show the following:
Theorem 8.
First order necessary and sufficient conditions for Pareto optimality of the sharing rule y is that there exists non-negative weights λ ( a ) = ( λ 1 ( a ) , λ 2 ( a ) , , λ N ( a ) ) and a function μ ( x , z | a ) such that:
$$(1) \quad \sum_{i=1}^N y_i(g(a, z), z \,|\, a) = g(a, z), \quad \forall z,$$
and
$$(2) \quad \lambda_i(a)\, u_i'\big(y_i(g(a, z), z \,|\, a)\big)\, f_i(z) = \mu(g(a, z), z \,|\, a), \quad \forall z \ \text{and} \ i \in \mathcal{N}.$$
We say that a complete order on A can be derived from an evaluation measure if there exists a function M ( x , z ) such that a is preferred to a ^ if and only if:
$$\int M(g(a, z), z)\, dz \geq \int M(g(\hat{a}, z), z)\, dz.$$
We now assume as in Section 5.1 and Section 5.2 that a sharing rule is chosen before decision a is considered, in which case the agent weights λ i do not depend on a. We seek conditions for the existence of an evaluation measure representing group decision processes.
Towards this end we start with a sufficiently rich set A and suppose that the group must choose a Pareto optimal contract. Let ( y , a ) be such a contract. Then y must be a Pareto optimal sharing rule for a, and since this is characterized by the weights λ i , which do not depend on a, y is also Pareto optimal for all a A .
Now, let λ be the weights corresponding to y. Then it must be the case that:
$$a \in \operatorname*{Argmax}_{\alpha \in \mathcal{A}} \sum_{i=1}^N \lambda_i\, E_i\big[u_i\big(y_i(g(\alpha, Z), Z)\big)\big].$$
The reason is that the feasible set over α A is convex from our assumption about the structure of A .
We now define:
$$M(x, z) = \sum_{i=1}^N \lambda_i\, u_i\big(y_i(x, z)\big)\, f_i(z).$$
Then we can reformulate (15) to:
$$a \in \operatorname*{Argmax}_{\alpha \in \mathcal{A}} \int M(g(\alpha, z), z)\, dz.$$
We now define an order ⪰ on A by:
$$a \succeq \hat{a} \quad \text{iff} \quad \int M(g(a, z), z)\, dz \geq \int M(g(\hat{a}, z), z)\, dz.$$
An order ⪰ is said to be Pareto-inclusive if a being Pareto-preferred to $\hat{a}$ implies that $a \succeq \hat{a}$.
Here it can be shown that ⪰ is Pareto inclusive. The order ⪰ selects the same optimal decision as the group chooses.
We now assume that an evaluation measure M ( x , z ) exists independent of a. Then by the use of Theorem 8 notice that:
$$\frac{\partial M(x, z)}{\partial x} = \sum_{i=1}^N \lambda_i\, u_i'\big(y_i(x, z)\big)\, \frac{\partial y_i(x, z)}{\partial x}\, f_i(z) = \mu(x, z) \sum_{i=1}^N \frac{\partial y_i(x, z)}{\partial x} = \mu(x, z).$$
In other words, the Lagrange multiplier $\mu(x, z)$ is the marginal change in the evaluation measure with respect to x, just as $u'(x) = \mu(x)$ in the proof of Theorem 3. This is also in analogy with the way the state price in Appendix B is connected to the marginal utility of the representative agent via $du(x \,|\, \lambda)/dx = \pi(x)$, here related to equilibrium.
We have the risk tolerances $\rho_i(x) = -u_i'(x)/u_i''(x)$ of the agents and dispersion functions $\varphi_i(z) = f_i'(z)/f_i(z)$ of the probability distributions, $i \in \mathcal{N}$. Since $\varphi_i(z) = d(\ln f_i(z))/dz$, we can reconstruct the probability densities from the dispersion functions as follows: $f_i(z) = f_i(z_0)\exp\big(\int_{z_0}^{z} \varphi_i(s)\, ds\big)$. We now define the corresponding concepts for the syndicate:
$$\rho(x, z) = -\frac{\mu(x, z)}{\partial \mu(x, z)/\partial x} \quad \text{and} \quad \varphi(x, z) = \frac{\partial \mu(x, z)/\partial z}{\mu(x, z)}.$$
The following general properties hold for the functions ρ i , ρ , φ i , and φ :
$$(i) \quad \sum_i \rho_i\big(y_i(x, z)\big) = \rho(x, z);$$
$$(ii) \quad \frac{\partial y_i(x, z)}{\partial x} = \frac{\rho_i\big(y_i(x, z)\big)}{\rho(x, z)};$$
$$(iii) \quad \sum_i \frac{\partial y_i(x, z)}{\partial x}\, \varphi_i(z) = \varphi(x, z).$$
Here (i) and (ii) are the extensions of (6) and (7) to the inhomogeneous case, while (iii) is new. It implies that the syndicate’s dispersion function depends on the individuals’ dispersion functions as well as of the sharing rule, which in turn depend on the members’ utility functions.
We give a proof of property (iii) in Appendix C.
Returning to the question mentioned earlier: Given a Pareto optimal sharing rule y, under what conditions, if any, will the syndicate behave as a Savage-rational decision maker in the choice of a?
The answer is that this happens if and only if the evaluation measure u ( x , z ) is separable with respect to x and z, that is, if and only if there exist a function u ( x ) and a probability density f ( z ) such that:
u ( x , z ) = u ( x ) f ( z ) ,
where u ( · ) is strictly concave and strictly increasing and f ( z ) is mutually absolutely continuous with respect to f i ( z ) for all i N and possesses a dispersion function.
As a consequence of (iii) above, the syndicate’s probability density function f ( z ) depends on the members’ probability density functions as well as their utility functions, which is remarkable and emphasizes the Savage (or de Finetti)-style nature of the syndicate.
In this case, the complete preference order on A is generated by:
u ( g ( a , z ) , z ) d z = u ( g ( a , z ) ) f ( z ) d z ,
and the syndicate is said to be a Wilson syndicate (after Robert Wilson, who was awarded the Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel in 2020, together with Paul R. Milgrom).
For a Wilson syndicate we notice that:
$$\rho(x, z) = -\frac{u'(x)\, f(z)}{u''(x)\, f(z)} = -\frac{u'(x)}{u''(x)}$$
does not depend on z, and is the syndicate's risk tolerance, while the syndicate's dispersion function is given by:
$$\varphi(x, z) = \frac{u'(x)\, f'(z)}{u'(x)\, f(z)} = \frac{f'(z)}{f(z)}.$$
We now need a formal definition: A sharing rule is said to be affine provided $\frac{\partial y_i(x, z)}{\partial x}$ is independent of x for all $i \in \mathcal{N}$. We can then show the following (see Amershi and Stoeckenius (1983)):
Theorem 9.
If the members of a syndicate have the same probability assessments, or if the sharing rules are affine, then the syndicate is a Wilson syndicate.
An example may illustrate.
Example 6.
Let $u_1(x) = u_2(x) = \ln(x)$ for $x \in (0, 1]$ and $f_1(z) = 1$, $f_2(z) = 2z$ for $z \in (0, 1)$, where $\mathcal{N} = \{1, 2\}$. Consider the particular sharing rule where $\lambda_1 = \lambda_2$. The associated Pareto optimal contracts y satisfy:
$$u_1'\big(y_1(x, z)\big)\, f_1(z) = u_2'\big(y_2(x, z)\big)\, f_2(z) = \mu(x, z).$$
This means that:
$$\frac{1}{y_1(x, z)} = \frac{2z}{y_2(x, z)} = \mu(x, z), \quad z \in (0, 1).$$
From $\sum_i y_i(x, z) = x$ for all $(z, x)$, it follows that $\mu(x, z) = (2z + 1)/x$, which is separable, hence this is a Wilson syndicate.
The sharing rules are given by $y_1(x, z) = \frac{x}{2z + 1}$, $y_2(x, z) = \frac{2xz}{2z + 1}$. The evaluation measure is $u(x, z) = 2\ln(x) \cdot \frac{1}{2}(2z + 1)$, from which we can take $u(x) = 2\ln(x)$ and $f(z) = \frac{1}{2}(2z + 1)$, a trapezium distribution, which is a mixture of $f_1$, which is uniform, and $f_2$, which is triangular, but its final shape is also influenced by $u_1$ and $u_2$.
Here $\frac{\partial y_1(x, z)}{\partial x} = \frac{1}{1 + 2z}$ and $\frac{\partial y_2(x, z)}{\partial x} = \frac{2z}{1 + 2z}$ do not depend on x, so the sharing rules are here affine. According to the result above, this fact is enough to verify that we have a Wilson syndicate here as well.
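The computations in Example 6 can also be verified symbolically; the following short sketch (an illustration added here, using sympy) confirms market clearing, the affine property, and the separation $u(x, z) = u(x) f(z)$:

```python
# A symbolic check of Example 6 (an illustration, not part of the paper).
import sympy as sp

x, z = sp.symbols("x z", positive=True)

# Borch-type condition with lambda_1 = lambda_2 and log utilities:
# u_1'(y_1) f_1(z) = u_2'(y_2) f_2(z) = mu(x, z), with f_1 = 1, f_2 = 2z.
mu = (2 * z + 1) / x
y1 = 1 / mu                  # from 1/y_1 * 1 = mu
y2 = 2 * z / mu              # from 1/y_2 * 2z = mu

assert sp.simplify(y1 + y2 - x) == 0                        # market clearing
assert sp.simplify(sp.diff(y1, x) - 1 / (2 * z + 1)) == 0   # affine in x

# The evaluation measure separates: integrate mu in x and factor.
u_xz = sp.integrate(mu, x)                                  # (2z+1) * log(x)
f_z = sp.Rational(1, 2) * (2 * z + 1)                       # trapezium density
assert sp.simplify(u_xz - 2 * sp.log(x) * f_z) == 0
assert sp.integrate(f_z, (z, 0, 1)) == 1                    # integrates to one
print("Wilson syndicate confirmed: u(x) = 2 ln x, f(z) = (2z+1)/2")
```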

6. Syndicates II

In part I, we formulated the conditions under which a group of people behaves as a Savage-rational decision maker. Except for Sections 5.1 and 5.2, the theory in part I is relevant for a group of people who use a given Pareto optimal sharing rule. The constructed evaluation measure is valid for this particular weighting λ, with an associated Pareto-inclusive complete preference ordering on A.
In this section, we ask the following question: Under which conditions are the preference orderings generated by different λ -weightings equivalent?
We need the following definition: A sharing rule is said to be determinate provided $\partial y_i(x,z)/\partial x$ does not depend on z for all $i \in N$.
A sharing rule is both affine and determinate if $\partial y_i(x,z)/\partial x$ is a constant for all i.
Note that in Example 5 the sharing rule is not determinate.
For what comes next we need the following technical result:
Proposition 1.
$\frac{\partial \mu(x,z)}{\partial \lambda_j} = \frac{1}{\lambda_j}\,\frac{\partial y_j(x,z)}{\partial x}\,\mu(x,z).$
A proof can be found in Appendix C.
Notice that the evaluation measure for a given weighting is determined by the relative sizes of the weights, so the partial derivative of μ ( x , z ) with respect to λ j can be considered a change in the weighting.
By this result, the following can now be shown:
Theorem 10.
The evaluation measure of a syndicate is determined, modulo a proportionality constant, independently of the chosen Pareto optimal weighting λ if and only if all the sharing rules are affine and determinate.
A natural question is now how to characterize utility functions that guarantee that the Pareto optimal sharing rules are affine and determinate. The first result in this direction is closely related to Theorem 7:
Theorem 11.
Suppose that the probability assessments are all homogeneous. All the Pareto optimal sharing rules are affine and determinate if and only if the members’ utility functions all have linear risk tolerances (HARA) with an identical cautiousness parameter.
In order to connect more directly to the result in Theorem 7, we need the following two definitions:
A group of people is said to be a λ-unanimous syndicate provided that for all i, j, a, and $\hat{a}$ we have:
$E_i\big(u_i(y_i(g(a,Z),Z))\big) \geq E_i\big(u_i(y_i(g(\hat{a},Z),Z))\big)$
is equivalent to:
$E_j\big(u_j(y_j(g(a,Z),Z))\big) \geq E_j\big(u_j(y_j(g(\hat{a},Z),Z))\big)$
where y is the Pareto optimal sharing rule associated with λ .
A syndicate which is λ-unanimous for every λ is said to be a unanimous syndicate.
In a unanimous syndicate, one can delegate the decision-making process to any of the syndicate’s members.
We now have two results, which we also cite without proofs:
Theorem 12.
A syndicate is λ-unanimous if and only if the λ-Pareto optimal sharing rule is affine and determinate.
Corollary 1.
A syndicate is unanimous if and only if all the Pareto optimal sharing rules are linear and determinate.
The following summarizes many of the above results in the case when there is agreement with regard to probability assessments:
Theorem 13.
In a syndicate with agreement about the probability distributions, the following are equivalent:
(1) The syndicate is unanimous.
(2) All members of the group have HARA-class utility functions with the same degree of cautiousness.
(3) The evaluation measure of the syndicate is determined independently of λ, or equivalently, the complete preference orders on A generated by the group’s different evaluation measures are all equivalent.
(4) The group’s Pareto ordering on A is complete.
We notice that the equivalence of (1) and (2) sharpens our previous Theorem 7 in Section 5.1: we in fact have equivalence between membership of the HARA class with common cautiousness and the property of unanimity.
The general case can be summarized as follows:
Theorem 14.
For a syndicate with heterogeneous probability beliefs, the following are equivalent:
(1) The syndicate is λ-unanimous for some λ.
(2) The group is a unanimous syndicate.
(3) All the members of the group have constant absolute risk aversions.
(4) The preference orders generated by u ( x , z | λ ) are identical for all λ.
(5) The group’s Pareto-order is complete.
This theorem gives an answer to our earlier discussion about agreement in a group. When probability assessments differ between the members of the group, it is more difficult for the group to be a unanimous syndicate: all the members must have negative exponential utility functions, which is more limiting than having general HARA utility with the same cautiousness parameter. Their absolute risk aversions are allowed to be different, but otherwise this class is, of course, somewhat restricted. However, this class provides a good illustration of optimal risk sharing, as we pointed out in Example 2 of Section 3.3, and actually yields reinsurance treaties that are in daily use in the insurance industry.
Wilson in [1] presented an example of a group decision problem of the same type as we discussed in Section 5.2 with the difference that the members have different probability distributions for the risky asset. It is interesting to notice how the syndicate’s probability assessment is constructed in this case. We focus on this part in his example.

6.1. An Example

We consider the same problem as in Section 5.2, but with heterogeneous probabilities. As before, we have negative exponential utility functions with risk tolerances $\rho_i$, which implies that the syndicate has the same type of utility function with risk tolerance $\rho = \sum_i \rho_i$. Here member i has a probability density according to the $N(m_i, v_i)$ distribution, so the beliefs are normal with means and variances depending on the individual.
The sharing rules satisfy $\partial y_i(x,z)/\partial x = \rho_i/\rho$, $i \in N$. Here the syndicate’s probability density happens to be normal with mean m and variance v, where:
$m = v \sum_i \frac{\rho_i}{\rho}\,\frac{m_i}{v_i},$
and
$v^{-1} = \sum_i \frac{\partial y_i(x,z)}{\partial x}\, v_i^{-1} = \frac{\sum_i \rho_i v_i^{-1}}{\rho}.$
Here we have used property (iii) for dispersions in Section 5.3.
Notice how the syndicate’s probability distribution blends the individual members’ probability distributions with their utility functions, the latter represented by the risk tolerances of the individuals and of the syndicate. This places the theory in the subjective tradition of representing preferences in a non-trivial way.
The computations from now on are similar to our previous calculations, where the optimal investment in the risky asset is given by Equation (14), with m and v as given above.
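As a small numerical illustration of these formulas, the following sketch computes the syndicate’s mean and variance from the members’ parameters; the risk tolerances, means, and variances used are arbitrary illustrative values, not data from the text:

```python
# Sketch: syndicate mean and variance from the members' normal beliefs
# and risk tolerances (the numbers are arbitrary illustrative inputs).
import numpy as np

rho_i = np.array([1.0, 2.0, 0.5])     # members' risk tolerances
m_i   = np.array([0.05, 0.10, 0.02])  # members' means
v_i   = np.array([0.04, 0.09, 0.01])  # members' variances

rho = rho_i.sum()                     # syndicate risk tolerance
w   = rho_i / rho                     # = dy_i/dx, the sharing-rule slopes

v = 1.0 / np.sum(w / v_i)             # 1/v = sum_i (rho_i/rho) / v_i
m = v * np.sum(w * m_i / v_i)         # m = v * sum_i (rho_i/rho) m_i / v_i

print(f"syndicate: m = {m:.4f}, v = {v:.4f}")
```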
We also have to show that the result is the same when the task of decision making is left in the hands of an arbitrary member of the syndicate. This goes much as before. Notice that here the sharing rules $y_i(x,z)$ depend on z, and hence on a mixture of all the parameters of the preferences and the probability distributions; but since the sharing rules are also determinate, these dependencies simplify after differentiation with respect to a, and the result becomes as before, in agreement with Property (2) in Theorem 14.
In our model the agents know their own probability distribution as well as those of all the other agents. Accordingly, the model cannot be used directly to analyze asymmetric information. However, close variants of the model have been utilized for this topic as well. An example is the model of [39], which analyzes the situation with two parties, a principal and an agent. Only the agent can make the decision, and it cannot be observed by the principal, which is where the asymmetric information comes in (Bengt Holmström obtained the Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel in 2016, together with Oliver Hart. Holmström’s Ph.D. advisor was Robert Wilson). The resulting model was used for situations with moral hazard. A model where the different agents have different priors as well as different information is presented in [40,41]; see also [42].
We can use the theory of syndicates at several levels. At the first level, risk sharing between individuals, the objective consists in maximizing the individual preferences of the members, assuming these can be represented by expected utility. This leads to an evaluation measure for the mutual insurance company, the reinsurance market, the nation, or the international organization, respectively, whatever the focus of the study. This construction allows us to form several levels of syndication.
An example of this is given in [43], where marine insurance is studied at three levels. The preferences of ship owners give rise to the evaluation measures of the various P&I clubs, and the latter give rise to the final evaluation measure of the reinsurance syndicate, where the various clubs obtain their reinsurance. From this, prices in the insurance market will be formed according to the theory.
The situation with different probability beliefs typically gives rise to betting between the agents. We next consider this phenomenon when they do not formally enter a syndicate and agree on a sharing rule, which is the case when the model is interpreted as an Arrow–Debreu equilibrium model.

7. Syndicates, Financial Markets, and General Equilibrium

Consider as in Section 4 an exchange economy under uncertainty with one consumption good. The N agents are characterized by utility functions and associated probability density functions as well as initial endowments, with the same assumptions as above. The uncertainty in the economy is associated with a random vector $Z = (Z_1, Z_2, \ldots, Z_m)$ of dimension m, say, which affects the market portfolio $x(Z) = \sum_i x_i(Z)$, where $x_i$ is the initial endowment of agent i as before. We assume the joint probability distribution of the vector $(x_1(Z), x_2(Z), \ldots, x_N(Z))$ is known, although we do not need to specify the functional relationship between x and Z here. We know that any competitive equilibrium is Pareto optimal, and that any Pareto optimum can be implemented as a competitive equilibrium, possibly after a redistribution of the initial endowments (by the First and Second Welfare Theorems).
The extension to a multivariate Z will primarily affect the relationship (iii) in Section 5.3. If we define the marginal dispersions of the individuals and the syndicate as:
$\varphi_i(z_k) := \frac{\partial f_i(z)/\partial z_k}{f_i(z)}, \quad i \in N, \qquad \text{and} \qquad \varphi(x,z_k) := \frac{\partial \mu(x,z)/\partial z_k}{\mu(x,z)}, \quad k = 1, 2, \ldots, m,$
respectively, then we obtain the following generalization of (iii):
$\varphi(x,z_k) = \sum_{i=1}^N \varphi_i(z_k)\,\frac{\partial y_i(x,z)}{\partial x}, \quad k = 1, 2, \ldots, m.$
By extending this to higher order partial derivatives, the joint distribution f ( z ) can be determined from the sharing rules and higher order dispersions.
An alternative to this, and a bit outside the theme of the paper, is to determine the univariate marginal distributions from the above equations and employ copulas to determine the joint distribution. Correlation coefficients are good measures of dependence in the case of joint normality, but in general copulas may give a better picture.
Define the decision a by g ( a , z ) = x ( z ) for any realization z of Z, and consider the set:
$A = \big\{\, a \;\big|\; E\big(g(a,Z)\, u(x(Z),Z\,|\,\lambda)\big) = E\big(x(Z)\, u(x(Z),Z\,|\,\lambda)\big) \big\}.$
The expectation is taken with respect to the joint probability distribution f ( z ) , which is determined as described above.
In an exchange economy, we can interpret a decision a A as a budget-feasible consumption plan for a “pseudo agent” with initial allocation x ( z ) who is confronted with state prices u ( x ( z ) , z | λ ) f ( z ) in state z. Defined this way, the set A is convex and satisfies our earlier conditions.
Based on the syndicate theory, in particular Theorem 8, prices of the form $u(x(z),z\,|\,\lambda)\, f(z)$ occur in equilibrium if and only if there exists a λ such that $a^*$ is the optimal decision in A based on the preference order generated by the evaluation measure $M(x,z\,|\,\lambda) = \sum_i \lambda_i u_i(y_i(x(z),z\,|\,\lambda))\, f_i(z)$. The evaluation measure depends in general on the weights λ, where $y_i(x(z),z\,|\,\lambda)$ is the optimal consumption of agent i, $i \in N$.
An economy populated with agents $(u_i, f_i)$, $i = 1, 2, \ldots, N$, is said to have the aggregation property if all the equilibria are characterized by the same price functional based on $u(x(z),z)\, f(z)$ (modulo a normalization); see [37].
Suppose now that x(z) is not a constant as z varies. Then we say that there is non-trivial social risk.
Under this assumption, the following result can be proven:
Proposition 2.
The following are equivalent:
(i) The economy has the aggregation property.
(ii) The economy is equivalent to a unanimous syndicate.
A proof can be found in Appendix C.
If $x(z) = \bar{x}$ for all z, then $\mu(x(z),z\,|\,\lambda) = u'(\bar{x}\,|\,\lambda)\, f(z)$, so the ratio between prices in different states of the world would be:
$\frac{u'(\bar{x}\,|\,\lambda)\, f(z)}{u'(\bar{x}\,|\,\lambda)\, f(s)} = \frac{f(z)}{f(s)},$
independent of the sharing rule λ , which is a degenerate case, since this ratio is in general equal to:
$\frac{u'(x(z)\,|\,\lambda)\, f(z)}{u'(x(s)\,|\,\lambda)\, f(s)} = \frac{\lambda_i\, u_i'(y_i(x(z),z))\, f_i(z)}{\lambda_j\, u_j'(y_j(x(s),s))\, f_j(s)}, \quad \text{for all } (i,j),$
for any states z and s. This follows from the first order conditions, where the marginal rate of substitution between consumption in states z and s is the same for all agents $i \in N$.
A market constructed this way would typically imply “betting” between agents with different probability beliefs (and has been questioned for this added complexity). To see what a sharing rule will typically look like, consider the market analogue to the syndicate in the example in Section 6.1, where N = 2. The sharing rules $y_i(x(z),z)$, i = 1, 2, are given by:
$y_1(x(z),z) = \frac{\rho_1}{\rho}\,x(z) + \Big(\rho_1\ln\lambda_1 - \frac{\rho_1 K}{\rho}\Big) + \Big(\sum_j \rho_j^{-1}\Big)^{-1}\,\frac{1}{2}\Big(-\frac{(z-m_1)^2}{v_1} + \frac{(z-m_2)^2}{v_2} - \ln\frac{v_1}{v_2}\Big),$
where $K = \sum_j \rho_j\ln\lambda_j$ (see Example 2), and with the following expression for agent 2:
$y_2(x(z),z) = \frac{\rho_2}{\rho}\,x(z) + \Big(\rho_2\ln\lambda_2 - \frac{\rho_2 K}{\rho}\Big) + \Big(\sum_j \rho_j^{-1}\Big)^{-1}\,\frac{1}{2}\Big(-\frac{(z-m_2)^2}{v_2} + \frac{(z-m_1)^2}{v_1} - \ln\frac{v_2}{v_1}\Big),$
when m = 1. Here we have used the system of differential equations given in Section 5.3, equation (ii). Notice that $y_1 + y_2 = x$ for all z, as should be the case.
According to agent 1’s probability belief, realization z should be close to m 1 . If this happens and v 1 < v 2 , then the z-dependent term (the third term on the right-hand side) in the above expression involves a net transfer from agent 2 to agent 1 at time 1, in which case agent 1 wins the bet.
On the other hand, if z falls close to m 2 and v 2 < v 1 , agent 2 wins.
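A brief numerical sketch of these two sharing rules may also be helpful; all parameter values and the aggregate endowment function x(z) below are hypothetical choices for illustration only:

```python
# Sketch of the two exponential/normal sharing rules above; all parameter
# values are arbitrary illustrations, not data from the paper.
import numpy as np

rho1, rho2 = 1.0, 2.0
lam1, lam2 = 0.6, 0.4
m1, v1 = 0.0, 1.0
m2, v2 = 1.0, 2.0

rho = rho1 + rho2
K = rho1*np.log(lam1) + rho2*np.log(lam2)
c = 1.0 / (1.0/rho1 + 1.0/rho2)          # (sum_j rho_j^{-1})^{-1}

def x_of_z(z):                           # a hypothetical aggregate endowment
    return 5.0 + z

def bet(z):                              # the z-dependent transfer term
    return c * 0.5 * (-(z - m1)**2/v1 + (z - m2)**2/v2 - np.log(v1/v2))

def y1(z):
    return rho1/rho*x_of_z(z) + rho1*np.log(lam1) - rho1*K/rho + bet(z)

def y2(z):
    return rho2/rho*x_of_z(z) + rho2*np.log(lam2) - rho2*K/rho - bet(z)

z = np.linspace(-2.0, 3.0, 11)
assert np.allclose(y1(z) + y2(z), x_of_z(z))   # shares exhaust x(z)
print("transfer to agent 1 at z = m1:", bet(m1))  # positive here (v1 < v2)
print("transfer to agent 1 at z = m2:", bet(m2))  # negative here: agent 2 gains
```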
In the situation of Proposition 1, the optimal sharing rules are linear, in which case we may alternatively think of the model as a stock market economy (in contrast to the more general Arrow–Debreu economy), at least if probability beliefs are homogeneous. Such market sharing rules are automatically linear, since stocks can be bought and sold in proportions only (number of shares times price). Such a market is sometimes said to be effectively complete when probability beliefs are homogeneous, since non-linear contracts are not optimal.
Under certain conditions we may implement a securities market equilibrium with N securities, associated stock prices S at time 0, and portfolio weights $\theta^1, \theta^2, \ldots, \theta^N$, such that $(\theta^1, \theta^2, \ldots, \theta^N; S)$ is a security market equilibrium with the same consumption allocation $(y_1, y_2, \ldots, y_N)$ as in the Arrow–Debreu economy (see for example [19] for a definition of such a security market equilibrium). This construction can be carried out when the securities market is complete, but alternatively obtains with linear sharing rules as explained above.
In the one-period model, without explicit Brownian motion dynamics, a simple model obtains when the state space is finite. When the number of risky stock-owned companies whose vectors of state-contingent payoffs are linearly independent is at least equal to the number of states, the resulting market is complete; recall the last result in Appendix B.1.1 (Appendix B). In our setting, we have an uncountably large state space, since we have probability densities on compacts. This could, perhaps, be related to the theory of so-called “large markets”, which uses contiguity concepts and is beyond the scope of the present article; see for example [44].
Non-linear contracts, like various derivatives and options, do exist and are traded in real-life markets, and one may wonder if this can be analyzed within the framework of the present model. Consider for example a contract with payoff $y_1^p$ at time 1, $p \geq 2$, still with homogeneous probability beliefs. Its price at time 0 is simply:
$\Pi(y_1^p) = \int y_1^p(x(z),z)\, u(x(z),z\,|\,\lambda)\, f(z)\,dz,$
provided the integral exists. Such a claim is, moreover, not redundant in this model, and the optimal consumption allocations $y_i$ can be attained in a market of stock-owned firms only. The completeness issue is whether such a security can be spanned by a portfolio of the given securities.
In general, with the existence of general Arrow–Debreu securities, the finite number of optimal contracts, interpreted as securities in a stock market, is not sufficient to determine the Arrow–Debreu state prices. However, in the present setting, the Arrow–Debreu prices are known from the structure of the model.
If we go beyond the class of HARA-utility functions, we encounter non-linear Pareto optimal contracts. However, the resulting contracts may not come close to the financial instruments that market participants want to trade.
When probability beliefs are inhomogeneous, betting occurs, as we have demonstrated above. However, the bets are structured by the model. In the standard model, derivatives are considered as bets on the state of the economy, or on the price of one or more of the given securities or functions of such, but that setting only leaves the agents indifferent between the status quo and actually acquiring the derivative, short or long. In the present setting, agents bet actively, more as in the real world, and one may wonder if this added feature could be useful in analyzing derivatives.

8. Conclusions

We reviewed optimal risk sharing among members of various organizations. The starting point was optimal risk sharing in an Arrow–Debreu economy, or equivalently, in a Borch-type reinsurance market. From the results of this model we inferred how risk is optimally distributed between individuals according to their preferences and initial endowments. Not surprisingly, risk aversions play an important role. In addition to employing the criterion of Pareto optimality, we also narrowed the scope to the pricing problem connected with competitive equilibrium within the same framework.
The mutuality principle basically says that when it is blowing from the north, all the trees bend southwards: This principle, we claim, can be connected to the economic consequences of a pandemic; it makes most people worse off, and when it is over, those who survived may improve their situation (although it may take some time).
From this we progressed to a review of the more general theory of syndicates, where, in addition, a group of people is to make a common decision under uncertainty. Conditions were formulated under which the syndicate behaves as a rational Savage decision maker, with an associated evaluation measure.
Depending on the structure of this measure, we can infer the degree of unanimity in the syndicate. Presumably, a theory like this can shed some light on what goes on in world meetings, like, for example, the UN Climate Change Conference (COP26) held in Glasgow in November 2021. It basically says that it is hard to agree, which was indeed the experience. Lastly, we also extended the syndicate setting to a competitive market equilibrium.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not relevant.

Conflicts of Interest

The author declares no conflict of interest.

Appendix A

Proof of Theorem 1.
From functional analysis, it is known that a positive, linear functional on an $L^p$-space is bounded ($1 \leq p < \infty$), and hence also continuous, in which case we can use the Riesz Representation Theorem and conclude that there exists a unique random variable $\pi \in L^2_+$, where $L^2_+$ is the positive cone of $L^2$, such that:
$\Pi(y) = E(y\pi) \quad \text{for all } y \in L^2.$
In an economic context, this non-negative random variable, the Riesz representation, turns out to be the state-price deflator. Other names used for this concept in the literature are pricing kernel, state-price density, or marginal rate of substitution, the latter in a dynamic context.
Finally, the pricing functional is also strictly positive, meaning simply that $\Pi(Z) > 0$ for any $Z > 0$. Here $Z > 0$ means that $Z \geq 0$ a.s. and there is some set A with $P(A) > 0$ such that $Z(\omega) > 0$ for all $\omega \in A$. This is a direct consequence of no arbitrage possibilities as well. Thus the Riesz representation π is strictly positive a.s.
It should be noticed that if Ω is a finite set, any linear functional is automatically continuous, so we would not need the theorem from functional analysis in order to apply the Riesz Representation Theorem.
In conclusion, we have shown that if there is no arbitrage, then there exists a strictly positive random variable $\pi \in L^2_+$ such that prices are given by the relation (A1). On the other hand, if prices are given by (A1), there cannot be any arbitrage. Suppose $z \geq 0$ a.s.; then $\pi \cdot z \geq 0$ a.s. since $\pi > 0$ a.s., and accordingly $\Pi(z) \geq 0$. Likewise, if $z > 0$ (meaning $z \geq 0$ a.s. and there exists a set $A \in \mathcal{F}$ having strictly positive probability, where $z(\omega) > 0$ for all $\omega \in A$), then $\pi \cdot z > 0$ since $\pi > 0$ a.s., and accordingly $\Pi(z) > 0$. Our proof is then complete. □

Appendix B

Appendix B.1. State Prices

In order to put the basic message forward as simple and direct as possible, we consider a discrete model for this part.
We fix a future point in time, called one period from today, and define a set Ω containing S states of the world, indexed j = 1 , 2 , , S . In other words, the state space is Ω = { ω 1 , ω 2 , , ω S } = { 1 , 2 , , S } for simplicity of notation.
In this one-period setting, we consider an exchange economy populated by N agents with endowments $x_n$ and preference relations $\succeq_n$ over end-of-period consumption, $n = 1, 2, \ldots, N$. Notice that we impose no restrictions on preferences, such as representation by expected utility or otherwise, at this stage.

Appendix B.1.1. A Thought Experiment

Consider the following thought experiment: Imagine these individuals organize a market for trade in contingent claims related to these states $j = 1, 2, \ldots, S$. In this market we imagine there will be established prices $\psi_j$ for these state-contingent claims, i.e.,
$\psi_j$ = market value today of 1 unit of account at the end of the period, given that state j occurs, $j = 1, 2, \ldots, S$.
More precisely, we define a unit contingent claim, called an Arrow–Debreu security, as follows:
$\mathbf{1}_j = \begin{cases} 1, & \text{if state } j \in \Omega \text{ occurs}; \\ 0, & \text{otherwise}. \end{cases}$
Then $\psi_j$ is the price of $\mathbf{1}_j$ for any state $j \in \Omega$.
Now we assume that there exists an equilibrium in Arrow–Debreu securities with prices as indicated, a concept we define formally below (see [3,5]). Since this equilibrium is formed by the N agents in this economy, the prices will reflect the characteristics of these agents, including their preferences, their endowments, and the various states that can occur. The details of how this comes about we need not worry about at present.
Suppose you have access to such a market and were to consider an investment project y, where the payoff at the end of the period depends on what state j that occurs. Also suppose that you are a price taker, in that your actions will not change the equilibrium prices (if not, see e.g., [29]).
The project may be represented by a set of numbers y j representing the payoff if state j occurs, j = 1 , 2 , , S .
A natural question is then what this project is worth at the beginning of the period. Let us denote this value by Π ( y ) .
The answer is simple: If you owned the project, you could issue $y_1$ state-1 claims $\mathbf{1}_1$, $y_2$ state-2 claims $\mathbf{1}_2$, …, $y_S$ state-S claims $\mathbf{1}_S$, and sell these in the market.
Since the prices of the Arrow–Debreu securities are known, the amount from this sale is given by $\psi_1 y_1$ for the $\mathbf{1}_1$ claims, plus $\psi_2 y_2$ for the $\mathbf{1}_2$ claims, plus ⋯ plus $\psi_S y_S$ for the $\mathbf{1}_S$ claims. In total, this sale may bring in $\sum_{j=1}^S \psi_j y_j$, which must then represent the value of the investment project y, i.e.,
$\Pi(y) = \sum_{j=1}^S \psi_j y_j.$
One thing to note about this valuation formula is that it does not explicitly depend on the probabilities of the states. In other words, as long as the state prices ψ j are known, the valuation formula can be set down without really knowing the probabilities, call them p j , of the various states, j = 1 , 2 , , S .
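A minimal numerical sketch of this valuation formula, with made-up state prices and payoffs, illustrates that only the state prices are needed:

```python
# Sketch: valuing a project from Arrow–Debreu state prices alone
# (all numbers are made up for illustration).
import numpy as np

psi = np.array([0.30, 0.25, 0.20, 0.15])   # state prices psi_j, S = 4 states
y   = np.array([10.0, 4.0, -2.0, 7.0])     # project payoff y_j in state j

value = float(psi @ y)                     # Pi(y) = sum_j psi_j * y_j
print(value)                               # no probabilities needed
```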

Appendix B.1.2. Equivalent Representations of the Value of a Project

If the probabilities $p_j$ are known, we may alternatively express the value of any project y in another way: Assuming all $p_j > 0$, we can rewrite (A2) as follows:
$\Pi(y) = \sum_{j=1}^S p_j\,\frac{\psi_j}{p_j}\, y_j.$
This assumption is innocuous, since if one state has probability 0, this state may safely be removed from consideration. It is of no interest here.
Let us define the Arrow–Debreu state prices in units of probability as the state price deflator:
$\frac{\psi_j}{p_j} := \pi_j = \text{the state price deflator in state } j, \quad j = 1, 2, \ldots, S.$
The vector $\pi = (\pi_1, \pi_2, \ldots, \pi_S)$ is then the state price deflator. With probabilities in place, we can consider π as a random variable defined on Ω, with value $\pi_j$ if state j occurs.
Using this, we may write the project value today as an expectation with respect to the probabilities p j as follows:
$\Pi(y) = \sum_{j=1}^S p_j\,(\pi_j y_j) = E(\pi y).$
Here we have considered both π and y as random variables on the same probability space ( Ω , F , P ) , in which case (A3) is merely the definition of the expected value of the product π y of these two random variables. In addition to having the state-price-in-units-of probability interpretation of π , we have also recovered the basic pricing Formula (1) using finite dimensional analysis.
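The equivalence of the two representations can be illustrated with the same kind of made-up numbers; the probabilities are now needed to form the deflator:

```python
# Sketch: the state-price deflator pi_j = psi_j / p_j gives the same value
# as the direct state-price sum (illustrative numbers only).
import numpy as np

psi = np.array([0.30, 0.25, 0.20, 0.15])   # state prices
p   = np.array([0.40, 0.30, 0.20, 0.10])   # state probabilities, sum to 1
y   = np.array([10.0, 4.0, -2.0, 7.0])     # project payoffs

pi = psi / p                               # deflator, state by state
value_direct   = float(psi @ y)            # sum_j psi_j y_j
value_deflator = float(np.sum(p * pi * y)) # E(pi * y)
assert np.isclose(value_direct, value_deflator)
print(value_direct, value_deflator)
```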
This formula is important in large parts of financial and insurance economics, particularly related to equilibrium theory.
Suppose we have a model of N securities where $d_{n,j}$ is the number of units of account paid by security n in state j. Denote the prices of the N securities at time 0 by the vector q. A portfolio $\theta \in \mathbb{R}^N$ has market value $q \cdot \theta$ and payoff $d'\theta \in \mathbb{R}^S$, where d is the payoff matrix of the securities and prime means transpose. We interpret this as a model of a stock market (where sharing rules are linear). An arbitrage possibility is a portfolio $\theta \in \mathbb{R}^N$ with $q \cdot \theta \leq 0$ and $d'\theta > 0$, or $q \cdot \theta < 0$ and $d'\theta \geq 0$. In this discrete model, the no-arbitrage condition was characterized by [22]: there is no arbitrage if and only if there is a state price vector $\psi = (\psi_1, \psi_2, \ldots, \psi_S)$ such that $q = d\psi$.
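The following sketch illustrates this characterization with a made-up payoff matrix and a strictly positive state-price vector: prices generated by $q = d\psi$ assign a strictly positive cost to every portfolio whose payoff is non-negative and positive somewhere, so no sampled portfolio is an arbitrage.

```python
# Sketch of the state-price characterization q = d psi (made-up payoff
# matrix d with N = 3 securities and S = 4 states; psi chosen positive).
import numpy as np

d = np.array([[1.0, 1.0, 1.0, 1.0],        # a riskless bond
              [2.0, 1.0, 0.5, 0.0],        # a risky stock
              [0.0, 0.0, 1.0, 3.0]])       # another risky stock
psi = np.array([0.30, 0.25, 0.20, 0.15])   # strictly positive state prices

q = d @ psi                                # security prices consistent with psi
print("prices q =", q)

# For any portfolio theta, the cost equals the state-price value of its
# payoff: q . theta = psi . (d' theta). Hence a payoff d' theta > 0 has a
# strictly positive cost, so it cannot be an arbitrage.
rng = np.random.default_rng(0)
for _ in range(1000):
    theta = rng.normal(size=3)
    payoff = d.T @ theta
    assert np.isclose(q @ theta, psi @ payoff)
    if (payoff >= 0).all() and (payoff > 0).any():
        assert q @ theta > 0
print("no arbitrage found among sampled portfolios")
```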

Appendix C

Proof of Theorem 2.
Suppose first that $(y_1, y_2, \ldots, y_N)$ is a solution to problem (2), and assume, to reach a contradiction, that it is not Pareto optimal. Assume that $\lambda_i > 0$ for all i. Then there exists a Pareto dominating allocation $(\hat{y}_1, \hat{y}_2, \ldots, \hat{y}_N)$ satisfying $E u_i(\hat{y}_i) \geq E u_i(y_i)$ for all $i \in N$ and $E u_j(\hat{y}_j) > E u_j(y_j)$ for at least one $j \in N$, where $\sum_i \hat{y}_i \leq x$. But then the feasible allocation $\hat{y}$ has a larger value of $\sum_{i=1}^N \lambda_i E u_i(\hat{y}_i)$ than y, which is a contradiction.
The other direction, that if y is Pareto optimal, then it is a solution to the problem (2), can be proved by use of the Separating Hyperplane Theorem. Since this proof is a bit technical, we skip it here. □
Proof of Theorem 3.
Starting with problem (3), the Lagrangian of the problem is the following:
$L(z;\mu) = \sum_{i=1}^N \lambda_i u_i(z_i) - \mu(x\,|\,\lambda)\Big(\sum_{i=1}^N z_i - x\Big),$
where the Lagrange multiplier $\mu(\cdot\,|\,\lambda)$ depends on x. Here we recall the state-by-state interpretation of the problem (3), in which $x(\omega)$ is a real number for any given state ω, which means that $\mu(x(\omega)\,|\,\lambda)$ is also a real number. This moves the problem to ordinary Lagrange calculus in a deterministic world. (The notation also indicates that μ may depend on the weighting λ.) The first order conditions for optimality of y in (A4) imply that for x > 0,
$\lambda_i u_i'(y_i) = \mu(x\,|\,\lambda), \quad i = 1, 2, \ldots, N.$
By the Envelope Theorem, we can write $\mu(x\,|\,\lambda) = u'(x\,|\,\lambda)$, where $u(\cdot\,|\,\lambda)$ is defined in (3) and is smooth enough by the Implicit Function Theorem. This implies that, for every i and now interpreted as equalities between random variables, we have that:
$\lambda_1 u_1'(y_1) = \lambda_2 u_2'(y_2) = \cdots = \lambda_N u_N'(y_N) := u'(x_M\,|\,\lambda) \quad \text{a.s.}$
This proves necessity. As for sufficiency, we note that the first-order conditions are also sufficient under concavity. The fact that the “sup-convolution” function defined by the optimality problem (3) is concave is a nice student exercise. □
Proof of Theorem 4.
Starting with the first order conditions:
$\lambda_i u_i'(y_i(x)) = u'(x\,|\,\lambda), \quad x \in B, \; i \in N,$
from our smoothness assumptions, both sides of the above equation are real, differentiable functions (the right-hand side because of the Implicit Function Theorem), so taking derivatives of both sides gives:
$\lambda_i u_i''(y_i(x))\, y_i'(x) = u''(x\,|\,\lambda), \quad x \in B, \; i \in N.$
Dividing the second equation by the first, we obtain the following non-linear, first order ordinary differential equation for the Pareto optimal allocation function $y_i(x)$:
$y_i'(x) = \frac{A_\lambda(x)}{A_i(y_i(x))}, \quad y_i(x_0) = b_i, \quad x, x_0 \in B, \; i \in N.$
Here $y_i(x_0) = b_i$ represent the initial conditions of these differential equations, where $\sum_{i=1}^N b_i = x_0$. This proves (b).
As for (a), since $\sum_{i \in N} y_i'(x) = 1$ for any $x \in B$, we obtain by summation in (A5) that $1/A_\lambda(x) = \sum_i 1/A_i(y_i(x))$, or:
$\rho_\lambda(x) = \sum_{i \in N} \rho_i(y_i(x)), \quad x \in B.$
This can also be written:
$\rho_\lambda(x_M) = \sum_{i \in N} \rho_i(y_i(x_M)) \quad \text{a.s.},$
interpreted as an equality between random variables. This allows us to rewrite the differential equations in (A5) as in (7) as well. □
Proof of Theorem 5.
Let us go back to the system of nonlinear differential equations for the Pareto optimal contracts in (7), which we repeat here:
$\frac{dy_i(x)}{dx} = \frac{\rho_i(y_i(x))}{\rho_\lambda(x)}, \quad y_i(x_0) = b_i, \quad x, x_0 \in B,$
and assume we have HARA-class preferences with identical cautiousness. Then this system of equations can be written as:
$\frac{dy_i(x)}{dx} = \frac{\alpha_i + \beta y_i(x)}{\alpha + \beta x}, \quad y_i(x_0) = b_i,$
where $\sum_{i=1}^N b_i = x_0$ and $\sum_{i=1}^N \alpha_i = \alpha$.
The solution to this system is given by:
$y_i(x) = \frac{\alpha_i + \beta b_i}{\alpha + \beta x_0}\, x + \frac{\alpha b_i - \alpha_i x_0}{\alpha + \beta x_0}.$
This can be readily checked by inserting this solution in Equation (A7), and verifying that the equation and boundary conditions are all satisfied. This system of ordinary, linear differential equations is known to have a unique solution satisfying the boundary conditions, provided the coefficients are continuous in the variable x, so we are done. □
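The check mentioned above can also be carried out symbolically; the following short sketch uses sympy with generic symbol names (our own, not the paper's notation for any specific data) to confirm that the affine expression satisfies both the differential equation and the boundary condition:

```python
# Symbolic verification (sketch) that the affine rule solves the HARA
# sharing ODE dy_i/dx = (alpha_i + beta*y_i)/(alpha + beta*x), y_i(x0) = b_i.
import sympy as sp

x, x0, alpha, alpha_i, beta, b_i = sp.symbols('x x0 alpha alpha_i beta b_i')

y_i = ((alpha_i + beta*b_i)/(alpha + beta*x0)) * x \
      + (alpha*b_i - alpha_i*x0)/(alpha + beta*x0)

ode_residual = sp.simplify(sp.diff(y_i, x) - (alpha_i + beta*y_i)/(alpha + beta*x))
boundary_residual = sp.simplify(y_i.subs(x, x0) - b_i)

print(ode_residual, boundary_residual)   # both simplify to 0
```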
Proof of Theorem 6.
The problem $\max_{z_i} E u_i(z_i)$ s.t. $h(z_i) \leq 0$, where $h(z_i) := \Pi(z_i) - \Pi(x_i)$, $i = 1, 2, \ldots, N$, is an infinite dimensional problem for each i, but a “nice” one in that the objective is concave and the constraint function h (the feasible set) is convex.
For such problems, the Kuhn–Tucker Theorem says that, granted a suitable constraint qualification, any optimal solution $y_i$ will be supported by a Lagrange multiplier $\alpha_i$: That is, there exists $\alpha_i \geq 0$ such that the Lagrangian:
$L_i(z_i; \alpha_i) = E u_i(z_i) - \alpha_i h(z_i)$
is maximal at $z_i = y_i$. Moreover, complementary slackness holds: $\alpha_i h(y_i) = 0$. The said qualification could be $h(z_i^0) < 0$ for some $z_i^0$ (this is the so-called Slater condition). Here let $z_i^0 = \frac{1}{2} x_i$.
Next we explore what maximality of L i ( · , α i ) at y i means.
We first argue in terms of directional derivatives: Recall that:
$L_i'(y_i; z) = \lim_{t \to 0^+} \frac{L_i(y_i + tz; \alpha_i) - L_i(y_i; \alpha_i)}{t},$
where $L_i'(y_i; z)$ is the directional derivative of $L_i(\cdot\,; \alpha_i)$ at $y_i$ in the “direction” z in the Hilbert space $L^2$. That $L_i$ is differentiable at $y_i$ means that $L_i'(y_i; z)$ exists for all $z \in L^2$ and that the functional $z \mapsto L_i'(y_i; z)$ is linear. This functional, the gradient of $L_i$ at $y_i$, is denoted by $\nabla L_i(y_i)$. It can here be shown to be given by:
$(\nabla L_i(y_i))(z) = E\{(u_i'(y_i) - \alpha_i \pi)\, z\}.$
A necessary condition for a maximum of $L_i$ at $y_i$ is that the gradient in (A9) is zero in all directions $z \in L^2$, which leads directly to the condition (10).
Alternatively, we could use a variational argument: For that purpose, define a variation $\tilde{y}_i := y_i + tz$, where $y_i$ is the optimal solution of (9), t is a scalar dummy variable, and $z \in L^2$ is an arbitrary random variable. According to our conditions, the function $f(t; z) := L_i(\tilde{y}_i; \alpha_i)$ attains its maximum at t = 0, and consequently:
$f'(0; z) = E\{z\,(u_i'(y_i) - \alpha_i \pi)\} = 0 \quad \text{for all } z \in L^2,$
which again implies that $u_i'(y_i) - \alpha_i \pi = 0$ a.s.
Finally, since $u_i' > 0$ for all i, the shadow price $\pi > 0$ P-a.s., since otherwise the problem (9) could not have a solution, contrary to our assumption that an equilibrium exists. From the first order condition (10), it then follows that $\alpha_i > 0$ for all i. □
Proof of Theorem 7.
Consider member i’s attitude towards the syndicate’s risk x, given the sharing rule $y_i(x)$. Taking derivatives with respect to x gives:
$v_i'(x) = u_i'(y_i(x))\, y_i'(x) = u_i'(y_i(x))\,\frac{\alpha_i + \beta y_i(x)}{\alpha + \beta x},$
where we have used the differential equation for $y_i(x)$ given in (7). Furthermore,
$v_i''(x) = u_i''(y_i(x))\Big(\frac{\alpha_i + \beta y_i(x)}{\alpha + \beta x}\Big)^2 + u_i'(y_i(x))\,\frac{d}{dx}\, y_i'(x).$
It is straightforward to see by a bit of calculus that $\frac{d}{dx} y_i'(x) = 0$. From the above we then obtain that the risk tolerance of member i’s derived utility function is given by:
$-\frac{v_i'(x)}{v_i''(x)} = -\frac{u_i'(y_i(x))}{u_i''(y_i(x))}\,\frac{\alpha + \beta x}{\alpha_i + \beta y_i(x)} = \alpha + \beta x. \quad \square$
Proof of property (iii) in Section 5.3.
We start with the relation (2) in Theorem 8, which can be written:
$\lambda_i u_i'(y_i(x,z))\, f_i(z) = \mu(x,z), \quad \forall i.$
Differentiating this relation with respect to z, we get:
$\lambda_i u_i''(y_i(x,z))\, f_i(z)\,\frac{\partial y_i(x,z)}{\partial z} + \lambda_i u_i'(y_i(x,z))\, f_i'(z) = \frac{\partial \mu(x,z)}{\partial z}, \quad \forall i.$
Dividing this equation by the previous one we get:
$\frac{u_i''(y_i(x,z))}{u_i'(y_i(x,z))}\,\frac{\partial y_i(x,z)}{\partial z} + \frac{f_i'(z)}{f_i(z)} = \varphi(x,z).$
From this it follows that:
$\frac{\partial y_i(x,z)}{\partial z} = \varphi_i(z)\,\rho_i(y_i(x,z)) - \varphi(x,z)\,\rho_i(y_i(x,z)).$
Summing over the members we obtain:
$\sum_i \frac{\partial y_i(x,z)}{\partial z} = \frac{\partial}{\partial z}\sum_i y_i(x,z) = \frac{\partial x}{\partial z} = 0 = \sum_i \varphi_i(z)\,\rho_i(y_i(x,z)) - \varphi(x,z)\,\rho(x,z),$
where we have used (i). Accordingly,
$\varphi(x,z) = \frac{1}{\rho(x,z)}\sum_i \varphi_i(z)\,\rho_i(y_i(x,z)) = \sum_i \varphi_i(z)\,\frac{\rho_i(y_i(x,z))}{\rho(x,z)} = \sum_i \varphi_i(z)\,\frac{\partial y_i(x,z)}{\partial x},$
where we have used (ii). Thus (iii) follows. □
Proof of Proposition 1.
The starting point is the equation:
$\lambda_j u_j'(y_j(x,z))\, f_j(z) = \mu(x,z), \quad \forall j, x \text{ and } z.$
By taking partial derivatives with respect to $\lambda_i$ and dividing the results by the above equation, we obtain:
$-\frac{1}{\rho_j}\,\frac{\partial y_j(x,z)}{\partial \lambda_i} = \frac{\partial \mu(x,z)}{\partial \lambda_i}\,\frac{1}{\mu(x,z)}, \quad j \neq i,$
and
$\frac{1}{\lambda_i} - \frac{1}{\rho_i}\,\frac{\partial y_i(x,z)}{\partial \lambda_i} = \frac{\partial \mu(x,z)}{\partial \lambda_i}\,\frac{1}{\mu(x,z)}, \quad j = i.$
Multiplying the equation for member j by $\rho_j$, summing over all the members, and using that $\sum_j \partial y_j(x,z)/\partial \lambda_i = 0$ together with properties (i) and (ii) gives the result. □
Proof of Proposition 2.
(i) implies (ii): The aggregation property means that there exists k ( λ ) > 0 such that:
$k(\lambda)\,\mu(x(z),z\,|\,\lambda) = u(x(z))\, f(z), \quad \text{for all } z \text{ and } \lambda.$
The quantity μ ( x , z ) is determined, modulo a constant, for each x and z. From Theorem 12 it follows that all the Pareto optimal sharing rules are affine and determinate. In the inhomogeneous case the result now follows from Theorem 14, in the homogeneous case from Theorem 13. (ii) implies (i): Just reverse the above arguments. □

References

  1. Wilson, R.B. The theory of syndicates. Econometrica 1968, 36, 119–132.
  2. Borch, K.H. Equilibrium in a Reinsurance Market. Econometrica 1962, 30, 424–444.
  3. Arrow, K. Le Rôle des Valeurs Boursières pour la Répartition la Meilleure des Risques. Econometrie 1953, 40, 41–47; discussion 47–48.
  4. Arrow, K. Essays in the Theory of Risk Bearing; North-Holland: London, UK, 1970.
  5. Arrow, K.; Debreu, G. Existence of an Equilibrium for a Competitive Economy. Econometrica 1954, 22, 265–290.
  6. Bühlmann, H.; Jewell, W.S. Optimal risk exchanges. ASTIN Bull. 1979, 10, 243–262.
  7. Borch, K.H. The safety loading of reinsurance premiums. Skand. Aktuarietidsskrift 1960, 2, 163–184.
  8. Borch, K.H. Reciprocal Reinsurance Treaties seen as a Two-Person Cooperative Game. Skandinavisk Aktuarietidsskrift 1960, 1, 29–58.
  9. Borch, K.H. Reciprocal Reinsurance Treaties. ASTIN Bull. 1960, 1, 170–191.
  10. Borch, K.H. The Economics of Uncertainty; Princeton University Press: Princeton, NJ, USA, 1968.
  11. Borch, K.H. General equilibrium in the economics of uncertainty. In Risk and Uncertainty; Borch, K.H., Mossin, J., Eds.; Macmillan: London, UK, 1968.
  12. Borch, K.H. Economics of Insurance; Advanced Textbooks in Economics; Aase, K.K., Sandmo, A., Eds.; North Holland: Amsterdam, The Netherlands, 1990.
  13. Aase, K.K. Stochastic equilibrium and premiums in insurance. In Approche Actuarielle des Risques Financiers, 1er Colloque International AFIR; AFIR: Paris, France, 1990; pp. 59–79.
  14. Aase, K.K. Equilibrium in a reinsurance syndicate; Existence, uniqueness and characterization. ASTIN Bull. 1993, 22, 185–211.
  15. Aase, K.K. Premiums in a dynamic model of a reinsurance market. Scand. Actuar. J. 1993, 2, 134–160.
  16. Aase, K.K. Perspectives of risk sharing. Scand. Actuar. J. 2002, 2, 73–128.
  17. Aase, K.K. Existence and uniqueness of equilibrium in a reinsurance syndicate. ASTIN Bull. 2010, 40, 491–517.
  18. Aase, K.K. Dynamic equilibrium and the structure of premiums in a reinsurance market. Geneva Pap. Risk Insur. Theory 1992, 17, 93–136.
  19. Duffie, D. Dynamic Asset Pricing Theory; Princeton University Press: Princeton, NJ, USA, 2001.
  20. Dana, R.-A. Existence and uniqueness of equilibria when preferences are additively separable. Econometrica 1993, 61, 953–958.
  21. Tucker, H.G. A Graduate Course in Probability; Academic Press: New York, NY, USA; London, UK, 1967.
  22. Ross, S. A Simple Approach to the Valuation of Risky Streams. J. Bus. 1978, 51, 453–475.
  23. Zahl, S. An allocation problem with applications to operations research and statistics. Oper. Res. 1963, 11, 426–441.
  24. Moffet, D. The risk sharing problem. Geneva Pap. Risk Insur. 1979, 11, 5–13.
  25. Bewley, T. Existence of equilibria in economies with infinitely many commodities. J. Econ. Theory 1972, 4, 514–540.
  26. Bühlmann, H. The general economic premium principle. ASTIN Bull. 1984, 14, 13–21.
  27. Bühlmann, H. An economic premium principle. ASTIN Bull. 1980, 11, 52–60.
  28. Mas-Colell, A.; Zame, W.R. Equilibrium Theory in Infinite Dimensional Spaces. In Handbook of Mathematical Economics; Hildenbrand, W., Sonnenschein, H., Eds.; North Holland: Amsterdam, The Netherlands, 1991; Volume 4, pp. 1835–1898.
  29. Bichuch, M.; Feinstein, Z. Endogenous inverse demand functions. arXiv 2021, arXiv:2021.08002.
  30. Assa, H.; Boonen, T.J. Risk-Sharing and Contingent Premia in the Presence of Systematic Risk: The Case Study of the UK COVID-19 Economic Losses. In Pandemics: Insurance and Social Protection; Springer Actuarial; Boado-Penas, M.C., Eisenberg, J., Sahin, S., Eds.; Springer: Cham, Switzerland, 2022.
  31. Savage, L.J. The Foundations of Statistics; Wiley: New York, NY, USA; London, UK, 1954.
  32. Aumann, R.J. Agreeing to disagree. Ann. Stat. 1976, 4, 1236–1239.
  33. Mossin, J. Optimal Multiperiod Portfolio Policies. J. Bus. 1968, 41, 215–229.
  34. Samuelson, P.A. Lifetime Portfolio Selection by Dynamic Stochastic Programming. Rev. Econ. Stat. 1969, 51, 239–246.
  35. Merton, R.C. Optimum Consumption and Portfolio Rules in a Continuous-Time Model. J. Econ. Theory 1971, 2, 373–413.
  36. Aase, K.K. Optimum portfolio diversification in a general continuous-time model. Stoch. Process. Their Appl. 1984, 18, 81–98.
  37. Rubinstein, M. An aggregation theorem for securities markets. J. Financ. Econ. 1974, 1, 225–244.
  38. Amershi, A.; Stoeckenius, J. The theory of syndicates and linear sharing rules. Econometrica 1983, 51, 1407–1416.
  39. Holmström, B. Moral hazard and observability. Bell J. Econ. 1979, 10, 74–91.
  40. Kobayashi, T. Equilibrium contracts for syndicates with differential information. Econometrica 1980, 48, 1635–1665.
  41. Mas-Colell, A. The price equilibrium existence problem in topological vector lattices. Econometrica 1986, 54, 1039–1054.
  42. Wilson, R.B. Information, efficiency, and the core of an economy. Econometrica 1978, 46, 807–816.
  43. Aase, K.K. Equilibrium in marine mutual insurance markets with convex operating costs. J. Risk Insur. 2007, 74, 281–310.
  44. Prigent, J.-L. Weak Convergence of Financial Markets; Springer: Berlin/Heidelberg, Germany, 2003.
Figure 1. The Pareto frontier for N = 2.