1. Introduction
Bargaining is central to human coexistence in society and is paramount to the making of politics. In all political arenas, actors must bargain in order to achieve their goals. In this process, it is only natural that coalitions emerge to advance common agendas and interests. But forming and maintaining a coalition is frequently an intricate game in which strategies, assessments of benefits and costs, incentives, and constraints all play a role in defining that coalition's destiny.
Precisely due to the pervasiveness of coalitional phenomena and their centrality to politics, coalition formation and breakdown constitute one of the most exciting puzzles in contemporary political science. Almost all regimes display some level of bargaining, leading to the existence of coalitions. Democracies are evidently the paradigmatic arena for observing coalitional phenomena, for democratic politics is a domain permeated by disputes and negotiations. Most research in this field has been conducted on democratic polities, attempting to explain various processes and aspects of the bargaining, maintenance, and eventual collapse of coalitions.
The epistemological trajectory of this research agenda has been marked by formal modelling. Bargaining is often depicted as a game (or a set of games) unfolding in various stages, leading scholars to resort to the mathematical tools of models (primarily game-theoretical models) to scrutinise the processes leading to coalition formation and breakdown. But models have been extensively criticised in political science [1], especially in the context of rational choice theory, and some of these criticisms have echoed in the coalition literature, affecting not only their credibility as theoretical and methodological tools, but also any serious assessment of the explanatory power of their outcomes and conclusions.
Since von Neumann and Morgenstern's [2] first approaches to game theory in economics, bargaining and coalitions have been part of the exercise of modelling. In political science, William Riker's [3] coalitional spatial model triggered the more systematic development of this research agenda. His minimal-winning coalition conceptual model remains a landmark in coalition studies, in spite of the criticisms that followed its publication, especially regarding the lack of empirical support when the model was confronted with the diversity of coalition formations in the real world. The further development of institutional theories allowed for the refinement of Riker's contributions and unleashed the true potential of coalition theory [4,5]. Since then, all sorts of models have been designed to understand different aspects of the complex institutional architecture operating behind the scenes, not to mention the various roles played by agents strategically positioned in the institutional edifice [6,7,8,9,10,11,12,13,14,15].
Coalition models have triumphed over the criticisms. Classical and contemporary research resorts extensively to formalisations of coalition phenomena, attempting to conceptualise the bargaining process, explain regularities in coalition formation and breakdown, and test hypotheses derived from the models themselves. Nevertheless, a more serious appreciation of the contributions of coalition models is still lacking. Indeed, the various functions played by models are usually ignored, and more often than not political scientists judge a model's explanatory power solely on its ability (or lack thereof) to explain empirical data. However, not all types of models are designed to be empirically tested via some statistical technique, and even those that can be tested may pose methodological challenges to unwary empiricists.
Therefore, in this paper I argue that models are designed for different purposes, which are irreducible to the mere data-fit exercise of statistical testing. I propose a taxonomy of formal models that allows us to appreciate the various explanations tailored by coalitional modelling. Building upon the literature on the philosophy of science and on coalition theory, I construct a three-class taxonomy: conceptual models, whose main focus is deducing abstract concepts formalised via mathematical tools; quasi-conceptual models, which attempt to explain observed but unexplained real-world regularities; and extrapolative models, which can be tested via a data-fit exercise or by structurally merging the mathematical and statistical models into one single entity.
The remainder of the article is divided into three sections. In the next one, I present the philosophical debate on modelling, drawing comparative lines with political science, and close by proposing the taxonomy of formal models. Then, I apply that taxonomy to survey canonical and contemporary studies on coalitions that resort to some sort of modelling. Finally, I discuss some recent trends in the coalition literature, namely in coalition modelling.
2. Modelling Politics: Concepts, Regularities, and Empirical Testing
Models in political science have a long tradition. The first attempts to mathematically represent political phenomena date back to the works on election methods by Marquis de Condorcet and Jean-Charles de Borda in the 18th century, and since then formal theoretical frameworks have only expanded into new research agendas. In each field, models come in different flavours, performing various functions in terms of representing the real world of politics and tailoring explanations about it.
In spite of their pervasiveness in political science and other disciplines, there is no consensus about what exactly models are. Philosophers of science describe models in different, sometimes conflicting ways: Morrison and Morgan [16] argue that models are “autonomous agents” that “function as instruments of investigation”; Giere [17] describes them as “abstract objects constructed in conformity with appropriate general principles and specific conditions”; and Cartwright [18] asserts that models are rather “experiments in thought about what would happen in a real experiment”. In political science, the issue remains contentious as well: Clarke and Primo [19] argue that “models are objects (…) particular kinds of objects—maps”, with “limited accuracy”, for models are “partial” and “purposive-relative”; Dowding [20] affirms that “a model of something is a representation of that thing” and “a good model is isomorphic to that which it represents in the relevant aspects”. Whichever definition is chosen, the central question about models’ raison d’être remains the same: whether they can offer invaluable explanations of real-world phenomena, and if so, how they can achieve such a feat.
In addressing this question, modellers frequently point to the representational capabilities of models. In essence, models represent fragments of the world that are deemed essential to unravelling the explanatory mechanisms underlying certain phenomena [19,20,21]. By no means does a model aim to represent the totality of the real world, for this would be a description rather than a mechanism-based explanation; moreover, completely representing reality would be unattainable [22]. By representing some parts of the world, a model illuminates the important aspects of a given phenomenon and the intricate relationships that generate certain outcomes. A modeller’s interests may lie in the outcomes themselves, which may be directly observed in the real world; but she may also be interested in the web of concepts that underlies a given phenomenon or even in the empirical regularities observed in reality (I shall address this issue below). In both cases, her task in modelling remains the same: she has to represent slices of the world in her models, either at the conceptual or the empirical level.
The representational character of models is frequently the main reason for the persistent and contentious debate among philosophers and social scientists alike. Generally speaking, they are preoccupied with a model’s ability to represent the world in ways that are true to reality. This is a matter of dispute: some argue that models are autonomous with respect to reality, for they mix bits of theories and data via mathematical formalisms and metaphor [16,23]; others claim that the assumptions entailed in models are rarely observed in the real world, meaning that the “lessons” they teach should by no means be treated with a high degree of truthfulness [18,24,25]. As a third path through this conundrum, Giere [26] suggests that models are abstract constructs constituted of principles that act as general templates of representation, which are put to work together in order to generate explanations about a concrete context (or, in more technical language, a target situation). According to him, models “are designed so that elements of the model can be identified with features of the real world” [17], i.e., the similarities shared by model and reality are responsible for the former’s explanatory power. In a similar vein, Dowding [20] argues that models are to be judged by how powerful their explanations are regarding the empirical evidence vis-à-vis the slices of the world represented within the model. As Sugden states:
The model is a self-contained construct, which can be interpreted as a description of an imaginary but credible world. The workings of the model generate patterns in the model world that are similar to ones that can be observed in the real world. The model provides an explanation of the world by virtue of an inductive inference: roughly, from the similarity of effects we infer a similarity of causes [27].
Models in political science attempt to represent the world in a similar fashion. By no means can they offer a complete picture of a given phenomenon, for reality is usually complex and intricate. Designing a model involves picking the elements of the real world that may be capable of unravelling the mechanisms that generate political outcomes. Judging the validity of these theoretical choices about which elements of reality should be represented in the model depends primarily on the purposes the model itself serves. Yet this, too, is a disputed matter within political science, where different and sometimes conflicting understandings of what models are designed for coexist. Clarke and Primo [19,21], for example, classify models into five types (foundational, structural, generative, explicative, and predictive), but also subscribe to the maps analogy of modelling: maps are used to navigate, being objects in themselves but not subject to testing for truthfulness. Johnson [28], on the other hand, demands that models be tested through empirical analysis, for he is concerned that many “highly influential formal models often make no prediction whatsoever” [29]. In spite of this divergence, both share the idea that models work primarily as conceptual tools to “navigate the world” without necessarily being capable of generating empirical predictions about political phenomena. Examples of such models, following Johnson [29], include Arrow’s impossibility theorem and McKelvey’s and Schofield’s chaos theorems.
This argument, however, is misleading to the extent that it depicts only one class of models, namely those that are purely abstract and focused ultimately on advancing certain concepts. This type of model plays a crucial role in various fields of political science (such as coalition theory), where it provides the conceptual ground upon which a whole body of theory and empirical applications is built. Nevertheless, to the extent that these concepts allow for the derivation of new models (whether purely mathematical or statistical) and the furthering of research agendas, their concreteness with respect to the fragments of reality they aim to represent cannot be ignored. Much of game theory and set theory is based on this premise, especially in terms of articulating concepts into mathematical expressions that can be used to understand a given phenomenon. Examples of such conceptual models are those mentioned by Johnson [29], but also the Shapley–Shubik value and Thomas Schelling’s checkerboard model of segregation.
Models may also be designed to explain certain regularities and patterns observed in data, attempting to generate an overarching, formalised explanation of why the pattern or regularity exists. One can think of the conservation laws of momentum and energy in physics as a pervasive regularity that lacked an explanation until the beginning of the 20th century, when a mathematical model was conceived to explain these phenomena. Similarly, in political science, Bassi’s [8] model of endogenous government formation, Dewan and Spirling’s [30] model of collective decisions in Westminster systems, and Giannetti and Sened’s [26] visual model of party competition and coalition formation in Italian politics attempted to shed light on empirical regularities by designing mathematical models capable of binding concepts and data.
Models can also be tailored to empirically test hypotheses, much like traditional quantitative research designs in which a statistical model plays the pivotal role in generating outcomes and potential explanations. In this case, the modeller may be interested in separating the formal model from its statistical counterpart, opting to derive hypotheses directly from the corollaries, theorems, lemmas, and propositions of the former, which are then tested by the latter. Most tests of formal models fit data to the model in this fashion. Alternatively, modellers can build both models in tandem by deducing the mathematical expressions that bind formal and statistical models into one single entity. This structurally bound new model respects the principles and assumptions entailed in the formal model when performing the statistical test (which implies the exercise of fitting data to the model). In this way, precious information entailed in the mathematical model (such as uncertainties and other nonlinearities) is preserved, meaning that the outcomes of the model will not suffer from the criticism that the statistical test does not faithfully reflect the mathematically formalised assumptions and outcomes. Curtis Signorino and his collaborators have worked extensively on this kind of maths-stats model [31,32,33].
Table 1 summarises this taxonomy of models in political science.
This taxonomy does not merely reflect the traditional debate about the nature of modelling in political science. Instead, it stresses the various functions models may perform in order to explain political phenomena. The literature on coalition formation and breakdown is permeated with models that advance concepts, explain regularities, and test hypotheses, and they do so in different ways. Acknowledging the diversity of approaches to formal modelling in the coalition literature is essential for assessing not only the concrete explanatory power of models, but also how this research agenda has been evolving since the mid-20th century.
3. Coalition Theory and the Taxonomy of Models
Since the advent of this research agenda, coalition theory has been marked by formal models. John von Neumann and Oskar Morgenstern [2] set the theoretical foundations of modern game theory, allowing for the development of the first conceptual models of coalition formation [34]. After all, coalitions can be seen as resulting from various game-theoretical scenarios, each of which focuses on different actors, strategies, processes, and, ultimately, outcomes.
Nevertheless, the process that has culminated in current coalition models began rather modestly, with the works of Lloyd Shapley and Martin Shubik. The Shapley–Shubik value, built on three straightforward axioms (symmetry, carrier, and additivity), computes via simple maths the importance of each actor immersed in a coalition game setting [35,36]. By defining that the centrality of an actor “depends on the chance he has of being critical to the success of a winning coalition” [36], the authors advanced not only the concept of pivotal actors in coalition formation, but also the conceptual ground for the fundamental concept of the winning coalition, which would be further developed by Riker [3]. As a model, the Shapley–Shubik value articulated these two central conceptual landmarks in coalition theory, allowing for extensions of coalition models, as well as for the derivation of new testable hypotheses. But as a pioneering model of coalition formation and functioning, it could only offer basic insights: the axioms could not account for the complexity of coalitions, meaning that overarching explanations of the coalition phenomenon could not be tailored from the simple computation of the Shapley–Shubik value. The merit of this conceptual model, however, resides in its capacity to set a research agenda, which has flourished pari passu with the designing and refining of coalition models.
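To make the “simple maths” concrete, the sketch below computes the Shapley–Shubik index for a weighted voting game: an actor’s power is its share of the orderings of players in which its arrival turns a losing coalition into a winning one. The parties, seat weights, and quota are invented purely for illustration; only the definition of pivotality comes from the discussion above.

```python
"""A minimal sketch of the Shapley-Shubik power index for a weighted
voting game. Party names, seat weights, and the quota are hypothetical."""
from itertools import permutations

def shapley_shubik(weights, quota):
    """Return each player's share of the orderings in which it is pivotal."""
    players = list(weights)
    counts = {p: 0 for p in players}
    for order in permutations(players):
        running = 0
        for p in order:
            running += weights[p]
            if running >= quota:  # p turns a losing coalition into a winning one
                counts[p] += 1
                break
    total = sum(counts.values())
    return {p: counts[p] / total for p in players}

# Hypothetical three-party legislature: 50 seats, majority quota of 26.
weights = {"A": 25, "B": 15, "C": 10}
print(shapley_shubik(weights, quota=26))  # {'A': 0.667, 'B': 0.167, 'C': 0.167}
```

Note how, in this invented example, parties B and C are equally pivotal despite their unequal seat shares: a first illustration of why raw seat counts can mislead about coalitional power.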
Riker’s [3] The Theory of Political Coalitions refined the concepts entailed in the Shapley–Shubik model, framing the matter of a winning coalition in terms of the minimal size principle. According to his game-theoretical approach to an n-person coalitional game, the minimal-winning coalition is only as large as necessary to ensure victory, meaning that agents will not build larger coalitions that would entail greater transaction costs [3,37]. The political process of acquiring votes is intrinsically costly, and as a consequence politicians prefer to secure only the number of votes necessary for passing their proposals. This is a conspicuous case of an optimisation problem that requires an optimal solution, i.e., the minimal-winning coalition.
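Computationally, the concept is easy to pin down: a minimal-winning coalition is a winning coalition that becomes losing if any single member defects. The brute-force sketch below enumerates them for a hypothetical four-party, 100-seat legislature; the seat numbers are invented and serve only to illustrate the definition.

```python
"""A minimal sketch of Riker's minimal-winning coalition: enumerate the
winning coalitions that become losing if any one member leaves.
Seat numbers are hypothetical, for illustration only."""
from itertools import combinations

def minimal_winning(seats, quota):
    parties = list(seats)
    result = []
    for r in range(1, len(parties) + 1):
        for combo in combinations(parties, r):
            total = sum(seats[p] for p in combo)
            if total < quota:
                continue
            # Minimal: removing any single member drops the total below the quota.
            if all(total - seats[p] < quota for p in combo):
                result.append(combo)
    return result

seats = {"A": 40, "B": 25, "C": 20, "D": 15}  # 100 seats, majority quota 51
print(minimal_winning(seats, quota=51))
# [('A', 'B'), ('A', 'C'), ('A', 'D'), ('B', 'C', 'D')] -- but not ('A', 'B', 'C')
```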
Evidently, Riker’s conceptual model rests on an optimisation principle that would require further investigation in reality. Indeed, when confronted with empirical data, evidence of the explanatory power of his model was mixed. On the one hand, further extensions of the mathematical implications of coalition formation have concurred with Riker’s conceptual insights [38]. On the other hand, the diversity of coalitions in the real world and the instability of equilibria in n-person settings have posed serious challenges to Riker’s theory [4,39], challenges he conceded in recognition of the chaotic nature of politics [40]. Nonetheless, these results have rather boosted research on the empirics of coalitions, leading to new attempts at modelling the various regularities observed in the real world. Furthermore, the institutionalist turn in political science has fostered interest in the many aspects of institutional design that incentivise and constrain coalitions. Some avenues of research are the roles played by political parties and the formateur [7,8,9,10], the process of coalition formation and breakdown [6,12,13,14,15,26,41,42,43,44,45], and the relationship between ministers and coalitions [11,46], just to name a few. In all cases, the black box of political institutions has been opened and widely explored in order to investigate the explanatory mechanisms operating in coalition bargaining.
The evidence-based analysis of coalitional games has led to a new generation of formal models. The persistence of certain unexplained regularities (such as Gamson’s law [38,47,48]) has puzzled coalition theorists, bringing to their attention the need to formalise such real-world patterns via mathematical models. Bassi’s [8] model of endogenous government formation takes as its point of departure Gamson’s pervasive regularity (which states that the distribution of government portfolios is proportional to each party’s share of seats in the coalition) to explain the role of the formateur, resorting to a game-theoretical approach to the bargaining process. By developing a model to explain a pervasive regularity, Bassi’s bargaining game fits the description of a typical quasi-conceptual model.
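Gamson’s regularity itself is a one-line proportionality rule, which a hypothetical example makes plain: each partner’s predicted share of the cabinet is simply its share of the coalition’s seats. The figures below are invented for illustration.

```python
"""A back-of-the-envelope sketch of Gamson's law: portfolios are allocated
in proportion to each partner's share of the coalition's seats.
Coalition seat counts and cabinet size are hypothetical."""
coalition_seats = {"A": 120, "B": 60, "C": 20}  # seats each partner contributes
cabinet_size = 20                               # portfolios to distribute

total = sum(coalition_seats.values())
predicted = {p: cabinet_size * s / total for p, s in coalition_seats.items()}
print(predicted)  # {'A': 12.0, 'B': 6.0, 'C': 2.0}
```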
Similarly, quasi-conceptual models have been designed to explain regularities observed in less comprehensive cross-national cases. Dewan and Spirling [30], for example, develop a quasi-conceptual model (combining spatial models, empirical evidence, and simulations) to analyse an empirical regularity observed in Westminster political systems, namely “a pattern of ‘government-versus-opposition’ roll call voting, whereby government proposals are supported by a cohesive governing majority and opposed by a cohesive opposition minority” [30]. Giannetti and Sened [26] follow similar lines in their analysis of Italian political coalitions, but resort instead to graphic representations of two-dimensional spaces in order to evaluate coalitions and make predictions about the outcomes of the bargaining processes. Laver and Benoit [49] likewise design a quasi-conceptual typology of coalition government in 29 European democracies, demonstrating that from certain logical assumptions (i.e., mathematical tools) about coalition bargaining one can derive solid conclusions about how coalitions form. Lupia and Strøm [12] developed a quasi-conceptual model capable of explaining cabinet termination via parties’ preferences over government dissolution, aiming to understand the causes and consequences of such a dramatic event through a combination of certain conditions. The Lupia–Strøm model also predicts regularities about the increasing hazard rate experienced by executive coalitions [15], a finding that, according to the authors, “run[s] counter to the generally untested assumption of a constant hazard rate (…) of cabinet stability” [12].
In the aforementioned cases, quasi-conceptual models have been designed to confer theoretical meaning on empirical regularities observed in the real world. The scope of their explanatory power is dictated by the number of cases and the type of political system they aim to represent, but to the extent that they expand the theoretical meaning of certain law-like regularities, they also contribute to a more comprehensive understanding of coalition bargaining that cuts across other national cases. After all, Gamson’s law (to take one example) applies to parliamentary and presidential systems alike, and to European and Latin American democracies as well. Quasi-conceptual modelling hence allows for direct mediation between the real world and a web of concepts (or theory).
Nevertheless, the bulk of the literature has a conspicuously empirical touch. Most researchers are interested in collecting data about how coalitions form and break down, and in testing them through statistical models. There is a plethora of empirical studies on coalitions, which in turn shapes the formal modelling of various coalitional phenomena. Thus, a substantial part of the literature attempts to build bridges between formal coalitional models, on the one hand, and statistical tests and models, on the other. In doing so, scholars follow two main strategies: either they fit data to the outcomes of a coalitional model through an independent statistical test, or they design a model that derives the mathematical expressions of the statistical test directly from the formal model. The first approach, which is the standard one in political science, results in data-fit extrapolative models; the second, which has been systematically developed by Signorino [31,32,33] and his colleagues, produces maths-stats extrapolative models.
Data-fit models are pervasive, and the rationale behind them is fairly simple: the modeller develops a formal model, solves for its outcomes (propositions, theorems, lemmas, corollaries, etc.), derives hypotheses from those outcomes (H1, H2, …, HN), and tests the hypotheses using some statistical model. Take, for example, Laver and Shepsle’s [11] seminal work on the allocation of ministries and government formation: the bargaining model unfolds in three stages, starting with a cabinet proposal, which is then assessed by the coalition and eventually voted on in the chamber. The authors’ spatial model rests on the idea of the Strong Party (S), which “participates in every cabinet preferred by a majority to the cabinet in which party S takes all portfolios” [11], and on the concept of the equilibrium cabinet, the ideal point of the bargaining process, in which the strong party can endure in power because no other combination can bring it down. Prior to testing the model itself, Laver and Shepsle run a computational simulation to validate it. Yet the ultimate goal of their work consists in empirically confronting their mathematical model with real-world data.
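The data-fit recipe can be sketched end to end. The toy example below invents a hypothesis H1 (that minimal-winning status raises a potential coalition’s probability of forming), simulates data consistent with it, and tests it with an off-the-shelf logit. Every variable name, coefficient, and result is fabricated for illustration and corresponds to no actual study.

```python
"""A schematic sketch of the data-fit strategy: a hypothesis derived from a
formal model is tested with an independent statistical model. All data here
are simulated; the hypothesis H1 and its covariates are hypothetical."""
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
minimal_winning = rng.integers(0, 2, n)    # 1 if the potential coalition is minimal-winning
ideological_range = rng.uniform(0, 10, n)  # policy distance spanned by the coalition

# Simulate formation outcomes consistent with the invented H1.
logit_p = -1.0 + 1.2 * minimal_winning - 0.3 * ideological_range
formed = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

X = sm.add_constant(np.column_stack([minimal_winning, ideological_range]))
result = sm.Logit(formed, X).fit(disp=False)
print(result.summary())  # a positive coefficient on minimal_winning supports H1
```

Note that the logit here knows nothing about the formal model beyond the signs predicted by its hypotheses; this separation is precisely what distinguishes the data-fit strategy from the maths-stats one discussed below.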
More recent works follow the same theoretical-methodological formula. Martin and Stevenson [13], for instance, ambitiously derived hypotheses from a large pool of coalitional models and tested them via maximum-likelihood estimation. Diermeier and Merlo [9] and Volden and Carrubba [45] pick models from more modest samples and perform statistical tests in a similar vein. More recently, Martin and Stevenson [14] and Becher and Christiansen [50] have used logit regressions to test for the impact of incumbency [14] and dissolution threats [50]. Similarly, the literature on cabinet termination has also been characterised by an extrapolative touch. Diermeier and Stevenson [41] depart from the Lupia–Strøm model of cabinet breakdown to design a stochastic, probabilistic version combined with data on postwar cabinets of Western democracies. In all cases, the authors were interested in the final outcomes generated by solving the formal models: none attempted to structurally bind their statistical model to the assumptions entailed in the mathematical formal model.
This has profound implications for the so-called “testing of a formal model”. There is much debate about whether one can successfully test a model along these lines [19,20,51,52,53]. It is true that models mediate between the abstract world of concepts and theories and the real world. It is also fair to say that “good science” has “to tell us how to predict what we can of the world as it comes and how to make the world, where we can, predictable in ways we want it to be” [51]. The question is how we can ensure that fitting data that have not been collected with a given formal model in mind is the right and desirable procedure for testing the predictions generated by that model. Data are theory-laden [20], meaning that they have to be theoretically connected to the formal model we aim to test. A theoretical disjunction between data and formal model can only lead to false interpretations and assessments of the latter’s explanatory power.
Some scholars have been tackling this issue by taking a structure-oriented perspective on formal modelling and empirical testing. Instead of confronting data with the very last stage of models (i.e., their outcomes), these modellers derive the statistical model directly from the formal model. Take, for example, Signorino and Yilmaz’s [33] strategic game model, where the authors used Taylor series to expand the regression equations so that they accurately represent the strategic interaction entailed in their analysis. By structurally binding both models (mathematical and statistical), they accounted for nonlinear phenomena that are often ignored (and, consequently, misspecified) in typical statistical tests. In the coalition theory literature, Ansolabehere et al. [6] demonstrate that a similarly cautious approach to testing coalition models is paramount, especially in contexts where testing a formal model requires rethinking how the assumptions entailed in the mathematical model translate into a statistical test. Failing to do so renders the tests unsuccessful in terms of validating a certain set of explanations.
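The intuition can be conveyed in miniature. The sketch below is emphatically not Signorino and Yilmaz’s estimator, only a toy version of the underlying point: when an outcome requires moves by two strategic players, its probability is a product of choice probabilities, so a plain linear regression is merely a truncated Taylor approximation of a nonlinear surface, and adding second-order terms (including the interaction) markedly improves the fit.

```python
"""Toy illustration of why linear regressions misspecify strategic
interaction: the outcome probability is a product of choice probabilities.
Entirely hypothetical; not any published model's actual specification."""
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
x1, x2 = rng.normal(size=(2, 2000))
p = norm.cdf(x1) * norm.cdf(x2)  # both players must "move" for the outcome to occur

linear = np.column_stack([np.ones_like(x1), x1, x2])
taylor2 = np.column_stack([linear, x1**2, x2**2, x1 * x2])  # add 2nd-order terms

for name, X in [("linear", linear), ("2nd-order Taylor", taylor2)]:
    beta, *_ = np.linalg.lstsq(X, p, rcond=None)
    resid = p - X @ beta
    print(f"{name}: RMSE = {np.sqrt(np.mean(resid**2)):.4f}")
# The interaction and quadratic terms cut the approximation error markedly.
```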
Evidently, and in spite of the concerns raised by those working on maths-stats extrapolative models, the data-fitting strategy remains the most common formula for testing formal models. Bäck and Dumont [7] suggest that working with interdependent models is the ideal way of testing the claims of formal coalitional models, but the very complexity of real-world phenomena (who makes a decision, at which point in the bargaining process, with what probability) is frequently too great to allow for the designing of a maths-stats model. Nonetheless, addressing the theoretical concerns raised by this specific type of modelling may help scholars (formal modellers and empiricists alike) in their quest for ever more comprehensive explanations of coalition formation and breakdown.
4. Recent Trends in Coalition Modelling
In the previous section, I surveyed some emblematic coalitional models and classified them according to my taxonomy of formal models. Although I focused primarily on the various ways coalitional models tailor explanations, discussing basic details of their design was inescapable. Precisely these details point to the diversity of the research agenda on coalition formation and breakdown, demonstrating the importance of different perspectives and approaches to explaining coalition phenomena.
It is now time to turn to more recent developments in this field of inquiry. However, instead of surveying and detailing specific research agendas, I will direct my attention to new thematic trends and to how the taxonomy of formal models may help us understand the future paths taken by scholars conducting their investigations on coalitions.
The bulk of studies still addresses coalitional phenomena in domestic politics and economics. Yet, instead of reproducing the archetypical noncooperative bargaining process, new research has elicited other aspects that have been neglected or poorly explored. Bassi’s [54] refinement of the model of endogenous government formation, paired with empirical evidence from West European countries, offers new understandings of minority and surplus governments, a persistent regularity in the real world that puzzles coalition scholars. Her model, although extrapolative to the extent that it conducts a data-fitting test, displays a quasi-conceptual nature in its goal of solving an empirical problem with a formal theoretical framework. Such interconnections between types of models characterise not only the complexity of coalitional phenomena, but also researchers’ theoretical and methodological innovativeness with respect to how these phenomena should be modelled.
This is why a recent trend towards conceptual models has been observed in some works on coalition theory. Approaches are varied, but some developments are worth mentioning. Hagen et al.’s [55] model is constructed upon the concept of a cartel game, where agents have to decide whether or not to cooperate in coalitional bargaining and functioning. By shedding light on the possibilities of cooperation in this type of setting, the authors reframe this branch of analysis and set out the technical language that can be further explored in political contexts where the cartel game applies.
Another example of this conceptual avenue of research is the introduction of externalities into the Shapley value model of coalitional games via stochastic approaches [56] and methods for sharing externalities [57]. Further developments have addressed the linearity assumption entailed in the Shapley–Shubik value, adjusting the original conceptual model to incorporate non-linearities [58]. Departing from Shapley’s concepts, these studies attempt to expand his model by adding the sharing of externalities to the coalition game. Externalities are a pervasive phenomenon in politics and economics, and in the context of coalitions, accounting for them means modelling the benefits reaped as a consequence of being part of a coalition. Various approaches to these particular settings have been proposed since Thrall and Lucas’s [59] n-person games with partition functions (a mathematical function that incorporates externalities, either positive or negative, into the game), which demonstrates the relevance of this research avenue to understanding how coalitions form based on how they share externalities. As innovative as they are, these theoretical and methodological refinements of conceptual models attest not only to the continuation of the research agenda, but also reaffirm the value of understanding the different roles played by different types of models.
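In partition function form, a coalition’s worth is indexed not only by its members but by the entire arrangement of outsiders, which is exactly how externalities enter the game. The minimal data-structure sketch below uses three invented players and payoffs; nothing in it comes from Thrall and Lucas beyond the general idea.

```python
"""A minimal sketch of a partition function game: a coalition's worth
depends on how the remaining players are organised (the externality).
All players and payoffs are invented for illustration."""

# Key: (coalition, arrangement of the remaining players); value: worth.
v = {
    (frozenset("A"), frozenset({frozenset("B"), frozenset("C")})): 3,
    (frozenset("A"), frozenset({frozenset("BC")})): 5,  # B-C merger helps A
    (frozenset("BC"), frozenset({frozenset("A")})): 7,
    (frozenset("ABC"), frozenset()): 12,
}

def worth(coalition, rest):
    """Worth of `coalition`, given how the outsiders are organised."""
    return v[(frozenset(coalition), frozenset(frozenset(s) for s in rest))]

print(worth("A", ["B", "C"]))  # 3: B and C stand alone
print(worth("A", ["BC"]))      # 5: a positive externality from the B-C merger
```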
In terms of themes, the analysis of peripheral contexts has also become a more recent trend in the literature on coalition models. Typically, studies of Latin American presidential systems have resorted solely to statistical evidence and models [60,61,62], but models that focus on pre-electoral coalition bargaining (such as Carroll and Cox’s [63], originally tested with evidence collected from West European democracies) may offer invaluable insights for the design of quasi-conceptual and extrapolative models capable of encompassing the particular reality of Latin American democracies. Borges et al. [61], for instance, analyse the pre-electoral bargaining process in 18 Latin American democracies characterised by multiparty coalition presidentialism, adding to a body of literature that has been profoundly concerned with the particular institutional setting that culminates in coalitional presidential systems [64,65,66]. Similarly, new approaches to coalition formation and breakdown in multiethnic societies have been emerging in the context of African politics, but the traditional statistical approach remains dominant [67,68,69,70,71]. Rather than signalling the triumph of statistics over formal modelling, this opens a window of opportunity for designing models capable of explaining the specific coalitional phenomena of African and Latin American political systems.
Extrapolative models, and more importantly statistical modelling, characterise another recent trend that attempts to understand the idiosyncrasies of coalitional phenomena at the sub-national level. Some of these studies question the assumptions of coalition formation made at the national level, adding further variables to the analysis, such as incumbency, the coattail effect [65], and the specific institutional architecture (namely, local party politics and its logic of electoral competition) of sub-national units [72,73]. Likewise, researchers have been interested in the factors leading to coalition termination in sub-national units, shedding light on the implications of party composition and congruence across different levels of government (national and sub-national) [74].
Evidently, this brief survey only points to some of the various trends in coalition studies. Nevertheless, the renewed interest in refining canonical models by adding other variables of analysis demonstrates the relevance of formal models to this research agenda. This Games Special Issue on “Government and Coalition Formation” is a clear example of how this agenda is moving forward thanks to modellers’ attempts to make sense of the intricate factors leading to coalition formation, functioning, and collapse. It is only natural that the field will generate ever more accurate and sophisticated explanations of coalitional phenomena, increasingly expanding its explanatory range to political systems outside Europe and North America.
5. Conclusions
Coalition theory played a fundamental role in the heyday of positive political theory, contributing to the expansion of this research agenda in the discipline. Throughout its development, coalition models have enlarged our understanding of the reasons why coalitions form and break down, pointing to different parts of the political system to tailor explanations. If we can now speak of political parties, voting weights, the formateur, and bargaining procedures, just to name a few, this is to a large extent thanks to coalitional models.
I argued in this paper that formal models in political science serve different purposes, and that they can all, on their own terms, contribute to the furthering of a research agenda and to the building of our conceptual, theoretical, and empirical knowledge. The taxonomy of formal models I presented here is not simply one among others, but rather a case for how models have pride of place in the edifice of the discipline. Understanding the different purposes they serve in knowledge-building is paramount to rightfully assessing their explanatory power. Furthermore, as recent trends demonstrate, the refinement of models depends directly upon the redesigning of classical models and interpretations of coalitions, and in this process different strategies are necessary according to the type of model one is willing to re-engineer.
Coalition bargaining is still a thriving field, where groundbreaking interpretations, concepts, and ultimately models generate invaluable insights about competition, negotiation, cooperation, and the many processes that culminate in a coalition forming or collapsing. In the coming decades, this research agenda will face complex challenges in dealing with how political agents respond to increasingly polarised electorates, which directly affects how coalitions react to voters’ preferences and their support for political parties. Nonetheless, coalition modelling is fully equipped to respond to these new developments in political systems precisely because the agenda is on the move. After all, modellers shed light on these fascinating phenomena, which are present in all political systems and most prominently in democratic regimes. As long as bargaining remains a desideratum of collective coexistence, coalitions will form and perish, and modellers will play the fundamental role of giving sense and meaning to the very existence of coalitions.