1. Introduction
Legal and regulatory scholarship—or, at any rate, so I take it—aspires to contribute to the better governance of human communities (
Brownsword, forthcoming). At the best of times, this is an important mission; and, in our highly dangerous times when there is widespread concern about existential threats of one kind or another (see, e.g.,
Wittes and Blum 2015;
Bridle 2018;
Mishra 2018;
Skidelsky 2023), the critical significance of good governance cannot be overstated.
Distinctively, the burgeoning literature of ‘law, regulation, and technology’ discusses how we might engage with emerging technologies in ways that improve law’s imperfect governance (see, e.g.,
Hildebrandt 2015;
Brownsword et al. 2017;
Hacker et al. 2019;
De Filippi and Wright 2018;
DiMatteo et al. 2019;
Yeung and Lodge 2019;
Deakin and Markou 2020;
Fairfield 2021;
Clifford et al. 2023;
Vicente et al. 2023;
Brownsword 2024;
Brozek et al. 2024;
DiMatteo et al. 2024). Here, the focus might be on the application of particular legal rules, classifications, and concepts to novel technological phenomena; or on the fitness of regulatory regimes and the regulatory environment; or on the deployment of new tools (and the acceptability of such deployment) for governance purposes whether by operating in conjunction with legal rules or by assisting legal functionaries; and, most radically, on technological management that takes both rule-direction and human functionaries out of the loop of governance.
Against this background, this article highlights a dilemma that arises when we respond to our discontent with law’s governance by turning to what promise to be technological improvements and solutions. On the one hand, we are discontent with law but also attached to governance being a human enterprise; on the other hand, we can see potential benefits in technological governance, but not without some displacement of the human element. In short, we are torn between governance that is efficient but not quintessentially human and governance that is far from perfect but quintessentially human (
Lessig 1999;
Brownsword 2005). Caught on the horns of this dilemma, we attempt to limit the loss of the human element by insisting that governance must be compatible with human rights or human dignity, or, more directly, that governance must limit the applications of technology so that it is human-compatible (
Russell 2020) or remains human-centric.
My discussion of our present dilemma is in five parts. First, the breadth and range of our discontent with law’s governance are sketched. Secondly, the depth of our discontent is analysed. Thirdly, we trace the roots of our discontent to humans, rules, and the plurality of preferences, priorities, and values that are characteristic of those who are subject to governance in modern societies. Fourthly, with emerging technologies now on the radar, we find mixed reviews with regard to law’s governance. On the negative side, new technologies amplify and intensify old concerns—concerns about the authority of law, about regulatory effectiveness, and about the legitimacy of law; but, on the positive side, new tools and technologies promise to secure compliance. We also appreciate that emerging technologies present law with a double transitional challenge: first, law’s governance faces challenges in managing the transition from one kind of order to another; and, secondly, law is challenged by the pressure to transition to a more technological form of governance (bearing in mind that the more directly new tools address the roots of our discontent with law’s governance, the less human the enterprise becomes). Fifthly, we focus on that part of the governance spectrum where a more technological approach is adopted. Starting with tools being used in support of rules and humans, we move towards automation that displaces humans from the loops of governance, and then to technological measures that rely on architecture, design, and coding, rather than direction by rules, to manage those risks that are judged to be unacceptable. However, so long as there continues to be a demand for human-centric applications of technologies, there will be resistance to a more technological approach, and there will be questions about how far humans might, and should, go in deploying new tools with a view to improving law’s imperfect governance.
2. The Breadth of Our Discontent with Law’s Governance
Discontent with law’s governance is potentially wide-ranging. For example, we might have a very low opinion of lawyers and legal officials, and, in many places, there will be discontent with the delays, difficulties, and costs associated with access to justice. Indeed, for many citizens, whatever kind of justice law promises to offer, it will be out of reach (
Barton and Bibas 2017).
There might also be a radical discontent with the claims to authority that are made by those who govern, it being said that law’s governance is nothing more than an instrument for protecting and legitimising historic privilege, colonial force, and the interests of the propertied and commercial classes; even the Rule of Law might be rejected as having a ‘dark side’ and as itself being ‘illegal’ (
Mattei and Nader 2008).
In our technological times, there is much more to be said about these things but, at this stage, we will focus on three very common kinds of discontent—namely, discontent relating to the promise of law’s governance, its positions and policies, and its performance.
2.1. The Promise of Law’s Governance
Characteristically, the headline promise in the prospectus for law’s governance is that it will bring with it the Rule of Law; the legal enterprise is committed to governance by rules, to putting an end both to arbitrary rule and to might being right (
Fuller 1969). This headline promise generates the expectation that law’s governance will establish (or restore) and maintain ‘order’ (connoting security, predictability, calculability, consistency, and so on). Law’s governance promises a haven against disorder, whether it is the disorder of a brutal state of nature or of a lawless Wild West.
Moreover, to the extent that the predictable and consistent application of the rules, and the application of sanctions, come with fair warning (so that everyone knows where they stand) together with a fair opportunity to comply, this reflects a procedurally just regime. So viewed, law’s governance promises more than crime control, more than dispute settlement, and more than sound administration; it promises systems of criminal justice, civil justice, and administrative justice.
While some might think that this is as much as law’s governance should promise, others will want more from law. In particular, they will want a commitment to democratic practices and they will want more than procedural justice (because they otherwise have no protection against rules that, although consistently applied, are themselves unfair or violate human rights and human dignity and the like). However, if the promise is revised to meet these demands, others will worry that there will be tensions that law cannot manage.
2.2. Law’s Positions and Policies
Where the context for law’s governance is stable, there might be some different views about the positions and policies reflected in the initial code, but there will not be pressure for constant review and revision of those positions and policies. However, where the context is more dynamic, where the members of the community are more heterogeneous in their preferences, plans, and priorities, and where there is an expectation of democratic processes, we have a recipe for discontent. Whichever positions law’s governance takes or whichever policies it adopts, it will not satisfy everyone.
Faced with policy choices to be made in every sector and with a diversity of views as to the preferred choices, a tolerant ‘liberalism’ might seem like the least provocative approach for law’s governance. However, on economic questions, liberal policies provoke the discontent of libertarians who think that there is too much governance; and, on social questions, they can provoke the discontent of those who want more governance and less choice for individuals. Discontent with liberalism becomes a proxy for discontent with law’s governance (
Fukuyama 2022).
Not only that, but in representative democracies, where the legislative assembly is made up of a governing party or coalition and the opposition parties, the role of the latter is routinely to register their ‘discontent’ with the positions taken and the policy choices made by the former. While, for professional politicians, being discontent simply goes with the territory of being in opposition, we should not forget that, for many who are neither politicians nor legal officials of any kind, discontent with the positions taken by the law might be much more specific, much deeper, and intensely personal.
In any particular community, members might agree about the core of their criminal code; but, this will leave plenty of room for disagreement about whether certain acts are ‘wrong’ and, if so, whether they should be criminalised. Some of these matters will concern important questions of life and death. For instance, is abortion wrong, is suicide wrong, is euthanasia wrong, and should these acts be criminalised? Many matters will concern sexual preferences and lifestyle choices. For instance, is prostitution, homosexuality, or consensual sado-masochism wrong and should they be criminalised? Is gambling, consuming alcohol, recreational drug use, smoking, or walking the dog in the park wrong, and should any of this be criminalised? If we think that it is too easy to get hold of guns or other instruments of killing and wounding, then should their sale or possession be criminalised? And so on, and on, and on (
Newburn and Ward 2022).
2.3. The Performance of Law
We might judge law’s performance relative to its promises; but, if we are not satisfied with those promises, we will judge its performance relative to whatever prospectus we think law’s governance needs to offer. So, in practice, some might focus their discontent on law’s under-performance relative to order; but, equally, it might be failures relative to democratic practices or justice that attract discontent. Here, I will simply say a few words about discontent relative to order. Broadly speaking, we can check this aspect of law’s performance by asking (i) whether legal guidance is clear, coherent, and consistently applied; (ii) whether there is a satisfactory level of compliance; and (iii) whether law’s governance succeeds in achieving its desired outcomes and only those outcomes.
2.3.1. Guidance
There are many reasons why guidance given by rules might be less than satisfactory. If the rules themselves are not clear, or are contradictory, or they are constantly changing, then even those who are trying their very best to be guided by the rules will struggle. Witness, for example, the difficulties created during the pandemic when the details of the restrictions on social gatherings, social distancing, wearing face masks, self-isolating, and so on, were constantly changing. In the UK, it was not only members of the public who were unsure about the latest restrictions; even the Prime Minister, it seems, was confused by his own rules.
In modern legal systems, both professionals and their clients might find it difficult to cope with the welter of regulation. The Rule of Law might be an unqualified good, but regulatory overload—the Rule of too much Law—is a cause for concern. As
Robin Ellison (
2018, p. xii) puts it, although many governments around the world understand that, by over-regulating, they will invite discontent, few have managed ‘to curb their own regulatory enthusiasm.’ Even if over-regulation is not an issue, once the rules are in the hands of legal officials, the guidance given by law’s governance might be further compromised by apparently inconsistent decision-making—for example, by inconsistency in judicial sentencing (
Kahneman et al. 2021, chp. 1).
That said, while the inconsistent application of clear rules will seem particularly egregious, not all rules are clear. Far from it: the problem, and the discontent, might start with the rules themselves, which hinge on vague concepts (such as ‘reasonableness’ or ‘good faith’ or ‘exceptional circumstances’) or which explicitly confer a broad discretion on decision-makers. This is particularly a problem where the promise of law’s governance is for a just order or where, anticipating changing circumstances and the need for order to be adjusted, there is a self-conscious attempt to govern with flexible standards rather than rigid rules.
2.3.2. Compliance
Within a national legal system, law’s governance might fail to achieve (what is generally regarded as) a satisfactory level of compliance. Too many rules are broken; and too few rule-breakers are held to account. Essentially, this is a three-sided problem: there is resistance and reaction on the part of those who are governed; there is complicity on the part of those who are responsible for governance; and there is exposure to external interference (
Brownsword and Goodwin 2012).
To start with resistance and reaction, while law’s governance does not operate on Utopian and unrealistic assumptions, it does assume a rational engagement with its signals. On this assumption, non-compliance might occur in three scenarios: (i) where the prudential calculation indicates that non-compliance is the better option; (ii) where the moral calculation indicates that non-compliance is the right thing to do; and (iii) where those who are governed are guided by quite different considerations such as a professional code (that happens to be at odds with the legal requirements) or by the (non-compliant) actions of some reference group (such as their peers or neighbours or workmates). In the first two cases, the risk is that those who think for themselves, whether prudentially or morally, will judge that they ought not to comply—or, as a variation on this theme, they will judge prudentially that they should put themselves out of jeopardy by side-stepping law’s requirements (often then leading to the law having unintended negative consequences). In the third case, those who are governed are not engaged by the law and their orientation is to norms or practices—ranging from ‘enlightened’ to purely self-serving—that are not necessarily aligned with the law’s vision of a well-ordered society.
Then, on the side of those who govern, we have to reckon with complicity. Sadly, corruption can be found in many domains—in sport, in business, as well as in politics—and it can be found worldwide (
Spector 2022). Even where law’s governance is not overcome by a culture of corruption, its effectiveness can be compromised by ‘capture’—by the realities of those who govern being answerable to their constituents or being subject to the influence of their sponsors and supporters.
Capture is more subtle than corruption and it can be less direct. For example, as
Thomas Hazlett (
2017) has convincingly argued, the record of the Federal Communications Commission in allocating radio and television band spectrum has tended to favour the few large incumbents, with ‘technical reasons’ and the ‘public interest’ routinely being put forward as the justifying reasons for (restrictive) licensing decisions. Similarly, a regulatory agency that is responsible for ensuring that medical products are safe for use might find itself at odds with elements in the pharmaceutical industry that prioritise commerce over consumer safety and patient well-being. The industry might try to reshape the agency’s thinking by direct involvement in its activities, or it might apply pressure indirectly by lobbying politicians who control the resources available to the agency (
Mundy 2001).
Completing this three-sided problem, we have external interference. In principle, compliance with law’s governance within a particular zone (such as a nation state) can be undermined by the activities of parties who operate outside that zone. The interference in question might be active and directly conflictual (involving border crossing and disinformation); or, it might be active but not openly conflictual (as where regulatory arbitrage is practised or tax havens are established to attract business); but it can also be passive—for example, the availability of an assisted dying facility in Switzerland puts pressure on local criminal laws. Anticipating a point made later in the article, the emergence of an online marketplace for goods and services has taken external interference to a different level, dramatically altering the regulatory environment as well as creating new vulnerabilities to cybercrime and cyberthreats (
Johnson and Post 1996;
Murray 2006).
2.3.3. Outcomes
Often, law’s governance struggles to achieve its desired outcomes—at best, it might be a ‘damp squib’; at worst, it leads to unintended and undesirable consequences (
Goddard 2022, pp. 17–20). Notoriously, while there might be occasional ‘wins’ in some skirmishes with suppliers of narcotics, the law seems destined to lose its war on recreational drug use. Similarly, groups who are intended to be the beneficiaries of a legal intervention might not actually obtain any benefit, sometimes because a regulatory agency is not adequately resourced, at other times because the burden of enforcing their rights is too great for the intended beneficiaries. The law does not make the situation worse but its attempt to make it better comes to nothing.
We also see, however, many instances of law’s governance being counter-productive. For example, there might be well-intended regulatory initiatives that are designed to protect a certain class of persons who are vulnerable (such as children, tenants, or patients and research participants), but the burdens imposed on business or the professions lead to unintended consequences. As a result, people no longer volunteer to work with children; or, fearing that children might be abused, social services take too many children away from families where they are not actually at risk; landlords pull out of the rented accommodation sector, creating a housing shortage; or doctors and researchers take up other (perfectly lawful) options.
Accordingly, even if law’s governance is ‘by and large effective’ (as jurists often put it), in the sense that the level of compliance is satisfactory, we need to ask whether the overall impact of a particular legal intervention (its full range of impacts and effects) is beneficial.
3. The Depth of Our Discontent
Our discontent with law’s governance is broad, but how deep does it go? In principle, we could map a person’s or a group’s discontent with the law along a number of dimensions, starting with the breadth of discontent and then with its intensity. When people take to the streets in large numbers to make their protests, or when their grievances are headline news, we can surmise that their discontent is pretty intense. Although I do not bring any new empirical or ethnographic findings to the discussion (compare, e.g.,
Fassin 2013), I can say something about the depth of our human interests. This is important in its own right but it also connects to our discontent in the sense that the deeper the human interests that are at stake, the more intense our discontent should be.
Generally, in democratic societies, even where those who govern are pretty good at identifying the breadth and range of concerns, they tend to be less good at relating particular concerns to the depth of human interests. In communities where fundamental values are recognised and treated as privileged when they conflict with non-fundamental concerns, it will be understood that the former go deeper than the latter. However, we need to recognise a further level of human interests that goes deeper than all other levels of interests.
At this further level, which relates to the conditions for the possibility of human existence, human agency, and viable human communities, there are three imperatives: first, humans must protect the global commons, respecting planetary boundaries and the planet’s resources, lest human existence on Earth become unsustainable (
Rockström et al. 2009); secondly, humans must observe the conditions for peaceful co-existence, both between humans in a particular community and between communities; and, thirdly, humans must respect the conditions that support their agency and autonomy.
Once we have explicitly identified this deepest level of human interest (the interest in the generic conditions for viable human communities), and once we bring this together with the human interests in fundamental community values and in non-fundamental values, we can translate what we have into a three-level scheme of governance responsibilities.
At the deepest level, those who have governance responsibilities must act as stewards for the infrastructural conditions that enable humans to exist on Earth and form their own communities there. These responsibilities apply to human governance as such and they are non-negotiable. By contrast, the responsibilities at the other levels are contingent, depending on the fundamental values and the interests recognised in each particular community.
Within each particular community, those who govern have a responsibility to ensure that the constitutive values of their particular community are respected. At this level, we can have a plurality of views both between communities and within a particular community where the distinctive values are contested. For example, while one community might be committed to liberal human rights and human dignity, another might be committed to conservative dignitarian principles; and, in each community, there might be more than one view about the interpretation or application of these distinctive values (
Beyleveld and Brownsword 2001).
Finally, within each community, there will be many debates about questions that do not implicate either the global imperatives or the community’s particular fundamental values. Judgments about benefits and risks, and about the distribution of benefits and risks, might be varied and conflictual. Inevitably, whether one favours seeking consensus or sharpening difference, dealing with a plurality of competing and conflicting views will be messy.
At this level, the responsibility of regulators is to seek out an acceptable accommodation between the competing and conflicting interests of individuals and groups that are members of the community. Where the issues are both widely and deeply contested, the accommodation will be unlikely to satisfy everyone. There is no right answer as such; and, given a broad margin for ‘acceptable’ accommodation, there are likely to be several regulatory positions that can claim to be reasonable, even though we have no compelling reason to favour one such reasonable accommodation over another (
Brownsword and Wale 2018).
There is much more to be said about this schematic picture of governance responsibilities. In particular, it needs to be emphasised that good governance must build from respect for the global commons. However, having written about this elsewhere (
Brownsword 2019,
2020,
2022,
2023a), I will move on.
4. The Roots of Discontent
Stated shortly, we can identify three root causes of our discontent with law’s governance. These are the fact that law’s governance is a human enterprise; law’s reliance on rules, principles, and standards; and the degree of heterogeneity (plurality) in a community. Given these features, law can give no guarantees: it cannot guarantee that those who are subject to its governance will comply with its rules, even less that all those who are governed will view the rules or other official decisions or policies as aligning with their own particular interests or moral viewpoint.
There is much to be debated about the nature and imperfections of humans. However, for present purposes, let us take it that humans are complex. As the social and cultural psychologist
Jonathan Haidt (
2012, p. 222) observes, humans exhibit a ‘strange mix of selfishness and selflessness’. Humans are not one-dimensional and their behaviour is not entirely predictable. Suffice it to say that, where humans incline towards self-governance, default to self-interest, and are prone to act on short-term calculation, then, given the opportunity by law’s governance, they might defect from compliance. What is more, where this is the pattern of human conduct, humans will also press for their own preferences and priorities to be recognised—which, in a heterogeneous community, means that, typically, those who govern will be faced with a plurality of views.
In this diagnosis of discontent, a key factor is the extent of the opportunity for defection that is given by law’s governance. While lawyers are perfectly familiar with the ‘open texture’ and vagueness of the rules, standards, and principles on which law’s governance relies, the more important opportunity arises from the fact that rules dictate only what ought to be (or may be) done, not what can and cannot be done.
For example, recalling Isaac Asimov’s first rule for robots, namely, ‘A robot may not injure a human being or, through inaction, allow a human being to come to harm’, we immediately see room for interpretation (
Pasquale 2020). Should we read ‘injure’ (in the first limb of the rule) as being co-extensive with ‘harm’ (in the second limb of the rule)? If so, what exactly is covered? If not, and if ‘harm’ is broader than ‘injury’, is it not odd to read the rule as giving humans broader protection against inaction by robots than against action by robots? Moreover, the full extent of the affordances becomes apparent if we imagine that a robot has to make a choice between acting to save the life of one human, A, by sacrificing another human, B, and not acting and letting A lose his life. In the light of these affordances, we can agree with
Jacob Turner (
2019, pp. 1–2) that Asimov’s laws suffer from ‘gaps, vagueness and oversimplification’ and that they ‘do not say what a robot should do if it is given contradictory orders by different humans’.
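To see how little work the rule does on its own, consider what happens when Asimov’s first law has to be rendered as code. The following is a minimal sketch only (in Python; the class, the fields, and the INACTION_COUNTS flag are my assumptions, not anything given by the rule itself): each interpretive question that the rule leaves open has to be settled, one way or another, before the program will run; and, on the dilemma just described, the coded rule leaves the robot with no compliant option.

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    # Coding the rule forces us to operationalise 'injure' and 'harm'
    # as distinct fields; the rule itself leaves their relation open.
    injured_by_action: int  # humans injured by the robot's own action
    harmed_overall: int     # humans who come to harm, however caused

INACTION_COUNTS = True  # interpretive choice: does allowing harm breach the rule?

def violates_first_law(robot_acted: bool, outcome: Outcome) -> bool:
    """One possible coding of: 'A robot may not injure a human being or,
    through inaction, allow a human being to come to harm.'"""
    if robot_acted and outcome.injured_by_action > 0:
        return True
    if (not robot_acted) and INACTION_COUNTS and outcome.harmed_overall > 0:
        return True
    return False

# The choice described above: act to save A by sacrificing B,
# or do nothing and let A lose his life.
act_and_sacrifice_B = Outcome(injured_by_action=1, harmed_overall=1)
stand_by_and_let_A_die = Outcome(injured_by_action=0, harmed_overall=1)

print(violates_first_law(True, act_and_sacrifice_B))      # True
print(violates_first_law(False, stand_by_and_let_A_die))  # True: no compliant option
```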
So much, so familiar. Imagine, though, that this were the first rule for humans. The interpretive affordances would persist, but humans might be much less disposed than robots to observe the rule; and the rule itself affords the practical opportunity for non-compliance.
If governance is to reduce non-compliance, it cannot simply leave humans to follow rules. For, the practical affordances of rules mean that, other things being equal, speed limits can be ignored, life can be endangered, contracts can be broken, property can be stolen, and so on. If we want better governance, then perhaps we should consider new tools and technological solutions.
5. The Promise of New Technologies
When new technologies emerge, new levels and layers of discontent are added to the legacy of discontent that we have with law’s traditional governance. We can start by offering a short overview of these new layers and levels of discontent before turning to the promise (but also the potential perils) of new technologies as tools applied for governance purposes.
5.1. Discontent in Our Technological Times
In principle, new technologies might exacerbate existing discontent with law’s governance by extending the range of the discontent or by amplifying and intensifying existing discontent. Having already experienced a considerable level of technological innovation, from the technologies of the industrial revolution to the burst of new technologies during the present century, we can say with some confidence that discontent will be extended, amplified, and intensified and, moreover, that new reasons for discontent are likely to be provoked.
First, discontent with authority might now become more salient. Strikingly, this is so in relation to the development of new online spaces where ‘cyberlibertarians’ and ‘Internet separatists’ have pushed back against the claims to authority made on behalf of the governance of national legal systems (
Barlow 1996;
Reidenberg 2005). With the development of virtual environments in the Metaverse, we might again find that those who spend time in these new immersive spaces will push for self-governance rather than governance imposed by law (
Reed and Murray 2018;
Brownsword 2023b).
Secondly, with regard to the acceptability of the positions taken by law’s governance, we find that new biotechnologies have awoken some sleeping dogs, notably the ethics of human dignity. As
Mary Warnock (
1993) once pointed out, modern developments in embryology and genetics mean that there are many things that we
can do now but the governance question is whether we
ought to do them. However, where there is both uncertainty surrounding the applications of technology (and their consequences) and a plurality of views, both prudential and moral, then the challenge for law’s governance is heightened. In particular, the idea that we should not utilise technologies that have the prospect of beneficial application but which some view as compromising human dignity will tend to divide rather than unite communities. Similarly, anticipating our present governance dilemma, we can also expect the concern that applications of AI should be human-centric to provoke divisions about how ‘human-centricity’ is to be interpreted. Whatever the position that law’s governance takes up, it will leave some members of the community discontent.
Thirdly, new technologies create a tension between, on the one hand, those members of the community who favour a proactionary and facilitative approach to innovation and, on the other, those who favour a more precautionary and restrictive approach that prioritises the management of risk. Already, we can see such a contrast in the different approaches to the governance of AI that are being pioneered in Brussels (where precaution and risk are uppermost) and in London (where a pro-innovation approach is favoured). For those who favour proaction, the lightly regulated development of the Internet supports a facilitative approach; but, for those who favour precaution, this example is countered by the experience of lightly regulated drugs prior to the thalidomide catastrophe (
Tutt 2017). Moreover, even the Internet example is less than compelling. For the many problems that are now becoming all too apparent in our information societies suggest that we should have tried harder to anticipate and manage the risks associated with the Internet in a more precautionary manner (
Borghi and Brownsword 2023).
Fourthly, with regard to the effectiveness of law’s performance, new technologies (particularly cybertechnologies) present new tools and new opportunities for crime. In response to various kinds of cybercrime, the governance of national legal systems leaves much room for discontent and, even with significant international and transnational cooperation, we are left with far too many havens for criminality and evasion. National regulators, as we noted earlier, will also find that a global online marketplace dramatically reduces the effectiveness of local restrictions. Connectivity, we might conclude, is a mixed blessing.
Fifthly, a further twist is given if new tools (such as DNA profiling and other biometrics, AI-enabled profiling, or surveillance and recognition technologies) are deployed to encourage compliance. This is likely to provoke discontent and resistance by civil libertarians and proponents of human rights. In other words, there will be renewed discontent around the trade-offs between effectiveness and legitimacy in the design and operation of the criminal justice system.
Sixthly, no doubt, there will be other forms of discontent, including quite possibly new discontent with legal officials who do, as well as those who do not, make use of the latest (LawTech and RegTech) tools for discharging legal and regulatory functions; and, similarly, there is likely to be discontent with those legal practitioners who do, as well as those who do not, make use of new tools for the delivery of legal services. At the same time, the prospect of a better performing governance by technology will cast a shadow of discontent over law’s governance where it relies on rules and standards which have too many affordances for interpretation and non-compliance.
5.2. Westways
As food for thought about our regulatory options, let me re-tell the tale of governance at the fictitious golf club, Westways (
Brownsword 2020). This, it should be said, is a case of private governance; but, whatever lessons are to be taken from this story, they apply equally to public governance.
The story at Westways begins with the purchase of some golf carts. Initially, the carts are used responsibly. However, as the membership of the golf club changes, there are some incidents of irresponsible cart use and damage to the greens. The club responds by adopting a rule that prohibits taking carts onto the greens and that penalises members who break the rule. Unfortunately, this intervention does not help; indeed, if anything, the new rule aggravates the situation. While the rule is not intended to license the irresponsible use of the carts (on payment of a fine), this is how some members perceive it; and the effect is to weaken the original ‘moral’ pressure to respect the interests of fellow members of the club (compare,
Gneezy and Rustichini 2000).
Taking a further step to discourage breaches of the rule, it is decided to install a few CCTV cameras around the course. Not everyone is happy about this. There are mutterings about a corrosion of trust and, in practice, the cameras prove problematic. First, the camera coverage is patchy, so that it is still relatively easy to break the rule without being seen in some parts of the course. Secondly, because the cameras are directed at the greens, they capture some unwelcome examples of ‘cheating’ by members (who, for example, pinch a few inches when replacing a ball that they have marked). This is not the rule-breaking that the cameras were designed to detect. Thirdly, old Joe, who is employed to watch the monitors at the surveillance control centre, is easily distracted, and members soon learn that he can be persuaded to turn a blind eye in return for the price of a couple of beers. Given these problems, members decide that the cameras are doing more harm than good and they vote to remove them. Once again, the club fails to find a way of channelling the conduct of members—a way that is both effective and acceptable—so that the carts are used in a responsible fashion.
At this juncture, the club turns to a technological fix. The carts are modified so that, if a member tries to take a cart too close to one of the greens (or to take the cart off the course), an alert sounds and, if the warnings are ignored, the cart is immobilised. At last, thanks to technological management, the club succeeds in realising the benefits of the carts while also protecting its greens.
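By way of illustration only, the control logic of the modified carts might look something like the following sketch (the story specifies only an alert followed by immobilisation; the names, thresholds, and geometry are invented for the example):

```python
import math

WARN_DISTANCE_M = 10.0       # begin warning this close to a green (assumed)
IMMOBILISE_DISTANCE_M = 5.0  # cut the motor this close to a green (assumed)

GREENS = [(100.0, 200.0), (450.0, 80.0)]  # stand-in coordinates for the greens

def distance_to_nearest_green(x: float, y: float) -> float:
    """Straight-line distance (in metres) from the cart to the closest green."""
    return min(math.hypot(x - gx, y - gy) for gx, gy in GREENS)

def control_cart(x: float, y: float) -> str:
    """The cart does not ask whether the member ought to stop; past the
    inner threshold, driving on is simply not possible."""
    d = distance_to_nearest_green(x, y)
    if d <= IMMOBILISE_DISTANCE_M:
        return "IMMOBILISED"  # motor disabled; the green cannot be reached
    if d <= WARN_DISTANCE_M:
        return "ALERT"        # audible warning sounds
    return "DRIVE"

print(control_cart(98.0, 200.0))   # IMMOBILISED
print(control_cart(92.0, 195.0))   # ALERT
print(control_cart(300.0, 300.0))  # DRIVE
```

The point of the sketch lies in the first branch: once the inner threshold is crossed, the member’s moral or prudential deliberation is simply beside the point.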
With technological management, governance shifts into a completely different mode: once the carts are redesigned, it is no longer for members to decide, on either moral or prudential grounds, to use the carts responsibly; at the end of the story, the carts simply cannot be driven onto the greens, and the signals are entirely to do with what is possible and impossible, with what can and cannot be done. To turn Mary Warnock’s advice around, if we cannot do a particular thing, then the question of whether we ought or ought not to do it does not arise.
While we might read the story at Westways as indicating the promise of technological fixes as more effective ways of undertaking governance, we should also note that the transition to technological management is not all plain sailing. Even if the fix applied to the carts is acceptable to members, the experiment with CCTV cameras reminds us that not all applications of governance by technology will be acceptable. Imagine, for example, how the members might have responded had it been proposed that a robot should be deployed to patrol the greens and programmed to apply lethal force should any rule-breaking be detected.
To what extent Westways should be read as the future foretold is hard to say. Perhaps it should be treated as no more than a footnote caveat to what will prove to be the inexorable rise of technological fixes in place of rules and, with that, to the reduction of discontent with governance; or, perhaps, it warns of human pushback where governance by technology itself provokes discontent. At any rate, the story raises questions about how far we humans will be willing to buy into governance by technology, even though it might have some apparent benefits over governance by rules.
6. Human-Centricity and Reservations about Governance by Technology
At Westways, the technological solution is not enabled by state-of-the-art AI, but it is the recent leaps forward in AI (with machine learning and now generative AI) that have propelled technologies into the governance spotlight. However, there is clearly a tension between technologies that work best when humans are out of the loop and humans who work best when they are centre stage. The question now is how far we humans might be prepared to go with smart, technologically managed environments for our interactions and transactions.
6.1. Human Centricity
In the EU, Article 22 of the General Data Protection Regulation puts down a marker against solely automated decision-making (
Brownsword and Harel 2019) and the High-Level Expert Group on AI has been particularly influential in its insistence on a ‘human-centric’ approach to the governance of AI. The EU sometimes seems to equate the idea of human-centricity with respect for human rights but we might be thinking about more than that. For example, our first thoughts might be that, if limited to human-centric applications, AI would not kill, injure, or otherwise harm humans; AI would not degrade humans, or take them out of the centre of the story (
Kissinger et al. 2022, p. 179); AI would align with human purposes and values; and, humans would remain in control and have the final word on AI applications.
In this context, it is very important to underline the point we made earlier about the deepest level of human interests and the multi-level responsibilities of those who govern. Accordingly, while each community will have overriding reasons to protect its distinctive and identifying fundamental values, all communities will have categorically binding reasons to protect, preserve, and respect the conditions that make it possible for humans to form their own communities and to develop their capacity for agency in their own way. So, whether the governance of AI is tilted towards proaction or precaution in relation to the benefits and risks identified by the community, its first priority is to ensure that the generic conditions for human existence and agency are protected. It follows that we might read human-centricity as going deeper than perhaps even the EU appreciates.
We can explore the way in which human-centricity might bear on three technologically enabled governance scenarios: (i) where AI tools assist humans who have governance functions; (ii) where AI replaces humans who previously performed governance functions; and (iii) where AI is an element of an environment in which the conduct of humans is controlled by technological management.
6.2. AI Assisting Humans
Those humans who have public governance responsibilities in legal systems may be assisted in various ways by AI tools. For example, those who have criminal justice responsibilities, from the police to prison officers, might use AI to advise on their best use of resources; parole boards might be guided by an AI-enabled risk profile of the offender; and judges might be advised on the exercise of their sentencing discretion. But, would such uses be acceptable, would they be compatible with fundamental values, and would they cohere with the maintenance of the generic conditions?
In State of Wisconsin v Loomis, 881 N.W.2d 749 (Wis. 2016), the question was whether judicial use of an AI risk assessment tool was consistent with the requirement of due process in criminal cases. The defendant in Loomis denied involvement in a drive-by shooting but pleaded guilty to a couple of less serious charges. Having accepted the plea, the Circuit Court ordered a Presentence Investigation Report to which a COMPAS risk assessment was attached. That assessment showed the defendant as presenting a high risk of recidivism; and the Court duly relied on the assessment, along with other sentencing considerations, to rule out probation.
As is well known, many questions have been raised about whether apparently colour-blind algorithms of the kind incorporated in tools such as COMPAS do actually involve a racial bias (see, e.g.,
Corbett-Davies et al. 2016). However, responding to the defendant’s appeal on due process grounds, the Wisconsin Supreme Court ruled that, subject to some caveats, the use of AI risk-assessing tools did not violate due process. In particular, the Chief Justice emphasised that, although the Court’s holding ‘permits a sentencing court to
consider COMPAS, we do not conclude that a sentencing court may
rely on COMPAS for the sentence it imposes’ (772). The legitimate function of COMPAS, in other words, is to assist judges, not to replace them.
Granted, we might wonder whether, in practice, judges or other decision-makers will be able to maintain a critical distance from the AI tools that they use (in the context of the private governance of employment, see
O’Neil 2016; and
Schellmann 2024); but, on the face of it,
Loomis echoes the EU’s concern that applications of AI should be human-centric.
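If the Loomis caveat were to be built into the tooling itself, one crude possibility would be to gate the tool’s output so that it can inform, but never determine, the decision. The following sketch is entirely hypothetical (COMPAS is proprietary, and nothing here reflects how it, or any court system, actually works):

```python
from dataclasses import dataclass

@dataclass
class RiskAssessment:
    score: float           # e.g., 0.0 (low) to 1.0 (high risk of recidivism)
    advisory: bool = True  # flagged as an input to, not a substitute for, judgment

@dataclass
class SentencingDecision:
    outcome: str
    independent_reasons: list  # the court's own sentencing considerations

def decide(assessment: RiskAssessment, outcome: str,
           independent_reasons: list) -> SentencingDecision:
    """A 'consider, not rely' gate: the decision is rejected unless it is
    supported by reasons that do not reduce to the risk score itself."""
    if not independent_reasons:
        raise ValueError("The score may be considered, but the sentence "
                         "must rest on the court's own reasons.")
    return SentencingDecision(outcome, independent_reasons)

# Usage: the high score is before the court, but probation is ruled out
# on other sentencing considerations as well.
decision = decide(
    RiskAssessment(score=0.82),
    outcome="probation ruled out",
    independent_reasons=["seriousness of the offence", "prior record"],
)
print(decision.outcome, decision.independent_reasons)
```

Of course, such a gate can verify only that reasons were given, not that the decision-maker genuinely maintained a critical distance from the score; that is precisely the worry just noted.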
6.3. AI Taking Over from Humans
In the sphere of private governance, automated dispute resolution is already with us (
Katsh and Rabinovich-Einy 2017); for example, eBay’s automated Resolution Centre handles millions of disputes each year. Meanwhile, in the public sphere, it is reported that China is leading the way with AI Internet Courts which, according to
Santosh Paul (
2020), are dealing with a range of disputes, including those arising from e-commerce and those concerning IP issues (see, too,
Chesterman 2021, chp. 9). What should we make of this? What governance work
can smart technologies actually do? To the extent that we have the tools to automate governance, how far
ought we to rely on them? How far is automated governance compatible with our commitment to human-centricity?
There are many questions to be asked about how far smart technologies can replace humans who have governance responsibilities. In particular, where there have been well-publicised scandals about automated governance, such as the AI scandals in the Netherlands (
Heikkilä 2022) and Australia (
Starcevic 2022), we will rightly want to take a hard look at the claims being made for improved governance by technology (
Yeung 2023;
Sanchez-Graells 2024).
In the case of judges, for example, we do well to recall that they perform more than one function. While machines might be able to outperform humans in predicting how an agreed rule might be applied to an agreed set of facts, this is not adjudicating a dispute. Presented with a dispute, judges not only have to decide the case rather than predict an outcome; they also have to find the facts, draw inferences from the facts (about, for example, the intentions of the parties), interpret and apply contested concepts in the agreed rules and principles, and determine which rule is the applicable rule.
So far as the core criminal offences are concerned, we might also wonder whether we could automate determination of the mens rea requirement. Could we train AI to recognise the relevant intent or recklessness? Granted, there are many non-core offences where there is no mens rea requirement as such, and it might be less challenging for AI to deal with such regulatory crimes; and, if we find it acceptable to have strict liability in these kinds of criminal offences, particularly where a fine is the standard punishment, why not also accept AI dispositions of these cases? Would we object to such applications on the ground that they are no longer human-centric (because humans are no longer central to the disposition of these cases) or would we be prepared to trade off the advantages of AI disposition against the loss of a human decision-maker?
Another thought is that AI would have difficulty in giving reasons for decisions in the way that we expect in the common law world, whether in legal proceedings or administrative decision-making. A description of the process by which the AI arrived at its decision would look nothing like a reasoned decision given by a human, and disclosing inscrutable algorithms, even if technically transparent, would hardly assist. But, what if, in tandem with the decision-making AI, a reason-giving AI were to mimic the kind of reasoned judgment that we might expect from a human judge? Would this suffice?
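Purely to make the architecture of that question visible (every name in the following sketch is hypothetical; no real system is being described), the tandem arrangement might be coded along these lines:

```python
from dataclasses import dataclass

@dataclass
class Ruling:
    outcome: str
    rationale: str

def decision_model(facts: dict) -> str:
    """Stand-in for an opaque decision-making AI (hypothetical)."""
    return "claim dismissed" if facts.get("limitation_expired") else "claim allowed"

def reason_giving_model(facts: dict, outcome: str) -> str:
    """Stand-in for a reason-giving AI that mimics a reasoned judgment.
    Note that it rationalises the outcome after the fact; it does not
    report how the decision model actually reached it."""
    return (f"Having considered the facts {sorted(facts)}, "
            f"the tribunal concludes: {outcome}.")

def tandem_ruling(facts: dict) -> Ruling:
    outcome = decision_model(facts)                  # decision first...
    rationale = reason_giving_model(facts, outcome)  # ...reasons generated separately
    return Ruling(outcome, rationale)

print(tandem_ruling({"limitation_expired": True}).rationale)
```

Laid out in this way, the worry is plain: the ‘reasons’ are generated independently of whatever actually produced the outcome, which is one ground for doubting that they would suffice.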
For some time, it might be that the limits that we set on governance by technology, limits that are dictated by our interpretation of human-centricity, largely track those governance functions that we are confident can be reliably performed by the technologies. It could be that human-centricity will only become a material constraint when the technologies get significantly smarter.
6.4. Technological Management
Already the governance of online platforms is a major challenge and, looking ahead to new virtual reality environments of the kind contemplated by the developers of the Metaverse, the challenge will not get any easier (
Ball 2022). For present purposes, it is not so much public governance as private governance that is the concern. For, we can expect there to be considerable use of governance by technological management, including the protection of the platform by designing the environment in such a way that any uses that might compromise the viability of the project are technologically managed (
Brownsword 2023b).
In off-line environments, comprehensive technological management might be more difficult. Nevertheless, at a modern international airport, governance is largely by means of technological management. To be sure, passengers are still bound by the ordinary rules of law that apply outside the airport, but local governance does not rely on rules in this way. Rather, the architecture and design of airports ensure that progression from the arrivals lounge onwards is not possible without passing through gates which will open only once the bar code on a boarding pass has been successfully scanned (or once facial recognition technology has confirmed the passenger’s identity). The architecture and design of international airports, in conjunction with the automation of processes, control the flow of passengers. This is governance by technological management of the airport spaces together with the use of smart machines (including the use of AI to profile passengers and assess the risk that they might present).
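As a minimal sketch of this mode of governance (the credentials and checks below are invented for illustration), the gate’s logic makes no appeal to what passengers ought to do; it simply settles what they can do:

```python
# Illustrative gate logic: progression is possible only on a valid credential.
VALID_BOARDING_PASSES = {"BP-48217", "BP-55930"}  # stand-in for scanned bar codes
ENROLLED_FACES = {"face-id-0031"}                 # stand-in for biometric matches

def gate_opens(boarding_pass=None, face_id=None) -> bool:
    """The gate does not warn, fine, or reason with the passenger; without
    a valid credential, passing through is simply not possible."""
    return boarding_pass in VALID_BOARDING_PASSES or face_id in ENROLLED_FACES

print(gate_opens(boarding_pass="BP-48217"))  # True: the gate opens
print(gate_opens(boarding_pass="BP-99999"))  # False: no passage
print(gate_opens(face_id="face-id-0031"))    # True: biometric match
```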
Already being habituated to this kind of governance at airports, passengers might not push back against it in the name of human-centricity. Nevertheless, by closing off various practical options to passengers, technological management precludes our taking responsibility or credit for our acts (
Kerr 2010). Where doing the required or right thing is the only thing that can be done, we are not acting freely; we are not acting as agents. That said, where technological management is applied for the sake of security, safety, and the like, we might accept it. This does not mean that, in the name of risk management, we should (or will) give governance by technology a free pass. We should (and hopefully will) continue to ask whether technological management is compatible with human-centricity, whether relative to the constitutive values of our community or, above all, relative to the generic conditions that reflect the deepest of human interests (
Gavaghan 2017).
7. Concluding Remarks
There is much uncertainty about our technological futures, about the prospects that humans and humanity have for their continuing existence given the extraordinarily dangerous technologies that are now at our disposal (
Ord 2020), and about how far we will go with governance by technology, displacing humans and rules. If this kind of governance is good for order but not so satisfactory for democracy or justice, some communities might be prepared to trade one for the other, but, in other places, humans will persist with imperfect governance. The lesson to be taken is that, in all spheres of governance and in all human communities, it is essential that the applications of new technologies are controlled so that they do not undermine the generic conditions which are presupposed by viable groups of human agents. Law’s imperfect governance relative to the expectations of a particular community is one thing; but, if governance fails to prevent the compromising of the generic conditions, that really would be catastrophic.