Research | Journal Articles
The Rise of AI-based Decision-Making Tools in Criminal Justice: Implications for Judicial Integrity
Discusses the increasing use of artificial intelligence (AI) in decision-making, specifically machine decision tools in criminal justice proceedings, and the justifications for their use. Evaluates their benefits and drawbacks, and their potential risks to judicial integrity and the rule of law.
Regulation by blockchain: the emerging battle for supremacy between the code of law and code as law
Many advocates of distributed ledger technologies (including blockchain) claim that these technologies provide the foundations for an organisational form that will enable individuals to transact with each other free from the travails of conventional law, thus offering the promise of grassroots democratic governance without the need for third-party intermediaries. But does the assumption that blockchain systems will operate beyond the reach of conventional law withstand critical scrutiny? This is the question which this paper investigates, by examining the intersection and interactions between conventional law promulgated and enforced by national legal systems (ie the ‘code of law’) and the internal rules of blockchain systems, which take the form of executable software code and cryptographic algorithms operating via a distributed computing network (‘code as law’). It identifies three ways in which the code of law may interact with code as law, based primarily on the intended motives and purposes of those engaged in developing, maintaining or undertaking transactions upon the network, referring to the use of blockchain: (a) with the express intention of evading the substantive limits of the law (‘hostile evasion’); (b) to complement and/or supplement conventional law with the aim of streamlining or enhancing compliance with agreed standards (‘efficient alignment’); and (c) to co-ordinate the actions of multiple participants via blockchain so as to avoid the procedural inefficiencies and complexities associated with the legal process, including the transaction, monitoring and agency costs associated with conventional law (‘alleviating transactional friction’). These different classes of case are likely to generate different dynamic interactions between the blockchain code and conventional legal systems, which I describe as ‘cat and mouse’, the ‘joys of (patriarchal) marriage’ and ‘uneasy coexistence and mutual suspicion’ respectively.
Why do public blockchains need formal and effective internal governance mechanisms?
With the birth and rise of cryptocurrencies following the success of Bitcoin and the popularity of “Initial Coin Offerings”, public awareness of blockchain technologies has substantially increased in recent years. Many blockchain advocates claim that these software artefacts enable radically new forms of decentralised governance by relying upon computational trust created via cryptographic proof, obviating the need for reliance on conventional trusted third-party intermediaries. But these claims rest on some key assumptions, which this paper subjects to critical examination. It asks: can existing mechanisms and procedures for the collective decision-making of public blockchains (which we refer to as internal blockchain governance) live up to these ambitions? By drawing upon H.L.A. Hart’s The Concept of Law, together with literature from regulatory governance studies, we argue that unless public blockchain systems establish formal and effective internal governance, they are unlikely to be taken up at scale as a tool for social coordination, and are thus likely to remain, at best, a marginal technology.
Algorithmic regulation: a critical interrogation
Innovations in networked digital communications technologies, including the rise of ‘Big Data’, ubiquitous computing and cloud storage systems, may be giving rise to a new system of social ordering known as algorithmic regulation. Algorithmic regulation refers to decision-making systems that regulate a domain of activity in order to manage risk or alter behaviour through the continual computational generation of knowledge, systematically collecting data (in real time, on a continuous basis) emitted directly from numerous dynamic components of the regulated environment in order to identify and, if necessary, automatically refine (or prompt refinement of) the system’s operations to attain a pre-specified goal. The article provides a descriptive analysis of algorithmic regulation, classifying these decision-making systems as either reactive or pre-emptive, and offers a taxonomy that identifies eight different forms of algorithmic regulation based on their configuration at each of the three stages of the cybernetic process: standard setting (adaptive vs fixed behavioural standards); information-gathering and monitoring (historic data vs predictions based on inferred data); and sanction and behavioural change (automatic execution vs recommender systems). It maps the contours of several emerging debates surrounding algorithmic regulation, drawing upon insights from regulatory governance studies, legal critiques, surveillance studies and critical data studies to highlight various concerns about the legitimacy of algorithmic regulation.
Five fears about mass predictive personalisation in an age of surveillance capitalism
This article argues that the digital revolution and the ‘free services’ business model, which Shoshana Zuboff claims has provoked a shift from industrial capitalism to ‘surveillance capitalism’, have given rise to a new form of production which I call ‘mass predictive personalisation’. Although it shares with industrial production the capacity to operate at scale (and, thus, on a ‘mass’ basis), mass predictive personalisation can be distinguished from mass production in at least five ways:
(1) it is primarily concerned with the provision of ‘services’ (although it increasingly extends to the personalisation of ‘goods’);
(2) rather than generating identical units of production, services are ‘personalised’ to each user, tailored to fit his or her individual tastes, interests, preferences, lifestyle and behaviours;
(3) service provision operates, by default, on a ‘predictive’ basis, applying advanced algorithmic profiling to ‘infer and predict’ each user’s service preferences with the aim of ‘anticipating’ user needs and ‘pushing’ personalised services to them accordingly;
(4) services are continually and automatically reconfigured in light of the feedback gleaned from monitoring the recipient’s response to the service, without requiring the recipient to provide active and intentional feedback concerning their interest in the service thereby provided; and
(5) although recipients of the service can be characterised as consumers, in the same way that those consuming factory-produced outputs under industrial capitalism may be understood as consumers, users are also concurrently providing raw materials and labour to the producer, composed of both the personal data that they generate via their digital interactions and the resulting ‘data exhaust’, the digital by-products incidentally generated from these interactions, which the service provider can use to train and improve its algorithms.
I argue that the move to mass predictive personalisation is likely to signify a major shift in our modes and culture of consumption, generating a number of potential dangers. Intended primarily as a provocation rather than a comprehensive critique, I identify five fears (or worries) that the rise of mass predictive personalisation may portend for our collective values and commitments:
(1) It expands opportunities for consumer exploitation
(2) It enables subtle but powerful manipulation of individuals at scale
(3) It systematically marginalizes and excludes ‘low-value’ individuals
(4) It perpetuates structural inequalities and exacerbates distributive injustice
(5) It fuels a culture of narcissism, prioritizing economic morality over social equality, thus eroding solidarity and community
Big Data and Personalised Price Discrimination in EU Competition Law
The networked digital revolution is ushering in a new data-driven age, powered by the engine of Big Data. We generate a massive volume of digital data in our everyday lives via our on-line interactions, which can now be tracked on a continuous and highly granular basis. The ability to track this data has radically disrupted the retail sector through, amongst other things, digital personalisation. However, this is no longer limited to shopping recommendations and advertising delivered to our smartphones, laptops and other mobile devices, but may extend to the prices at which goods and services are offered to customers in on-line environments, making it possible for two individuals to be offered exactly the same product, at precisely the same time, but at different prices, based on an algorithmic assessment of each shopper’s predicted willingness to pay. This is done by mining consumers’ digital footprints, using machine learning algorithms to enable digital retailers to predict the price that individual consumers (‘final end users’) are willing to pay for particular items, and thus to offer them different prices. This phenomenon, which we dub ‘algorithmic consumer price discrimination’ (ACPD), forms the focus of this paper. The practice of price discrimination, which we define as “… charging different customers or different classes of customers different prices for goods or services whose costs are the same or, conversely, charging a single price to customers for whom supply costs differ…”, is hardly a new phenomenon. Familiar forms include loyalty discounts, volume or multi-buy discounts, and status-based discounts for students, old-age pensioners and the unemployed. However, the technological capacities of Big Data substantially enhance the ability of digital retailers to engage in much more precise, targeted and dynamic forms of price discrimination than were previously possible.
There are many areas of law that might mount a response to rising public anxieties associated with these practices. Our paper examines ACPD from the perspective of competition law, evaluating it by reference to two contrasting normative values: economic efficiency, on the one hand, and fairness or equity on the other. Competition law provides a unique lens for interrogating the social implications of ACPD due to its distinctive character as a form of ‘economic law’ that is intended to protect and strengthen the process of rivalry in the marketplace. Although ‘traditional’ forms of price discrimination have long been the subject of economic analysis to evaluate whether they are economically efficient, algorithmic price discrimination has hitherto attracted relatively little critical analysis. As we demonstrate in Section 2, the incentives for firms to engage in ACPD often exist. We find that consumers are, in the aggregate, often better off economically when sellers can price discriminate in this way, thereby enhancing consumer surplus. However, this is not always the case. Furthermore, whether EU competition law is exclusively concerned with economic efficiency, or whether it provides scope for non-efficiency based considerations in the application of its provisions, is a matter of debate. Accordingly, in Section 3 we evaluate ACPD by reference to its fairness or justice (which we also call equity), understood in three distinct (and sometimes overlapping) ways: (a) the perceived fairness of pricing practices; (b) unfair dealing between online retailers and consumers (corrective justice); and (c) fairness as a requirement of distributive (or collective) justice. For each of these understandings of fairness, we identify points of convergence and conflict with economic evaluations of the effects of ACPD on aggregate consumer welfare. No Article 102 cases have directly considered the legality of ACPD.
Section 4 therefore interrogates existing Article 102 case law to ascertain whether ACPD would likely breach this provision. Because the current legal position is unclear, Section 5 draws together the efficiency and fairness evaluations by considering whether ACPD should be regarded as unlawful under EU competition law. We argue that where ACPD increases both consumer surplus and fairness, it should not breach Article 102. Conversely, where ACPD undermines both consumer welfare and fairness, then such practices should be unlawful under Article 102. However, because economic and fairness evaluations of ACPD may conflict in specific cases, Section 5 also considers whether, in the light of the underlying justifications for EU competition law and the EU’s foundational principles, ACPD should be considered a violation of Article 102 where it undermines justice or equity, even though it may enhance consumer surplus, and vice versa. We deal with the clashes between these goals in two ways: first, we offer a partial reconciliation between these goals, by supplementing conventional economic analysis with insights from behavioural economics, thus enabling some fairness considerations that affect consumer welfare to be taken into account. Secondly, we suggest that fairness should have a secondary role when Article 102 is applied to ACPD, in the form of a ‘defence’ to an allegation of abuse of market power. On our suggested account, ACPD which reduces consumer surplus may nonetheless avoid falling foul of Article 102 if it can be justified on grounds of fairness. Section 6 concludes, suggesting that EU competition law may have a valuable but limited role to play in redressing some of the adverse impacts of ACPD, primarily by focusing on the consumer welfare effects of ACPD, and in which considerations of fairness and justice play a relevant, but nonetheless subsidiary, role. 
Competition law cannot, and should not, seek to solve all the social problems associated with market behaviour, including data-driven forms of personalised pricing.