Let us note that the formulation of process reliabilism remains largely unmodified by computational reliabilism, as it is evidenced in (CR). An important – and rather obvious – difference, however, is that process reliabilism is no longer a general account for any p and m, but rather specified for computational undertakings. In this respect, computational reliabilism takes that p is a truth-valued proposition related to the results of a computer simulation. These could be particular, such as ‘the results show that republicans have won,’ ‘the results suggest an increase of temperature in the Arctic as predicted by theory’, and ‘the results are consistent with experimental results,’ among others. Alternatively, they could also be general such as ‘the results are correct of the target system’, ‘the results are valid with respect to the researcher’s corpus of knowledge’, and ‘the results are accurate for their intended use.’ Naturally, the reliable process m is identified with the computer simulation (see Sect. 3.2 for further differences with process reliabilism).
We can now assimilate Goldman’s process reliabilism into our analysis of computational reliabilism: researchers are justified in believing the results of their simulations when there is a reliable process (i.e., the computer simulation) that yields, most of the time, trustworthy results. More formally, the probability that the next set of results of a reliable computer simulation is trustworthy is greater than the probability that the next set of results is trustworthy given that the first set was produced by an unreliable process by mere luck (Durán 2014).--Durán, Juan M., and Nico Formanek (2018), "Grounds for trust: Essential epistemic opacity and computational reliabilism," Minds and Machines 28.4, 654. [HT Federica Russo]*
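(One way to render that probabilistic condition more explicitly; the notation here is mine, not Durán & Formanek's: write T(r_i) for 'the i-th set of results is trustworthy' and R(S) for 'S is a reliable process'.)

```latex
% A hedged rendering of the quoted condition, not a quotation: the next results
% of a reliable simulation are more likely to be trustworthy than the next
% results of an unreliable process whose first results were trustworthy by mere luck.
P\big(T(r_{2}) \mid R(S)\big) \;>\; P\big(T(r_{2}) \mid \lnot R(S) \wedge T(r_{1})\big)
```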
In what follows I take Durán & Formanek's computational reliabilism (about simulations) as a baseline. Their account draws on some of the best recent work in epistemology and philosophy of science, and it offers a compelling framework for thinking about the strictly epistemic grounds for trust in computer simulations. In what follows I use 'algorithmic mediation' where they use 'simulations.'1
But since Heather Douglas's pathbreaking work, we also know that these epistemic grounds are conceived rather narrowly. So, here I explore how their approach might be enhanced if one takes seriously (to use the philosophy-of-science jargon) inductive risk in AI. In particular, I reflect on how one should conceptualize internalizing social consequences into the design and programming of the algorithms that figure in contemporary AI,+ and especially in discussions over machine learning algorithms.
In this post I sketch three areas in which this baseline must be enhanced once one takes social consequences seriously as bearing on the grounds of trust; the latter two are specific to machine learning AI. The idea is to work toward Ethical Computational (process) Reliabilism (ECR).
First, when one looks at the application of a reliable process outside the lab, in a social context, one is not just interested in its reliability for a specific task. One would also like to know what the effects of failure are. In particular, one would like to know something about the distribution of possible or likely harms across different kinds of populations, especially if these populations have different kinds of vulnerabilities (and appetites for risk).
So, for example, an artifact may function reliably as designed, with industry-beating low failure rates; yet when it does break, however rarely, it may still be especially dangerous for kids. Or, some safety gear works swimmingly on average male subjects, less so on average female subjects. Some medicines interact badly with pre-existing conditions in subsets of the population. Now, in many cases the harms that follow from such selective or asymmetric vulnerabilities can be internalized in the design and testing process (and often this is mandated legally or by in-house risk assessment).
Selective or asymmetric vulnerabilities become socially salient when they track politically or morally salient demographics. In the previous paragraph I mentioned children and sex differences. A lot of public discussion of machine learning in AI has (quite naturally) focused on its reinforcement of racial and economic injustice(s).
How to characterize what counts as an asymmetric vulnerability is not so easy, especially because many of the ethically or politically most salient harms may only become asymmetric due to causally intersectional effects. In addition, some asymmetric harms may be due to the fact that a truthful p reinforces or entrenches a socially bad status quo. For many purposes one may wish to distinguish among such selective vulnerabilities, but here I lump them together as an especially important set of unfair outcomes. So, to begin formulating a possible framework:
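(What follows is only an illustrative sketch, adapting Durán & Formanek's (CR); the notation and predicates are stand-ins of my own, not a final formulation.)

```latex
% (ECR), first pass: justified belief requires a reliable algorithmic process
% whose ordinary use in its assigned task does not generate asymmetric vulnerabilities.
(\mathrm{ECR})\quad J_W(p) \;\iff\; \exists M\,\big[\,\mathrm{Produces}(M,p)\;\wedge\;\mathrm{Rel}(M)\;\wedge\;\lnot\mathrm{AV}(M)\,\big]
```

Here J_W(p) reads 'researcher W is justified in believing that the results p are trustworthy', Rel(M) that the algorithmic process M yields, most of the time, trustworthy results, and AV(M) that the ordinary use of M in its assigned task generates asymmetric vulnerabilities.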
Here 'reliable' already presupposes an ordinary use of the reliable algorithm in an assigned task. One thing that follows from this is that in order to generate an ECR, its sources -- again following Durán & Formanek: (1) verification and validation methods; (2) robustness analysis; (3) feedback from trial runs and (prior) implementations; and (4) expert knowledge -- must also be made to seek out and track asymmetric vulnerabilities. While this clearly makes initial R&D more expensive, it may reduce litigation costs and social harms (including withdrawal of the product) downstream.
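To give a toy illustration of what it could mean for validation to seek out and track asymmetric vulnerabilities (a sketch only: the grouping label, threshold, and data below are hypothetical placeholders, not a proposal for a particular fairness metric), one might disaggregate an error measure by salient subgroups and flag large disparities:

```python
from collections import defaultdict

def failure_rates_by_group(records, group_key="group"):
    """Disaggregate failure rates by a (hypothetical) demographic or context label."""
    totals, failures = defaultdict(int), defaultdict(int)
    for r in records:
        g = r[group_key]
        totals[g] += 1
        failures[g] += int(not r["success"])
    return {g: failures[g] / totals[g] for g in totals}

def flag_asymmetric_vulnerabilities(records, max_ratio=2.0):
    """Flag groups whose failure rate is at least max_ratio times the overall rate."""
    overall = sum(int(not r["success"]) for r in records) / len(records)
    rates = failure_rates_by_group(records)
    return {g: rate for g, rate in rates.items()
            if overall > 0 and rate / overall >= max_ratio}

# A hypothetical validation log: each trial records its context of use and outcome.
log = (
    [{"group": "adults", "success": True}] * 4
    + [{"group": "children", "success": False}] * 2
    + [{"group": "children", "success": True}]
)
print(flag_asymmetric_vulnerabilities(log))  # {'children': 0.666...}
```

The point is not this particular metric, but that the sources of reliability (validation runs, robustness analysis, feedback from implementations) would have to log contexts of use at a grain fine enough for such asymmetries to show up at all.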
Second, algorithmic mediation may generate both unintended and unforeseeable outcomes. Here, too, there are many subtleties. Some unintended consequences may just be a matter of negligence, and these can simply be assimilated to (ECR): morally, legally, and politically one may be held accountable for them if there are harms in use. As it happens, because of concerns over opacity and traceability, algorithmic mediation does generate some special concerns about accountability, which are especially salient in contexts of informational asymmetries and asymmetric vulnerabilities (with players that range from huge transnational economic agents to dispersed individuals). I return to this before long. But it is morally and politically a very important issue.
Other consequences may be unforeseeable in detail, or their tokens unknown, even though the outcome pattern (or outcome type) may be quite predictable after a while. For example, algorithmic mediation has (i) made financial markets move at much higher speeds and (ii) increased the likelihood of mini and maxi flash crashes. The first (i) was entirely predictable (and desired), but the (evolution of) exact speed(s) and volume of market transactions may have been unknowable in advance. And that it would generate new kinds of financial transactions was also known, even if the exact strategies were not. By contrast, it is possible that (ii) was initially unexpected. But by now, while any given mini-crash may be surprising or unpredictable, that such crashes occur is foreseeable, and so they become a 'new normal.'
That is to say, unforeseeable tokens can occur within foreseeable outcome patterns/types. If an outcome pattern has possible tokens with asymmetric vulnerabilities, such patterns should, all things being equal, be avoided, and this ought to be internalized in (ECR). (Of course, sometimes one can compensate for downside risks, etc.) So, I propose the following modification:
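(Again, merely an illustrative sketch in the same made-up notation, now with FP(M) for 'the foreseeable outcome patterns/types of M's ordinary use contain tokens with asymmetric vulnerabilities'.)

```latex
% (ECR), second pass: the no-asymmetric-vulnerability clause now also covers
% foreseeable outcome patterns/types of the process's ordinary use.
(\mathrm{ECR}')\quad J_W(p) \;\iff\; \exists M\,\big[\,\mathrm{Produces}(M,p)\;\wedge\;\mathrm{Rel}(M)\;\wedge\;\lnot\mathrm{AV}(M)\;\wedge\;\lnot\mathrm{FP}(M)\,\big]
```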
Obviously, this leaves the prevention, accountability, and remedy of some unforeseeable asymmetric harm patterns outside (ECR). And that is for another occasion.
Third, one instance of an asymmetric vulnerability is the reinforcement of a bad status quo. For many it seems intuitive that p (which is true, after all) is ethically neutral, so the fact that p reinforces a bad status quo can elicit shrugs. But we should resist the shrug. Classic examples can be found (I learned this via Kristie Dotson) in the ways crime data are presented so as to stigmatize vulnerable demographics or to promote policies whose side-effects also harm vulnerable demographics asymmetrically. I return to such harms before long.
But a crucial feature of algorithmic mediation is (as Mittelstadt et al. (2016) note) that it can affect how our social reality is conceptualized and made actionable in ways that are utterly unexpected (including a reinforcement of a bad status quo). So, algorithmic mediation can generate consequences that are not just unintended, but also unforeseeable in principle because they are transformative (Mittelstadt et al. also use this terminology, going back to Floridi (2014)).**
Here, too, the fact that an algorithmic mediation is transformative may be intended and foreseeable. And it is possible that some of the higher-order outcome patterns, including the asymmetric vulnerabilities, can be predicted. That much is assimilable to (ECR). But some transformations are (ahh) transformative. And these require special treatment. To be continued.