Yet, reality calls into question the adequacy of this pro-transparency argument. For example, individuals have a right to review and correct credit records (as well as many other personal data sets, such as health records), yet very few do so (Hunt 2005). The public (and the media) usually shies away from the close analysis of the technical mechanisms of algorithmic analysis that such disclosures might require (Lenard and Rubin 2013). Even if transparency somewhat improved the accuracy of algorithmic processes, the aggregated costs of facilitating disclosure (and the losses that mount as a result of public scrutiny) render it costly. Once we acknowledge such factors, transparency does not appear to substantially enhance social welfare.
Indeed, the algorithmic credit scoring system strives to predict future impermissible behavior (such as defaulting on loans) while relying upon a set of behavioral proxies. If transparency allows identification of behavioral indicators of credit risk, individuals will try to avoid being linked to these behaviors and indicators. Yet, the overall negative outcome of individual behavior need not change. In other words, with full transparency, monitored individuals will sidestep proxies even while still engaging in risk-generating behavior. For instance, they might refrain from using their credit cards at discount stores (a possible negative proxy) but continue to spend in general. Additional study must follow to establish whether this problematic outcome is inevitable or might be limited through the use of broad or ever-changing proxies. Nonetheless, this discussion emphasizes that transparency could have a substantial cost, lead to the failure of accurate predictors, and thus decrease welfare....(122-123)
Yet transparency, or disclosure-related solutions, might prove insufficient and amount to a mere political compromise.... Indeed, the nontransparent nature of the algorithmic processes need not be blamed for generating these forms of unfair outcomes. Other measures might mitigate this concern and should be considered, such as prohibiting the use of aggressive and seductive marketing schemes. In the context of consumer credit rating, limits on the aggressive marketing of problematic financial instruments, such as those including balloon rates, could be implemented. In sum, this "unfairness" issue is serious but might be directly addressed through transparency or other measures. (124)
Yet, counter to the previous comment, transparency could also exacerbate this unfairness-based concern.... transparency works both ways: the public gains more information but, as a result, so do special interest groups. Transparency allows special interest groups to act quickly and influence decisions—actions that often bring about unfair outcomes to weaker population segments. For this reason, budgetary discussions are held in secret and only disclosed after matters are concluded (Vermeule and Garrett 2006).
As an example, consider the prospect of fully transparent (and automated) credit scoring systems. With these in place, special interest groups could quickly move into action and try to influence the process so that specific factors will not be considered a problematic proxy when formulating the credit score (e.g., lobbying by discount store owners to remove purchases at these stores from the list of negative factors). Similarly, groups could lobby to include membership in specific associations as a signal of creditworthiness (consider unions as well as the American Medical or Bar Association lobbying on behalf of their members so that membership in these groups indicates creditworthiness). Lobbying obviously increases unfair outcomes of the processes mentioned because it facilitates a biased decision-making process that systematically benefits stronger and well-organized social segments (and thus is unfair to weaker segments).
While this pro-opacity point might be argued with various degrees of success in almost all contexts involving the planning of public policy, it is worth emphasizing in the context of governing algorithms and automated processes. These processes promise detachment from political and economic tensions and influences. Yet, transparency can potentially undermine the promise of any form of insulation and subject these automated processes to pressures that commonly lead to unfair outcomes. (125-126)

Tal Zarsky (2016), "The Trouble with Algorithmic Decisions: An Analytic Road Map to Examine Efficiency and Fairness in Automated and Opaque Decision Making," Science, Technology, & Human Values 41(1)
Last week, I mentioned in passing an already influential (2016) review article by Mittelstadt et al. on the ethics of algorithms. I intended to return to its definition of algorithm. But in reading it, I noticed that Zarsky's essay quoted above plays a prominent role in it in three ways: it is cited a lot; it is cited as the authority for a number of prima facie controversial insights; and in such cases it is often the sole authority. Since I am relatively new to the literature,* the references to his piece stand out to my hungry eyes. Indeed, his fine essay combines the virtues of clarity, brevity, and significance. The controversial claims are reported in understated prose. (I cannot tell you what a relief that is in an academic culture characterized by inflationary language.)
Zarsky is a trained lawyer who, as the quoted passages reveal, has clearly been shaped, perhaps indirectly, by law and economics. Here I want to single out four of his commitments: first, transparency may not generate "social welfare" because (i) there may not be an actual demand for it and (ii) transparency may not be worth the cost. It is a bit unfortunate that Zarsky does not explore to what degree the cost of disclosure is intentionally kept high (by companies and, perhaps, regulators), which would explain the revealed preference of consumers.
Second, since information is valuable, transparency would favor special interests in the legislative process who can find ways to rent-seek. This is an important insight, but I worry Zarsky fails to note that this also impacts his own proposals. (I return to this below.)
Third, Zarsky assumes that the harms in different forms of opacity can offset each other so long as they are independent ("a variety of algorithmic decisions constantly impact upon our lives in unrelated context" (128)). It is not clear what the source of his confidence is. But let's leave that aside. For, more important, he does recognize that some "process outcomes might generate a disparate impact (i.e., implicating a racial minority to a greater degree than their representation in the general population)" (126). I return to this below.

Fourth, Zarsky treats such disparate impacts as something to be mitigated post facto, by regulators and third parties, rather than prevented in the design of the process itself.
This fourth point, with its post facto mitigation strategies, reminded me of the way, in economics, it used to be assumed that, for any policy, society could compensate the losers out of the efficiency gains. This separation of compensation from the policy itself makes the implementation of compensatory schemes vulnerable to shifting political coalitions and rent-seeking. (Some other time I return to this!) It is foreseeable that an emphasis on mitigation of moral risks and welfare losses in the application of algorithms (in the way conceived by Zarsky) has the same function as the way compensation operates in economics. Rather than solving the underlying problem, the solution is left to those who often have little interest in providing it.
Let's combine these four points with the existence of mutually reinforcing (causally intersectional) asymmetric vulnerabilities when applying algorithms. A focus on mitigation oriented toward regulators and third parties means ipso facto making the asymmetrically vulnerable hostage to special interests and the politically powerful. And while I am open to the argument that this is the best available solution, it is foreseeable that in some political contexts (especially where the political system is zero-sum oriented), rather than mitigating harms, this may well reinforce some of the possible harms. That is to say, we have a clear diagnosis of why we need an ethics of algorithmic mediation.
Rather than focus exclusively on mitigation, then, I advocate that the engineering, design, and implementation of algorithmic mediation software presuppose asymmetric harm prevention in their very success conditions. Or anyway, I hope you will join forces!