Governments around the world “are increasingly turning to algorithms to automate or support decision-making in public services”. We can go even further and say that algorithms are “eating” the negotiating ground of any present-day social contract, however defined, and this trend will only accelerate in the foreseeable future.
Algorithms will soon be part of any policy implementation, or, for that matter, of any model of society. They will define how people and organizations interact with each other and with non-human entities.
The adoption of algorithms on an ever larger scale is justified by the pursuit of Weberian efficiency (a better allocation of public resources), by a desire to increase transparency and impartiality by removing the bias inherent in human decisions (a tricky point, as regulators are arguably defending the “human in the loop” as a stronghold of digital rights), or simply by the sheer volume of available data and the complexity of the decision-making process.
However, in the less-than-ideal world we live in, algorithms can also harm transparency when they are treated as a black box or an oracle. From a broader perspective, algorithmic accountability depends on solving problems in theoretical computer science and mathematics which, in their general form, may prove out of reach for many years to come. In practice, though, it is rarely the case that a specific product cannot be given an accountability “safety net” without going down the rabbit hole of explainable artificial intelligence. This is a basic fact of life that policymakers need to be aware of.
And this study by the Ada Lovelace Institute, the AI Now Institute and the Open Government Partnership is a great primer for understanding the first “wave” of algorithmic accountability from a policymaking point of view.