AI risk in developing countries

Abstract

This article asks how developing countries can ensure that algorithmic decision-making does not leave protected groups in their jurisdictions exposed to unlawful discrimination that is almost impossible to prevent or prove. The article shows that, across jurisdictions, the longstanding methods used to prevent and prove discrimination struggle when confronted with algorithmic decision-making. It then argues that while some of the proposed solutions to this problem are promising, they cannot be successfully implemented in the vast majority of developing countries, because these countries lack the necessary foundation. The key pillars of this foundation are: (i) a well-rooted culture of transparency and of statistical analysis of the disparities faced by protected groups; (ii) vigilant non-government actors attentive to algorithmic decision-making; and (iii) reasonably robust and proactive policing of discrimination by the executive branch or an independent office. In response, the article argues that antidiscrimination advocates must pay special attention to these three issues to ensure that the blind and incautious use of algorithms does not become the norm in developing countries.