This article investigates how developing countries can ensure that algorithmic decision-making does not leave protected groups in their jurisdictions exposed to unlawful discrimination that would be almost impossible to prevent or prove. The article shows that, across jurisdictions, the longstanding methods used to prevent and prove discrimination struggle when confronted with algorithmic decision-making. It then argues that while some of the proposed solutions to this problem are promising, they cannot be successfully implemented in the vast majority of developing countries, because these countries lack the necessary institutional foundation. The key features of this institutional foundation are: (i) a well-rooted culture of transparency and of statistical analysis of the disparities faced by protected groups; (ii) vigilant non-government actors attentive to algorithmic decision-making; and (iii) a reasonably robust and proactive executive branch, or an independent office, to police discrimination. The article concludes that anti-discrimination advocates need to pay special attention to these three issues to ensure that the use of algorithms in developing countries is deliberate and avoids demonstrably negative and discriminatory outcomes.