Challenges around the identification and mitigation of racial, socioeconomic, gender or other disparities are central to the new subfield of machine learning studying ‘algorithmic fairness’. Bias can enter at every stage of data and algorithm use, from the initial decisions to apply predictive software to a problem and to choose which variables to predict, through data collection, to algorithm design. Whether a method is fair, in the sense that humans would agree that the decisions it makes are fair, often becomes apparent only after the fact, when patterns of discrimination reappear.
Mass atrocities do not occur in a vacuum. They are enabled by the present normalization of a lengthy prior history, a process that the philosopher of mass killing Lynne Tirrell labels the ‘social embeddedness condition’.