The tyranny of the average

How the logic of the average, amplified by AI, marginalizes human differences and reinforces inequality.

24 mars 2026
Image courtesy of: iStock.com/XH4D

Statistics has arguably become the backbone of academia. It bestows value on research insights, showing they can be generalized beyond the study participants to the populations they represent, and sorts conjecture from evidence, demarcating scientific truth. Beyond research, it is used to balance grading, rank and reward students and faculty, and guide consequential decisions.

Statistics is also the engine of artificial intelligence. AI decision systems use statistics to find and match a given target. Statistics predicts the next word or phrase in large language models and optimizes content delivery according to factors such as attention and popularity. At its most basic, AI is a statistical replicator.

Yet, over my 45-year career in inclusive design and higher education, my research has been guided by people who are “out of distribution,” who are deemed to be the noise or outliers in population data sets: people and communities whose data is too heterogeneous to achieve statistical power, whose circumstances are too complex and unpredictable for conditions to be isolated. Collectively they are a large group, and with every human crisis their numbers grow.

Even before AI, I experienced how statistics harms marginalized minorities while it benefits the majority, contributing to corrosive disparity. For several decades I’ve been amassing data by asking people I encounter the open-ended question “what do you need to thrive and participate fully?” Because the resulting data set is so multi-faceted, the only way I can plot it is as a high-dimensional, multivariate scatterplot. I call it the “human starburst”: 80 per cent of the data points are clustered in the central 20 per cent of the space, and the remaining 20 per cent are distributed across the peripheral 80 per cent of the space. Out at the periphery, where the data points are increasingly far apart and more different from each other, you find the needs of disabled people and people experiencing intersectional barriers, the people who are struggling. Because of economies of scale, almost everything in our society works for the 80 per cent of needs clustered in the middle; it works less well as your needs diverge from the centre, and it doesn’t work for you at all if your needs are outliers.

This pattern permeates every societal domain, guiding our choices about what is prioritized and valued, including in markets, services, education, employment and media. It also applies to what we determine to be scientific truth about a population. Any statistically based finding is accurate for the average person, increasingly inaccurate as you move away from the average, and simply wrong if you sit at the edges of the data plot.

Applied to humans, statistics is grounded on the assumption of the average person, a concept that traces back to the theories of the nineteenth-century French mathematician Adolphe Quetelet. Quetelet, whose ideas were later used to justify eugenics, asserted: “If an individual at any given epoch of society possessed all the qualities of the Average Man he would represent all that is great, good, or beautiful.” And conversely: “Everything differing from the Average Man’s proportions and condition, would constitute deformity and disease.”

AI amplifies, accelerates and automates this existing prioritization. In education, as elsewhere, it is poised to make everything worse for people who are already struggling and everything better for people who are already doing well.

We see AI in our learning management systems, admission systems, proctoring tools, plagiarism detectors and productivity tools. In their present form, these tools inexorably discourage and eliminate difference from optimal patterns, and with it diversity. Instructional tutors reshape divergent learners to match statistically determined optima; proctoring systems flag divergent behaviour as suspicious; students are offered statistically predicted content; admissions departments are assisted in selecting students who match prior success patterns; student services are fed responses that serve the average student; AI hiring tools filter out candidates who differ from past optimal employees; and productivity monitors discourage and punish deviation from an optimal pattern, even when staff are serving students with anomalous needs. An education system infused with current AI tools will perniciously mechanize the eugenicist’s ideals.

Protection systems that monitor and certify ethical AI also rely on statistics. Risks and harms to people who are outliers or marginalized minorities are deemed to be statistically insignificant or merely anecdotal.

The magnifying mirror of AI offers an opportunity to reflect on our conventions and assumptions to rebalance the pervasive inequities. At the Inclusive Design Research Centre, we work with the disability community and other partners to identify and address key accessibility challenges. Humans still control AI. We can design AI to value difference, inverting the algorithms from data exploitation to data exploration, to find the missing perspectives, for example, in admissions and hiring. We can adjust our metrics to value the human edges where we find the early warning signs of crisis to come, the greatest diversity, and the most generative ideas for truly innovative change. 
