The Wiki Man

Why data doesn’t always reveal the truth

18 August 2018

9:00 AM

In late 1973 the graduate admissions department at UC Berkeley discovered that for the forthcoming year it had awarded places to 44 per cent of male applicants and only 35 per cent of women. Concerned about possible lawsuits or bad publicity, they approached Peter Bickel, a professor of statistics, to analyse the data in more detail.

Looking for patterns of prejudice, Bickel broke down the data by university department. He was suddenly presented with a contradictory picture. Department data suggested Berkeley was mostly even-handed in admissions. Stranger still — though a minority of departments exhibited some gender bias, it was more likely to be a preference towards female candidates than the other way about.

Eh? How so? I mean a 44:35 ratio seems clear-cut, no? In total, of 8,442 men who had applied, 3,741 were given a place; for women those figures were respectively 4,321 and a meagre 1,512. Prejudice, surely?

Yet among the largest six faculties, the results were as follows:

Dept    Men (applied / admitted)    Women (applied / admitted)
A       825 / 62%                   108 / 82%
B       560 / 63%                    25 / 68%
C       325 / 37%                   593 / 34%
D       417 / 33%                   375 / 35%
E       191 / 28%                   393 / 24%
F       373 / 6%                    341 / 7%

If there is anything to investigate here, it is not prejudice against women, but a reasonable suspicion that the head of Department A might be a bit of a lothario.

But which figures should we believe, and why are the two pictures so different? Well, for one thing, it makes more sense to look for bias at the level of individual departments — since that is where admission decisions are made. Next, if we want to understand what is skewing the overall result, we need to look not only at ratios but at overall volumes. It turned out that applications from women were concentrated in those departments where the proportion admitted was low (courses in the humanities typically turn away many more applicants than courses in pure mathematics or engineering). Hence the aggregate figure of 44 per cent to 35 per cent probably arose more from course preference than prejudice.
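The mechanism is easy to verify for yourself. The sketch below uses deliberately simplified, hypothetical numbers (not Berkeley's actual data): two departments, each of which admits women at a slightly higher rate than men, yet whose combined figures favour men, purely because women apply in greater numbers to the more selective department.

```python
# Hypothetical two-department illustration of Simpson's Paradox.
# In EACH department, women are admitted at a higher rate than men;
# in AGGREGATE, men appear heavily favoured.

departments = {
    # dept: (men_applied, men_admitted, women_applied, women_admitted)
    "Engineering": (900, 540, 100, 62),   # lenient: admits ~60%
    "Humanities":  (100, 10, 900, 99),    # selective: admits ~10%
}

for dept, (ma, mn, wa, wn) in departments.items():
    print(f"{dept}: men {mn/ma:.0%}, women {wn/wa:.0%}")
    # Engineering: men 60%, women 62% -- women ahead
    # Humanities:  men 10%, women 11% -- women ahead

# Aggregate the same numbers and the picture reverses.
men_app = sum(v[0] for v in departments.values())
men_adm = sum(v[1] for v in departments.values())
wom_app = sum(v[2] for v in departments.values())
wom_adm = sum(v[3] for v in departments.values())
print(f"Overall: men {men_adm/men_app:.0%}, women {wom_adm/wom_app:.0%}")
# Overall: men 55%, women 16%
```

The reversal comes entirely from the weighting: the aggregate rate is an average of the departmental rates weighted by where each group chose to apply, and women's applications here are weighted towards the 10-per-cent department.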

Reversals of this kind (it is known as Simpson’s Paradox), where the same data can reveal contradictory information depending on how it is presented, are not uncommon. Berkeley was lucky. It had a professor of statistics on hand to give advice. The problem today is that there are now far more statistical findings than there are people competent to analyse them. For every capable statistician, there are ten people with an interest in wilfully misrepresenting information for headline purposes, or who simply love the illusory certainty a single statistic offers them.

This is not to say that Berkeley had no work to do. Department B clearly needed to ask why so few women were interested in what it taught. But this demands a very different intervention from trying to tackle the problem at the aggregate level — thereby assuming it was an admissions problem.

We have a modern, faux-scientific assumption that all information is good — and amassing more of it makes it better. Yet averages and aggregates often conceal more than they reveal. Business and government decisions are now taken by people high up on a chain of information, who by definition only have access to information in aggregate form, with all the salient discrepancies made invisible by the act of combining it. To me this seems to be a hidden cost of the IT revolution which no one has yet sought to understand.
