2018 Newsletter 14: Knowledge Contracts I

In a previous essay,
I mentioned that we are shifting from scientific to cognitive accounts of knowledge. Interestingly enough, that shift is happening to our understanding of science itself, for the official norms of science aren't adequate to the challenges the disciplines face today.


Several scientific disciplines - most prominently psychology and the biomedical sciences - are suffering from a crisis of reproducibility. The crisis has many origins. An obvious one is outright cheating, i.e., scientists making up data where there's none to be found. While that happens more often than you'd think, a far more pernicious trend is data dredging or p-hacking, i.e., collecting data without a scientific hypothesis and then testing hypotheses against that data until one of them passes collective norms.

What’s the worry? Why not collect enough data now and worry about its value later? Isn’t that the new norm, whether we like it or not? Companies, governments, labs and individuals are all collecting data in the hope that it can be mined for gold later (BTW, I refuse to acknowledge data as a plural term).

But consider the following political argument for installing surveillance cameras across the city: that only criminals will be caught and law-abiding citizens have nothing to worry about. Is that really true? The problem is that law-abiding behaviors vastly outnumber law-breaking ones (and many of the latter are morally correct even if law-breaking), and therefore, while a criminal is more likely to leave a trail of suspicious behaviors, we can't work our way back from suspicious patterns to a conclusion of criminality. Let me show you why.

Suppose there’s a way of walking - let’s call it W - that is associated with criminality. Let’s say 75% of criminals walk that way and 5% of non-criminals walk that way. Let’s also assume that 5% of the population is criminal (I am assuming astronomical rates of criminality!). Finally, let’s assume that the city has invested in a computer vision system that identifies W with 100% accuracy. Now suppose that the computer vision system has detected a case of W. What’s the chance you’re seeing a criminal?

It's not that hard to calculate, but the outcome is surprising. Suppose there are 100,000 people in this town. 5% are criminals, so 5,000 criminals in total. 75% of them walk like W, so 3,750 W-walking criminals. Of the 95,000 non-criminals, 5% walk like W, so 4,750 walk like W. Assuming that the camera detects people at random, the probability that a detected W-walker is innocent is 4750/8500, i.e., about 56%. In other words, more than half the people being tagged suspicious are innocent.

Now imagine something worse: suppose there’s a list of 10 suspicious behaviors and an individual has a 5% chance of exhibiting any one of them. If these are independent behaviors, the probability that you exhibit at least one of them is 1 - (.95)^10, i.e., about 40%. In other words, despite being innocent you have a 40% chance of being labeled a criminal, because you’re going to meet one of the 10 criteria for enhanced suspicion. BTW, this is exactly what scientists do while p-hacking: they go on a fishing expedition, testing one hypothesis after another until one of them strikes gold, even though they have no idea what they are doing.
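As a sanity check, here's the same arithmetic as a short Python sketch, using exactly the town and percentages from the text:

```python
# The town from the text: 100,000 people, 5% criminals, 75% of criminals
# and 5% of non-criminals walk like W; the camera detects W perfectly.
population = 100_000
criminals = population * 5 // 100           # 5,000
innocents = population - criminals          # 95,000

w_criminals = criminals * 75 // 100         # 3,750 criminals who walk like W
w_innocents = innocents * 5 // 100          # 4,750 innocents who walk like W
w_total = w_criminals + w_innocents         # 8,500 W-walkers in total

p_innocent_given_w = w_innocents / w_total
print(f"P(innocent | walks like W) = {p_innocent_given_w:.1%}")  # ~55.9%

# Ten independent "suspicious behaviors", each with a 5% base rate:
p_flagged = 1 - 0.95 ** 10
print(f"P(innocent person trips at least one flag) = {p_flagged:.1%}")  # ~40.1%
```

The base rates do all the work here: because innocents outnumber criminals 19 to 1, even a behavior that's 15 times more common among criminals still tags mostly innocent people.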

That doesn’t sound right, does it? Especially if being tagged like this enters you into a police database that you can’t inspect, which in turn triggers other state behaviors, so that you are scrutinized when flying or buying property or getting a job or you name it. Isolated cases of knowledge failure are bad enough, but they are worse when those failures trigger other system-wide responses. Which brings me to:

Knowledge Contracts

A good friend of mine, who was also my office mate at the time, quit his PhD and the shared office one Monday. His reasons for doing so were simple, if ruthless: he said he either wanted to win a Nobel Prize or make a hundred million dollars. The Nobel was the first choice, but for his work to be a serious contender, his PhD would have to be in a field that could win someone a Nobel Prize. Therefore, a field that wouldn’t win anyone a Nobel was a useless field, and having sunk so many years into it, he couldn’t start from scratch in some other academic discipline. Unlike the 100 mill quest, where he could start from scratch. Bye bye.

My friend might be an outlier (he's well on his way to 100 M), but his attitude explains why scientists are so protective of their data and keep hacking at it until it confirms a hypothesis that will get them published in a prestigious journal. Jobs are scarce, tenured jobs even scarcer, and the pay is low until you get a tenure-track job. Hoarding data and hypotheses is a (seemingly?) rational response to scarce resources.

What can we do about it?

One obvious way to address it is to agree upon a new collective contract. I want to call it a social contract, but it’s better termed a knowledge contract. Here's a promissory note: a social contract serves as a definition of society, i.e., those who are bound by that contract. Knowledge contracts are bigger - they extend beyond society to all the earth's inhabitants. That discussion is for another day.

Scholarly work has plenty of knowledge contracts already. For example, scholars demand that arguments be backed by evidence, that hypotheses be falsifiable, and so on. As the reproducibility scandals show, these contracts aren’t enough. A falsifiable hypothesis is useless if other researchers can’t actually falsify it, because they don’t know how you collected your data or how many other hypotheses you rejected before landing on the one you are seeking to publish. Not only should this hypothesis be falsifiable, you should also leave a trail of the other hypotheses you have already falsified.

As the pursuit of knowledge becomes more complex, we can’t be satisfied with norms that regulate individual acts, whether those be individual experiments or individual hypotheses. We also have to pay attention to how the facts connect to each other:

☐ is my data exposed to the public?

☐ can everyone access it in a format readable by standard protocols?

☐ do I have a record of how many hypotheses I tested?

These norms can come into force only through collective action - and scientific inquiry will be stronger for it once we all agree that robust, reliable and replicable data, where it exists, is a common good.
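The last item on that checklist could be as simple as a machine-readable log. Here's a minimal sketch of what such a hypothesis registry might look like; all the field names and entries are hypothetical, not any real registry's schema:

```python
import json

# Hypothetical pre-registration log: every hypothesis is recorded *before*
# the data is analyzed, so reviewers can count the whole fishing expedition.
registry = []

def register(hypothesis, dataset, registered_on):
    entry = {
        "id": len(registry) + 1,
        "hypothesis": hypothesis,
        "dataset": dataset,
        "registered_on": registered_on,
        "result": None,  # filled in after the test, never before
    }
    registry.append(entry)
    return entry

register("Walking style W predicts criminality", "city-camera-2018", "2018-03-01")
register("W correlates with age, not criminality", "city-camera-2018", "2018-03-02")

print(json.dumps(registry, indent=2))
print(f"hypotheses tested so far: {len(registry)}")
```

The point isn't the format; it's that the count of attempted hypotheses becomes public, so a single "significant" result can be read against the size of the expedition that produced it.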

Back to the Real World

Enough about science already - why should you care? Because every one of the problems that affects science afflicts the larger world too. Bullshitting, fake news and “alternate facts” should worry us all.

Consider climate change. It has been confirmed by an overwhelming number of studies. Yet if I pick one counter-hypothesis (say, that there’s a secular, non-anthropogenic trend towards warming) and test it against the data of a thousand different studies, I will probably find one that bolsters my counter-hypothesis. I can then blow up that counter-example through the million media channels I control.
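The arithmetic behind this cherry-picking is the same as the p-hacking arithmetic. A sketch with illustrative numbers:

```python
# Illustrative assumption: any single study has just a 5% chance of
# spuriously appearing to support the counter-hypothesis.
p_single = 0.05
studies = 1000

p_at_least_one = 1 - (1 - p_single) ** studies
print(f"P(at least one 'supporting' study among {studies}) = {p_at_least_one:.10f}")
# Effectively 1: cherry-picking from enough studies always "succeeds".
```

Even if I lower the per-study fluke rate to a tenth of a percent, a thousand studies still give me better-than-even odds of finding my counter-example.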

Then there’s the rash of claims and counter-claims about fake news: how do I know which claim is true when neither side provides a trail of trust all the way back to the source? This is a real problem: when all news is seen as fake or compromised, those with the most money or power to propagate their interests are going to win, because they control more channels. At the same time, journalists and others have to protect their sources, who would be in danger if their identities were revealed.

How can we:

  1. Collect data/conduct interviews securely?

  2. Prove that the data was indeed collected appropriately?

  3. Have anyone verify, through an equally secure method, that the data was collected securely?

  4. And do all of the above without revealing who the source of the data was?

In answering these questions we will have to tour a brave new world of zero-knowledge proofs and decentralized ledgers such as blockchains, while keeping our focus on knowledge and ethics rather than technology.
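As a taste of that world, here's a minimal sketch of one building block, a hash commitment. It is far short of a full zero-knowledge proof, and the record contents are invented, but it shows the basic move: a journalist can publish a commitment to a transcript today and reveal it later, and the commitment alone leaks nothing about the source because a random nonce blinds the hash.

```python
import hashlib
import secrets

def commit(record: bytes) -> tuple[bytes, bytes]:
    """Publish the digest now; keep the nonce and record secret."""
    nonce = secrets.token_bytes(32)
    digest = hashlib.sha256(nonce + record).digest()
    return digest, nonce

def verify(digest: bytes, nonce: bytes, record: bytes) -> bool:
    """Anyone can later check that the revealed record matches the digest."""
    return hashlib.sha256(nonce + record).digest() == digest

record = b"interview transcript, source withheld"
digest, nonce = commit(record)

# Later: reveal the record and nonce so anyone can check the commitment.
assert verify(digest, nonce, record)
assert not verify(digest, nonce, b"a tampered transcript")
print("commitment verified; tampering detected")
```

Commit-reveal on its own still exposes the record at reveal time; the zero-knowledge machinery is what lets you prove properties of the record without ever revealing it. That discussion is for the sequel.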

To be continued.