Andrei Chikatilo was a Soviet serial killer who murdered at least 52 young women and children (he confessed to 56) between 1978 and his capture in 1990. The details are as bad as one might expect, and apparently the murders and mutilations were how he achieved sexual release. His killings seemed unpredictable to investigators at the time, and even in retrospect there appears to be no clear pattern.
Now, however, UCLA mathematicians Mikhail Simkin and Vwani Roychowdhury have published a paper where they see not only a pattern, but one that is meaningful to those who might want to stop other serial killers. In their paper, “Stochastic Modeling of a Serial Killer,” published a couple of days ago, Simkin and Roychowdhury discovered that the killings fit a pattern known as a “power law distribution.” One of many kinds of statistical distribution (the bell curve being another), power law distributions are often found for out-of-the-ordinary events like earthquakes, great wealth, website popularity and the like.
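For the curious, here's a rough sense of what that means in practice. This little sketch isn't from the paper; it just contrasts a bell curve with a heavy-tailed power law to show why the latter keeps throwing out freakishly large values:

```python
# Not from the paper: a contrast between a bell curve and a power law,
# to show why power laws keep producing "out of the ordinary" values.
import numpy as np

rng = np.random.default_rng(0)

bell = rng.normal(loc=10, scale=2, size=100_000)       # ordinary bell-curve data
heavy = (rng.pareto(a=1.5, size=100_000) + 1) * 10     # power-law (Pareto) data; shape picked only for illustration

for name, data in (("bell curve", bell), ("power law", heavy)):
    print(f"{name:10s}  median = {np.median(data):8.1f}   max = {data.max():12.1f}")

# The bell curve's biggest value sits a few standard deviations above the median;
# the power law routinely produces values hundreds of times larger.
```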
First, they looked at a timeline of his killings. They saw apparently random periods of inactivity. Each time Chikatilo started killing again, however, the next murder would come soon after. And the one after that even sooner. And so on and so on until the next period of no killing.
The study doesn't account for the reasons behind two of the longer pauses (Chikatilo's first arrest and detention on suspicion of being the killer, and the period when the media started reporting on the investigation), but the reasons aren't important. What's important is being able to make some kind of sense out of the seemingly random events.
What they noticed was that, when the distribution of time intervals between the murders was plotted on a log-log scale, it came out as almost a straight line, indicating that a power law might be at work here. What's more, they noticed that the fitted exponent of 1.4 was pretty darn close to the 1.5 found for the power law governing epileptic seizures. What if (they wondered) the killings fit a neurological pattern? What if, like epileptic seizures, psychotic events like these killings came about when an unusually large number of neurons in the brain started firing together?
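To make that concrete, here's the kind of check being described, sketched with made-up stand-in data rather than the real dates: bin the gaps between killings, plot the distribution on log-log axes, and read the power-law exponent off the slope of the fitted line.

```python
# A sketch of the log-log check: the data here are synthetic stand-ins,
# not Chikatilo's actual dates.
import numpy as np

rng = np.random.default_rng(1)
intervals = rng.pareto(a=0.4, size=1000) + 1          # fake inter-murder gaps, in days

# Histogram on logarithmically spaced bins, then fit a straight line in log-log space.
bins = np.logspace(0, np.log10(intervals.max()), 20)
counts, edges = np.histogram(intervals, bins=bins, density=True)
centers = np.sqrt(edges[:-1] * edges[1:])             # geometric bin centers
mask = counts > 0

slope, intercept = np.polyfit(np.log10(centers[mask]), np.log10(counts[mask]), 1)
print(f"fitted power-law exponent ~ {-slope:.2f}")    # the paper reports roughly 1.4 for the real data
```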
So they built a model from some givens about how neurons work, patterned on how epilepsy works. They made the model a little more realistic: seizures come unbidden when the conditions are met, but killers probably need some time to plan once their brain is ready for the next attack. Then they ran a simulation.
The simulated probabilities for the length of time between murders tracked the real-life data almost perfectly.
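For flavor, here's a toy version of that kind of simulation. The thresholds, probabilities, and reset rule below are my own guesses, not the paper's actual model: a random walk stands in for the number of firing neurons, crossing a threshold makes a murder possible, a per-day chance of acting stands in for planning time, and each killing knocks the excitation back down.

```python
# A rough sketch in the spirit of the model described above; all numbers are invented.
import numpy as np

rng = np.random.default_rng(2)

THRESHOLD = 100        # excitation level needed before an attack becomes possible
CALM_DOWN = 5          # how far a killing knocks the level back below threshold
ACT_PROB = 0.2         # chance per day of acting once over threshold (planning time)
DAYS = 12 * 365        # roughly the span of the real killings

level = THRESHOLD - CALM_DOWN
murder_days = []
for day in range(DAYS):
    level += rng.choice((-1, 1))              # unbiased random-walk step in "neural excitation"
    if level >= THRESHOLD and rng.random() < ACT_PROB:
        murder_days.append(day)
        level = THRESHOLD - CALM_DOWN         # the killing releases the built-up excitation

gaps = np.diff(murder_days)
print(f"{len(murder_days)} simulated murders over {DAYS} days")
if len(gaps):
    print(f"shortest gap: {gaps.min()} days, longest gap: {gaps.max()} days")
```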
In other words, if you know when the last murder took place, you can calculate the probability that another killing will happen today. And the more time has passed since the last one, the less likely another will happen.
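Here's a toy illustration of that calculation, with numbers I picked for the example rather than the paper's actual fit: assume the gaps between murders follow a Pareto-style power law, and compute the chance of a murder in the next day given how long it's been since the last one.

```python
# A small sketch with toy numbers, not the paper's fitted curve.
ALPHA = 1.4     # illustrative tail exponent for this example
T_MIN = 1.0     # shortest possible gap, in days

def prob_next_day(t: float) -> float:
    """P(murder in [t, t+1] | no murder in the first t days) for a Pareto-style tail."""
    survival = lambda x: (T_MIN / x) ** ALPHA if x > T_MIN else 1.0
    return (survival(t) - survival(t + 1)) / survival(t)

for days_since in (2, 10, 30, 100, 365):
    print(f"{days_since:4d} days since last murder -> {prob_next_day(days_since):.1%} chance today")

# The probabilities shrink as the gap grows: the more time has passed since
# the last murder, the less likely another one happens today.
```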
-=-=-=-=-
Fascinating stuff, but so what? The so what is that statistical analysis has become a big part of modern crimefighting. CompStat has gone from an attempt to map where crime was happening in early-1990s NYC to a tool used by police forces across the country to decide where to post their officers tomorrow. Homeland Security has a new system it calls FAST (Future Attribute Screening Technology), claimed to be 70% accurate in the lab, that is said to calculate the probability that a given individual is planning to commit a crime. “Predictive policing” has gone from science fiction to routine in the blink of an eye.
An appreciation of what statistics can — and cannot — do is becoming a big factor in law enforcement. Stats can’t tell you who the perp is, but they’re getting better and better at figuring out where and when the next crime might happen. Police departments that react responsibly, by focusing surveillance and manpower on those points where they are most likely to do some good, have a better chance of reducing crime rates with a more efficient use of their resources.
This latest study provides yet another tool, perhaps, for better and more accurate use of statistics by law enforcement. Catching a serial killer by focusing resources based on when and where he’s likely to strike next is a hell of a lot better than relying on the junk science of behavioral profiling.
Of course there’s always the risk that the numbers will be misjudged, that the models will be faulty, that the probabilities will be turned into junk science justifications for injustice. That’s a risk any time law enforcement meets math & science.
But when the numbers aren’t used to point the finger of guilt at a particular person, but rather as a guide to help catch whoever it might be (or prevent him from striking again), then it’s not a bad thing. If it helps law enforcement protect the rest of us, without violating our rights or punishing the wrong people, then hooray for numbers.
On further reflection, the neuron-firing explanation seems to be an unnecessary complication. A simpler explanation would be that each successful act emboldened him, so he was less cautious and took less time before doing it again. Presumably, the pauses were caused by events that spooked him, and the bigger the scare, the longer the pause. And both times he got caught came during periods of accelerating activity and reduced caution, so getting caught seems to have been a foreseeable result.
That doesn't mean a power law really applies here, the way it does in so many other cases, but it's a likelier and simpler explanation of the pattern. Though sadly it remains an untestable hypothesis, as the guy was executed long ago.