Over at Edge, in a short video, we get an intriguing look at criminal justice from the perspective of neurological science.
Put all this together, as you can see here, and we discover little areas that are brighter than others. And this is all now easily done, as everyone knows, in brain imaging labs. The specificity of actually combining the centers (where information gets processed) with the actual wiring to those centers has been a very recent development, such that it can be done in humans in vivo, which is to say, in your normal college sophomore. We can actually locate their brain networks, their paths: whether they have a certain kind of connectivity, whether they don’t, and whether there may be an abnormality in them, which leads to some kind of behavioral clinical syndrome.
In terms of the Neuroscience and Justice Program, all this leads to the fact that that’s the defendant. And how is neuroscience supposed to pull this stuff together and speak to whether someone is less culpable because of a brain state?
Then you say, well, okay, fine. But then you go a little deeper and you realize, well, this brain is a very complicated thing. It works on many layers from molecules up to the cerebral cortex; it works on different time scales; it’s processing with high frequency information, low frequency information. All of this is, in fact, then changing on a background of aging and development: The brain is constantly changing.
How do you tie this together to capture what someone’s brain state might be at a particular time when a criminal act was performed? And I should have said it more clearly — most of this project was carried out asking, “Is there going to be neuroscience evidence that’s going to make various criminal defendants less culpable for their crime?”
Well, probably not. Even if this were to become reality — which it isn’t, yet — the whole focus of mens rea culpability is what the defendant’s mental state was at the time he committed the act. Even if police officers were equipped with infallible handheld brain scanners, so they could get a mental reading at the moment of arrest (and oh, the fascinating Fourth Amendment issues there!), the moment of the crime is past. The reading is not evidence of what the brain was doing five days ago, or even five minutes ago.
And at any rate, it’s not usable science yet. So why bother thinking about it now?
To his credit, the speaker, neuroscientist Michael Gazzaniga, admits as much.
Now, the practicing lawyer asks “is this thing useful, can we use it tomorrow? Can we use it the next day? Can’t? Out. Next problem.” So, after four years of this I realize, look, the fact of the matter is that from a scientific point of view, the use of sophisticated neuroscientific information in the courtroom is problematic at the present.
But then he says “it will be used in powerful ways in our lifetime.” What powerful ways? Mainly the ability to show that someone simply couldn’t have thought a certain way, because his brain doesn’t work that way. This defendant shouldn’t be punished like a normal adult, because his brain isn’t wired like a normal adult, and he could not have had the same mens rea as one would otherwise expect under the circumstances. Research is showing that children and teenagers are wired differently, as well, which could affect juvenile justice.
That’s useful for the defense. It could be a valuable tool in raising defenses that mens rea was lacking because it could not have existed. It’s of little use to prosecutors, though, beyond showing that the requisite mental state was as theoretically possible for the defendant as for any normal human, which is presumed of everyone anyway. So yay for science.
Another way it’s expected to be useful, however, is in preventing future crimes: stopping the next mass murderer before he actually starts shooting kids on campus and whatnot. Of course, we immediately get creeped out the second anyone starts talking like that. Restraining people, punishing them, for stuff they might conceivably do in the future but haven’t done yet goes against all notions of justice in our culture.
He tries to soften this concern by focusing on recidivists, and by not specifying any particular method of dealing with them. He rolls out some impressive data: 25% of the people in prison are “measured psychopaths,” and they account for 600,000 of the country’s 800,000 psychopaths as presently measured. The actuarial data is good enough to predict, with 70% accuracy, whether a given inmate will reoffend (and get caught) after getting out.
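The arithmetic behind those figures can be sanity-checked with a quick sketch. The quoted numbers come from the lecture; the implied total prison population is derived, not stated, and the per-1,000 misprediction figure is an illustrative reading of what 70% accuracy means:

```python
# Figures quoted in the lecture
psychopath_share_of_prison = 0.25      # 25% of inmates are "measured psychopaths"
psychopaths_in_prison = 600_000
psychopaths_total = 800_000
prediction_accuracy = 0.70             # chance a reoffense prediction is right

# Implied total prison population (derived, not stated in the lecture)
implied_prison_population = psychopaths_in_prison / psychopath_share_of_prison
print(f"Implied prison population: {implied_prison_population:,.0f}")  # 2,400,000

# Share of all measured psychopaths who are currently incarcerated
print(f"Psychopaths behind bars: {psychopaths_in_prison / psychopaths_total:.0%}")  # 75%

# What 70% accuracy means for a hypothetical batch of 1,000 released inmates:
# roughly 300 of the 1,000 predictions would simply be wrong.
releases = 1_000
print(f"Mispredicted releases per 1,000: {releases * (1 - prediction_accuracy):.0f}")  # 300
```

The implied 2.4 million figure is at least in the right ballpark for the U.S. prison population of the era, which suggests the quoted percentages hang together.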
70% accuracy is nothing to sneeze at. But even 100% accuracy isn’t worth much. Even if you could predict with certainty that Inmate Jones will reoffend, you still can’t punish him until he actually does it. You can deny him parole, but he’ll still get out when his present term is up. You can make parole, or post-release supervision, more diligent and restrictive, but that will also expire. And there’s nothing more you can do to Jones until he actually reoffends.
For this tool to work as a useful preventer of future acts, the government would have to take away people’s liberties before they’ve done anything wrong. And our culture isn’t willing to let that happen. It would take a drastic change in the perception of individual rights against government action, in this country, before such protective measures would be countenanced. And that’s not likely to happen in this lifetime or many more. Our society is simply willing to accept the small risk of future harm that comes with our individual liberties.
The lecture ends with a nifty digression on an issue which has long plagued philosophers (who never seem to actually conduct experiments to test their hypotheses, silly geese), and which neuroscience (yay, scientific method) is now beginning to answer: whether free will, as commonly understood, even exists. The science is pointing to “no.” The mind is more of an automaton than one might think. That said, however, Gazzaniga says that responsibility is a cultural construct, and those who violate cultural rules must still be held accountable so that society can function.

There is much in this, fodder perhaps for several thousand pages of treatises and monographs, not to mention countless hours of late-night dorm debates. But if the answer really is “no, there is no such thing as free will,” then where does that leave mens rea? How can any act be intentional, if everything is essentially predestined?

Might as well revert to something like “everything that happens is according to God’s plan,” and leave all punishment to the afterlife. Or reinstate simple removal from society as the main purpose of punishment: lock ’em up and throw away the key, banish them, or save the expense and just hang ’em. Their mental state is irrelevant; they’re a threat to the rest of us, and that’s all that matters.
Thank goodness the idea of free will is so entrenched, then. No matter what science may learn, we seriously doubt that people will easily disbelieve their own perceived independence of thought and deed. So that particular hell is probably best left to science fiction.