Researchers think that they can predict criminals using facial features—here’s why it won’t work

(via Motherboard)

At Shanghai Jiao Tong University, AI researchers claim to have built a system that can identify criminals nine times out of ten from their facial features alone.

In Wisconsin, the courts use a risk-assessment test that defendants are required to take. The result decides how dangerous they supposedly are to society and, from that, how long they should be imprisoned.

Wave goodbye to the sunny future with flying cars and a colonised Mars, and say hello to our future of pre-crime and Big Brother (more like Big Bother, but whatever). The fact is, AI isn’t just out to take your job anymore; now it’s out to get you.

Why governments and researchers think an AI-run criminal justice system is a good idea, after watching movie after movie in which it fails, beats me. Maybe they think that they, with all their brightness and intelligence, will create the perfect system.

Well, you don’t need to wait and see whether it’ll fail, because today I’ll tell you why it’s bound to.

First of all, it’s subjective. The system used in Wisconsin shows that well enough. Basically, the assessment involves asking the defendant questions, and based on their responses, the computer assigns them a number from 1 to 10 showing how dangerous they are.
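To make the mechanism concrete, here’s a minimal sketch of how a questionnaire scorer like that might work. The real assessment’s questions and weights are proprietary, so everything below — the questions, the point values, the cap — is invented for illustration:

```python
# Hypothetical sketch of a questionnaire-based risk scorer. The real
# Wisconsin assessment's internals are proprietary; everything here
# is invented for illustration.

# Points per "yes" answer, chosen by whoever wrote the test.
WEIGHTS = {
    "unemployed": 2,            # a writer who links poverty to crime
    "prior_arrests": 3,
    "friends_with_records": 2,
    "unstable_housing": 3,
}

def risk_score(answers: dict, weights: dict) -> int:
    """Sum the writer-chosen points for every 'yes' answer, capped at 10."""
    return min(sum(weights[q] for q, yes in answers.items() if yes), 10)

defendant = {"unemployed": True, "prior_arrests": False,
             "friends_with_records": True, "unstable_housing": True}
print(risk_score(defendant, WEIGHTS))  # 7 out of 10, whatever the reasons
```

Every number in that table is a choice someone made. Change the test writer, and you change the “objective” score.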

The question is, how does a subjective person write an objective question? Because if a test writer thinks poverty causes crime, well, then, the poor would be in trouble if they had to take that test.

Consider this: when someone asks a question, they’re looking for information they consider important, or at least a reaction they consider important. If I asked you, “what is your favourite book?” and you replied, Dick and Jane, I would infer you’re either under five or rather plebeian. As you can see, any given response results in an assessment, either good or bad.

The problem with writing such tests for criminals is that two people giving the same answer might have different reasons for giving it. And when the tester draws a conclusion from the answer, it might not be the right conclusion, which, of course, would result in an incorrect sentence. Refer again to my question. What if the guy said his favourite book was Dick and Jane because he wrote it? Or because it helped him learn English? Neither of my conclusions accounted for those possibilities.

As it turns out, black defendants are 45% more likely to receive a higher score than white defendants with the same background, so, really, you don’t need to take my word for it that the system is flawed.

Another problem with such systems is that they fail to define what a criminal is. The researchers in Shanghai claim their AI can root out criminals because they fed it images of thousands of convicted criminals and it detected key facial features those criminals share.

And therein lies the problem. How do you know that’s what all criminals look like? After all, not everyone who’s in a jail is (or was) actually a criminal, and not everyone walking free is innocent. It could be that the criminals who haven’t been caught yet will never be caught by the facial-detection method, because they look nothing like a “traditional” criminal. The only thing the AI would accomplish, if implemented, is the laying off of several thousand police officers.
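This selection-bias problem is easy to demonstrate with a toy simulation. The sketch below is my own illustration, not the Shanghai team’s actual pipeline: actual offending is made independent of appearance, but a classifier trained only on the people who got caught still learns to flag a visible feature — it learns “looks like the people we caught”, not “is a criminal”:

```python
# Toy demonstration of selection bias; not the Shanghai team's actual model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Ground truth in this simulation: offending is independent of looks.
offender = rng.random(n) < 0.10

# A visible "facial feature" score, standing in for anything that
# correlates with who gets policed, not with who offends.
feature = rng.normal(size=n)

# Selection bias: offenders with a high feature value are far more likely
# to be caught and convicted, so convictions pile up on one kind of face.
caught = offender & (rng.random(n) < np.where(feature > 0, 0.8, 0.1))

# Train on conviction labels, as you would with a database of convict photos.
clf = LogisticRegression().fit(feature.reshape(-1, 1), caught)

# The model keys on the visible feature even though offending ignores it.
print("learned weight on the feature:", clf.coef_[0][0])
p_low, p_high = clf.predict_proba([[-2.0], [2.0]])[:, 1]
print(f"P(flagged | feature=-2) = {p_low:.3f}")
print(f"P(flagged | feature=+2) = {p_high:.3f}")
```

In this simulation, the model ends up suspicious of faces on one side of the feature and waves through offenders on the other, because all it ever saw was who got caught.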

Of course, everyone is captivated by how “objective” computer systems are. Oh, the beauty of cold, hard logic. Especially when it’s controlled by a police force (or, more likely, a government) out for revenge. And now we come to the third problem.

It’s easy to call a judge or a police officer foolish or idiotic if they make an incorrect judgment, but how can you call a formula idiotic? Well, of course you can, but the thing is, most people don’t. It’s math, and math sounds complicated. And anyway, anyone spending that amount of time on a formula must be right. Right?

The problem arises when a government decides to secretly “edit” the mathematical formula to criminalize its political opponents and land them in jail. Sure, you can call a tyrant who jails opponents evil. But how do you call the formula that convicts them evil? It’s just math.
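To see how quietly such an edit could happen, here’s a hypothetical tweak to the toy scorer from earlier. Again, the questions and weights are pure invention for illustration:

```python
# Hypothetical tampering with the toy scorer from earlier; pure illustration.
WEIGHTS = {"unemployed": 2, "prior_arrests": 3,
           "friends_with_records": 2, "attended_protest": 0}

def risk_score(answers: dict, weights: dict) -> int:
    return min(sum(weights[q] for q, yes in answers.items() if yes), 10)

# A political opponent with a spotless record, except for one rally.
opponent = {"unemployed": False, "prior_arrests": False,
            "friends_with_records": False, "attended_protest": True}

print(risk_score(opponent, WEIGHTS))            # 0: harmless
tampered = {**WEIGHTS, "attended_protest": 10}  # one quiet edit
print(risk_score(opponent, tampered))           # 10: maximally "dangerous"
```

One weight, changed in one place nobody audits, and the “objective” math now does the tyrant’s work.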

Basically, I’m saying that such a system can be far more easily perverted than our current system.

AI, to be blunt, is new. Many marvel at its “intelligence”, and many more ooh and aah at the cold, hard logic of computers. But we must never forget that behind every computer there is a human who built it, who decided what went into it, and who decided what would comprise its “cold, hard logic”. Which suggests that the computer’s “logic” might be just the same as ours, minus the yelling and sarcasm.

To be clear: humans aren’t that logical either. But at least we can see them and understand them, and they can see us and understand us. Unlike some stupid formula.
