Should we program self-driving cars with morals?

If you’ve read my previous thoughts on self-driving cars, you’ll know I’m skeptical about the whole thing.

I’ve revised my opinion a little.

I actually don’t mind self-driving cars. While there are risks to them, overall, they’ll probably bring more safety to the road. What’s more, the driver (now really a passenger) might actually be able to get something else done. (I’m not saying they should; I think they should still keep their eyes on the road.)

But I was wondering: how would a self-driving car respond in a worst-case scenario? Yes, the car is supposed to be extremely safe, but what would it do if every possible action were bad?

Here’s an example. Suppose you’re riding your fancy Faraday Future FFZero1 through a nice suburban area. Suddenly, two children run out from behind a parked car and in front of your vehicle. Your car didn’t detect them, and it can’t stop in time. Now it has two options: plow straight into the children while braking, or swerve into the parked car, which could harm you. Which does it do?
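Stripped of the horror, the dilemma is just a choice between candidate maneuvers, each with an estimated harm. Here’s a minimal sketch in Python of that framing (the maneuver names and harm scores are invented for illustration, not anyone’s actual system):

```python
# Hypothetical sketch: pick the maneuver with the lowest estimated harm.
# Maneuver names and harm scores are invented for illustration only.

def choose_maneuver(options):
    """options: dict mapping maneuver name -> estimated harm score (lower is better)."""
    return min(options, key=options.get)

scenario = {
    "brake_straight": 0.9,          # likely injures the children
    "swerve_into_parked_car": 0.4,  # likely injures the passenger
}

print(choose_maneuver(scenario))  # -> swerve_into_parked_car
```

The one-liner hides the real problem, of course: someone has to assign those numbers, and that someone is the design team.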

It’s a morbid question, but one that must be asked. Who decides this outcome?

Well, obviously not the car. It’s the designers.

The best scenario is that the design team has figured out how to avoid such situations entirely. But in a worst-case scenario, will the car choose to kill the children, or its passenger?

To be honest, I won’t buy a smart car with a history of killing its passengers.

Here’s another question, however: would a company receive more flak if its car killed one passenger, or several bystanders? I suspect companies will decide based on what’s most beneficial for themselves, which might not be what’s best for the driver.

There are solutions to that problem. Manufacturers could let buyers configure the car’s morality settings (if you can even call them that): the car asks the driver a series of questions like the one above and responds according to the driver’s answers when such a situation arises. I personally think such a test would be uncomfortable and morbid, but it might be the only way to keep the law, and everyone else, happy.
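A “morality settings” questionnaire would presumably boil down to a stored preference the car consults in an emergency. A hypothetical sketch of what that might look like (the preference names and harm scores are made up for illustration):

```python
# Hypothetical morality-settings sketch: the owner answers questions once,
# and the car consults the stored preference in an emergency.
# All names and values here are invented for illustration.

PREFERENCES = {"protect_passenger", "protect_bystanders", "minimize_total_harm"}

class MoralitySettings:
    def __init__(self, preference="minimize_total_harm"):
        if preference not in PREFERENCES:
            raise ValueError(f"unknown preference: {preference}")
        self.preference = preference

    def choose(self, passenger_harm, bystander_harm):
        """Return which harm the car accepts, given the owner's preference."""
        if self.preference == "protect_passenger":
            return "accept_bystander_harm"
        if self.preference == "protect_bystanders":
            return "accept_passenger_harm"
        # minimize_total_harm: accept whichever estimated harm is smaller
        return ("accept_passenger_harm"
                if passenger_harm < bystander_harm
                else "accept_bystander_harm")

settings = MoralitySettings("protect_bystanders")
print(settings.choose(passenger_harm=0.4, bystander_harm=0.9))
# -> accept_passenger_harm
```

Which is exactly why the questionnaire would feel so grim: picking a value for that `preference` field is the uncomfortable part, not the code.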

There’s also the problem of fault. Since the driver isn’t driving the car, they aren’t really to blame. So is the company at fault? Granted, some companies have said they will take responsibility, which is either a complete admission of trust in their system or a one-way ticket to bankruptcy.

I now agree that the self-driving revolution will make our world a safer place. But when there are accidents, there will be tough questions to answer.

Before we plunge headfirst into the future, we must be ready to answer those questions.
