Ethics Research & Vampire Destroyers


Ethics is the topic of two fascinating recent research studies. One of these studies can help protect you from an evil vampire boss! The other will make you hesitate before buying, or even getting into, a self-driving car.

But first: Ethics is a squishy term. What do we mean by ethics, anyway?

  • In philosophy, ethics concerns how people ought to live, pursuing right rather than wrong. It’s wrapped up in the Meaning of Life, as discussed in an earlier blog.
  • In business, ethics means behavior that protects the reputation of the business. Thus, ethics means not only obeying the law but also conforming to the social norms of the moment.
  • In everyday life, ethics refers to proper behavior according to a particular standard, whether it’s one preached by a religious group or one’s personal opinion. Doing what feels right helps us be happy.

The meaning of ethics therefore depends on its context: the speaker, the situation and planned actions. When the term has such variable definitions, how can self-respecting scientists legitimately perform research on ethics?

Amazingly, social scientists have found scientifically sound ways to study ethics. They are not trying to “prove” what is ethical. After all, this sounds rather close to “proving” religious faith. Rather, they are discovering how people feel about and deal with ethical dilemmas.

Bring on the science! We’ll summarize two examples. First, we address the ethics of autonomous (self-driving) vehicles. Then we consider employees who need protection from evil, unethical bosses.

Robot Chauffeurs


Let’s start with autonomous vehicles. Many people long for the day when their car will drive itself to their destination while they snooze, catch up on e-mail or watch a video. And most of us want to ask the obvious question: will my family and I be safe when I take my hands off the wheel?

However, safety is not the only thing that should concern us. Consider this: a self-driving car is basically a car whose driver is a robot chauffeur, a mechanical creature.

The self-driving robotic car has:

  • External Sensors: typically a video camera; conventional radar; a laser range scanner (LIDAR); temperature and ambient-light sensors.
  • Internal Sensors: occupant sensing; occupant controls; temperature.
  • Actuators: steering; braking; acceleration; lights; horn; air bags; occupant displays; occupant “nudges” (haptics); and emergency calls to 911.
  • Computer: the electronics that take the sensor inputs, apply logical rules and then send signals to the various actuators, as sketched in the code below.
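
To make the sense-think-act pattern in this list concrete, here is a minimal sketch in Python. It is not any manufacturer's real code; every class, method and sensor name below is invented purely for illustration.

    # Hypothetical sketch of a robot chauffeur's sense-think-act loop.
    class RobotChauffeur:
        def __init__(self, external_sensors, internal_sensors, actuators):
            self.external_sensors = external_sensors  # camera, radar, LIDAR, ...
            self.internal_sensors = internal_sensors  # occupant sensing, temperature, ...
            self.actuators = actuators                # steering, brakes, throttle, horn, ...

        def drive_step(self):
            # 1. Sense: take a snapshot of the world outside and inside the car.
            world = {name: s.read() for name, s in self.external_sensors.items()}
            cabin = {name: s.read() for name, s in self.internal_sensors.items()}

            # 2. Think: apply the programmed rules of the road (and, as discussed
            #    below, any ethical tie-breaking rules) to choose an action.
            commands = apply_rules(world, cabin)  # apply_rules is sketched below

            # 3. Act: send the chosen commands to the actuators.
            for name, value in commands.items():
                self.actuators[name].command(value)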

The analogy with a living creature is obvious. We sense the world outside us as well as within: our internal twinges and stomach rumbles. We think about what to do. And then we do it. Moreover, when we think about what to do, we take account of the “rules of the road” and our personal ethics.

The driverless car has to be programmed with rules. Some of them are pretty obvious: if the vehicle is speeding toward a slower object, apply the brakes! But some of them, surprisingly, involve ethics. For example, what should the car do when it can’t stop in time, and wherever it steers, it’s going to kill someone?
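
Here is a hedged sketch of how such rules might look, continuing the hypothetical loop above. The thresholds, sensor field names and the idea of scoring an "expected harm" for each escape path are all invented for illustration; they are not drawn from the study discussed below or from any real vehicle's software.

    def stopping_distance_m(speed_mps, deceleration_mps2=7.0):
        # Rough distance needed to brake to a stop: v^2 / (2a), hard braking assumed.
        return speed_mps ** 2 / (2.0 * deceleration_mps2)

    def apply_rules(world, cabin):
        # Choose actuator commands for one instant, given the sensor readings.
        gap = world["range_to_obstacle_m"]   # from radar / LIDAR
        speed = world["own_speed_mps"]

        # The obvious rule: closing on a slower object? Brake.
        if gap < 1.5 * stopping_distance_m(speed):
            if gap >= stopping_distance_m(speed):
                return {"brakes": "full", "steering": "hold"}

            # Braking alone can no longer avoid a collision, yet the car must
            # still steer somewhere. Whatever rule is written here -- protect
            # the occupants first, or minimize total harm -- is an ethical
            # judgment made in advance by the programmers.
            paths = world["escape_paths"]  # e.g. [{"steer": "left", "expected_harm": 3}, ...]
            best = min(paths, key=lambda p: p["expected_harm"])
            return {"brakes": "full", "steering": best["steer"]}

        return {"throttle": "maintain", "steering": "hold"}

The point of the sketch is simply that the emergency branch cannot be left blank: some definition of "expected harm" has to be written into the car before it ever leaves the factory.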

Ethics of Driverless Cars


A team of researchers at the University of Toulouse, France, the University of Oregon and MIT studied self-driving ethics. Their work [1] appeared in Science magazine. Science also posted a restatement of the article with the lurid subtitle “When should your car be willing to kill you?” [2]

The researchers conducted six online surveys involving almost 2000 U.S. residents between June and November 2015. They used the Amazon Mechanical Turk (MTurk) crowdsourcing service, paying each participant 25 cents for their response. Each survey included an “attention test” that weeded out the roughly 10% of respondents who answered blindly, without even reading the questions.

Each of the six surveys dealt with how a self-driving car should prioritize the lives of its occupants versus the lives of other parties, typically pedestrians.

Here’s a shortened version of the questions that were asked. They speak to the ethics of autonomous vehicles:

  • Should a self-driving car sacrifice one occupant rather than kill ten pedestrians?
  • What about other numbers of pedestrians (from 1 to 100)? What if there’s a family member in the car with you?
  • Would you buy a self-driving car that was willing to kill you to save many pedestrians?
  • Given some options of rules for the robot driver, are they moral? Would you like to see these rules in self-driving cars? Would you buy a car with these rules?
  • Should self-driving cars be legally required to sacrifice their driver to save many other lives?
  • For different kinds of robot rules, would you buy such a car?

            Kill Me To Save Ten Others???

If you have a cynical view of people’s morality then, I’m sorry to say, this research reinforces your beliefs. Participants strongly believed that other people’s cars should have programming that, in an emergency, would maneuver so as to save as many lives as possible. However, most participants would not buy such a car themselves: they would only buy a car that placed top priority on saving the lives of those riding in it.

The researchers call this a “social dilemma.” That is, everyone wants a free ride, rather than following rules that would give the best overall outcome.

What if governments were to mandate “utilitarian” algorithms, in which cars would try to save the largest number of lives? The researchers point out that in this case, many people would not choose to buy self-driving cars. This would delay the broad use of such cars and therefore many lives would not be saved.

Thus it’s likely that fewer people would die if autonomous cars had “selfish” algorithms that save the driver first. Innocent bystanders might perish in the rare instances when a car had to make an emergency decision, but overall more lives would be saved by reducing human-caused accidents.
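
To see how small the difference is in code, here is a hypothetical comparison of the two policies, reusing the invented "expected harm" bookkeeping from the earlier sketch. The weighting factor is made up; the researchers do not propose any particular numbers.

    def utilitarian_harm(path):
        # Count every life equally, inside and outside the car.
        return path["expected_occupant_deaths"] + path["expected_pedestrian_deaths"]

    def self_protective_harm(path, occupant_weight=1000.0):
        # Give overwhelming priority to the people riding in the car.
        return (occupant_weight * path["expected_occupant_deaths"]
                + path["expected_pedestrian_deaths"])

    # The survey's "social dilemma" in one line: most respondents wanted other
    # people's cars to minimize utilitarian_harm, but preferred to ride in a
    # car that minimizes self_protective_harm.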

            Asimov’s Three Laws of Robotics

The ethics of robots has occasioned much discussion, particularly since Isaac Asimov’s famous 1942 Three Laws of Robotics. Despite that dialogue, today we have no general agreement on the rules of robot ethics, nor a reliable way to program those ethics into an autonomous device.

Readers of this blog are smart enough to imagine ways that self-driving technology might evolve toward outcomes that are both reasonably moral and life-saving. The key point, one the researchers also made, is that the development of autonomous vehicles needs to involve considerable dialogue with the public, who will be its users or its victims depending on circumstances. And until you are personally satisfied with the ethics of your robotic chauffeur, you might want to think twice before trusting your life to him-her-it.

The Sleazy Boss


Here’s our other great example of ethics research.

Researchers have investigated a common workplace morality problem: a supervisor who makes unethical, immoral, illegal or downright sleazy requests of a subordinate. Such a demand might involve a white lie about the manager’s absence from work, a claimed expense for which there is no receipt, or a sexual advance.

Do these things happen? Yes, indeed: almost ten percent of workers in a 2013 survey reported feeling pressured at work to compromise their personal code of conduct.

When a boss pressures you at work, what choices do you, as an employee, have?

  • You could complain to your supervisor’s boss, which is likely to lead to retaliation, firing, or at least a reputation as a troublemaker.
  • You could go along with the boss, which might cause you loss of self-respect, shame, stress and elevated blood pressure.
  • As a last resort, you could quit the job and work elsewhere.

None of these are attractive options!

In a later blog we’ll look at the origins of one of these unethical situations, sexual harassment. This blog addresses a much more urgent question: How do you avoid becoming its victim?

Moral Symbols as a Garlic Necklace


Business school faculty at UNC and Northwestern University noted that once a boss has made a loathsome request, the employee is in a fix.

So they asked: is it possible to ward off unethical demands in advance, before the boss makes them? Their published study bears the provocative title “Moral Symbols: A Necklace of Garlic against Unethical Requests” (Abstract [3]; Full Article [4]). By thus alluding to a folkloric protection against vampires, the researchers are labeling the evil supervisor as a vampire prepared to suck the ethical blood from his powerless, enslaved employees.

The researchers note that in the workplace, people automatically make inferences about the others around them based on social cues. They consider how others dress, how they talk and what they say, how neat or cluttered their workspace is, and visible personal possessions such as photos, books and décor. Thus it’s possible for an employee to signal to his supervisor that he is an ethical person, using small cues that the supervisor may not even register consciously.

            How Moral Cues May Influence Behavior

How do these cues protect you from unethical requests or demands? Citing related studies, the researchers propose these protections from an unethical boss:

  • The presence of a moral cue activates the supervisor’s own awareness of right and wrong, in effect causing them to think twice about what they are doing.
  • People have an intuitive desire not to defile symbols of purity. A supervisor may be willing to make an immoral request, but will be less willing to make that request of a person who appears to be moral, because it would be “doubly wrong.”
  • A boss who makes an unethical request of an employee whom they believe to be ethical will unconsciously feel moral accountability to the employee, feeling concern that the employee will judge them unfavorably.
  • The supervisor will fear that the employee may disobey the request and report it to other authorities, even at the risk of retaliation.

The combination of these factors serves to discourage the boss from making unethical requests, or at least pushes the boss to make those requests of someone other than you!

            The Research: Moral Symbols Really Work!

Does appearing to be moral really ward off evil requests? Based on the six studies the researchers conducted, yes indeed! Five of the studies each involved between 68 and 210 U.S. participants, some of them college students and, in two of the studies, people recruited through MTurk. The sixth study used 104 supervisor-subordinate pairs in India.

The full article is 58 pages long and I’m not going to try to summarize each study. However, here are some conclusions that might be drawn from the article:

  • Exposure to a moral symbol increases moral awareness and inhibits unethical behavior.
  • A leader with one or more “moral” followers will tend to act more ethically.
  • A person displaying a moral symbol appears to be of high moral character. As a result, others are less likely to ask him or her to engage in unethical behavior.
  • Subordinates are not powerless, but can exert a significant influence on the behavior of their workplace superiors.
  • “Subtle interventions” such as the display of moral symbols do not appear to lead to backlash or retaliation against the employees using them.

            Practical Advice for the Employee

It’s not a good idea to fill your office with Bibles and statues of saints in order to announce that you are an ethical person. Unless you work for a church group, that would be so far out of the norm that you might be seen as a nut, which would certainly interfere with your social acceptance.

It’s preferable to use an approach that is minimal or subliminal in the context of your own workplace. Here are some possible approaches, each likely to do a better job than a garlic necklace:

  • A religious item such as a cross, an “ethical” item such as a photo of Mahatma Gandhi, or an ethical quotation, visible in your work area.
  • A tag line in your e-mail signature that references morality. One used by the researchers, which they considered low-key in a business setting, was “Better to fail with honor than succeed by fraud.”
  • An item of clothing that announces that you care about “right living.” This might be a religious pin or necklace, an ID bracelet commemorating someone, or an “awareness ribbon” associated with an ethical cause.

Thus this research study into business ethics not only helps us understand supervisor-subordinate dynamics, but also offers practical advice for being happier at your job! That in turn can help you achieve a more satisfying meaning of life.

We see that ethics research is far from useless. It can protect you from a vampire boss, and it can save your life in a vehicle accident. (As a social scientist might say, “Take that, you smug physical scientist!”)

Image Credits:
– From openclipart.org: Car accident sign, jofrè; Little Frankenstein Driver, Merlin2525; Angelic Smiley, GDJ
– Adapted from openclipart.org: Yellow Convertible sports car, netalloy; Halo, themidnyteryder83
– Goya, The Sleep of Reason Produces Monsters, public domain

Other References:
[1] http://science.sciencemag.org/content/sci/352/6293/1573.full.pdf
[2] http://science.sciencemag.org/content/sci/352/6293/1514.full.pdf
[3] http://amj.aom.org/content/early/2016/02/19/amj.2015.0008.abstract
[4] http://www.sreedharidesai.com/yahoo_site_admin/assets/docs/AMJ-2015-0008final.47234439.pdf

Comments


  1. More self-driving ethics: IEEE (the world’s largest engineering society) has just published an editorial (http://spectrum.ieee.org/cars-that-think/transportation/self-driving/tesla-autopilot-crash-why-we-should-worry-about-a-single-death) by a philosophy professor that discusses the ethics of Tesla’s self-driving cars and the recent fatal crash in which the autopilot failed to see a very large truck. The editorial quotes Elon Musk, the head of Tesla, as he dismissed questions about why the company had not considered that crash to be “material information” just before issuing shares of stock. Musk’s argument amounts to this: if everyone drove Teslas, many lives would be saved, therefore worrying about a single life is irrelevant. Prof. Lin rejects this concept, stating instead that “ethics is more than math” and that even a single death is significant if we can learn lessons from it that will make autopilots safer for everyone. Musk’s attitude is truly problematic, since his company has recently affirmed that it will not use LIDAR (http://spectrum.ieee.org/cars-that-think/transportation/self-driving/tesla-again-spurns-lidar-betting-instead-on-radar), which other car companies use, and which would have prevented the accident in question.

    • Mac, that is an interesting and enigmatic quote, since both ethics and virtue are socially dependent. In Peter Kreeft’s context, I take it to mean that studying ethics in the abstract is a useless waste of time unless we apply its lessons to live a virtuous or worthy life. Pure thoughts are worthless, but pure actions have great value.

  2. Thanks, Art … I’ve been tracking self-driving car technology for a while now, but I realize I hadn’t given enough thought to the “ethics” issues you are raising. Indeed, Asimov handled many of these kinds of decisions in his various novels and stories about autonomous robots (perhaps his novels should be required reading in any modern ethics class).

    And though your comments are absolutely valid, about whether you as a buyer/driver are going to be “happy” with the decisions your robotic chauffeur could make, there is a hidden tradeoff here that could affect that assessment. It’s easiest to make my point in an example.

    Suppose your vehicle traveling at 70 mph is suddenly diverted from its intended path (perhaps another car strikes you without warning, or a tire suddenly blows out in such a way that the vehicle’s path is altered so it now points at a crowd of pedestrians). Suppose that if you flick the steering wheel to the left you will strike other cars and probably cause a multi-vehicle accident, and if you flick it to the right you will drive your car into a brick wall, yet if you “freeze” you will strike the crowd. If you are the driver, you have to make an impossible split-second decision without the time to consider alternatives or consequences. Almost all of us will respond by instinct alone, without considering who will be affected or how.

    But if we had more information and more time, we might make different decisions. For example, the statistics of being able to survive an accident given modern car technology are well-known and are the result of many crash-tests. It’s possible that a new-model car might offer a 90% or better survival rate for all occupants in a head-on crash if air bags fully deploy, seat belts hold, and if the car crumple-zones behave as designed. But if the crowd at risk consists of a group of children on a school outing, the survival rate for any of them struck at 70 mph might be almost zero. However, few of us would be capable of making that kind of tradeoff if given less than one second to react … fear and self-survival would probably take over for many people, not to mention the set of drivers who would simply freeze and continue on course without actually making a decision.

    A robotic driver has the luxury of not involving emotions, and of having much more time as well as information (is there a car slightly behind you and to your left?) to make any decision, plus the ability to take into account ALL likely outcomes. But there is another factor … to the degree that ALL the cars in your immediate vicinity are also robotic drivers, then some measure of communication is possible between cars, and the cars to your immediate left might miraculously open a gap for you without putting any other drivers into peril, and allowing you to “escape” from the ethical dilemma without having to choose between yourself and the others at risk. Further, by being able to respond essentially instantaneously, those options might be possible where humans would not be able to respond in time even if there were an escape route.

    Your point is valid, that if your personal value system doesn’t match the car’s ethical design then it’s possible that the outcome in any given situation might not “fit” your own moral system. But it’s worth taking into account that many humans wouldn’t instinctively react to fast-moving situations by their OWN ethical system, given time to think about and weigh properly the various tradeoffs, plus the fact that a robotic intelligence might be able to handle some situations better than a human being … allowing ethical dilemmas to be avoided altogether.

    • Excellent commentary, Charles, thank you!

      I agree that in a fast-moving accident, a robot driver would be able to take account of many additional factors, and then carry out the car owner’s ethics more reliably than the owner himself could. And as you point out, sometimes mutually interacting robots could save more lives. But that also means that we not only need to program autonomous vehicles with the ability to make value judgments, we need to tell them how to interact with other robots and with human drivers with whom they will share the road, so as to achieve some overall best moral outcome.

      Juries and judges are willing to forgive human drivers many actions taken under stress. But they, and the public, will not cut a machine so much slack. They will demand that machines be near-infallible, purer than pure, and more ethical than their owners.