Ethics is the topic of two fascinating recent research studies. One of these studies will protect you from an evil vampire boss! The other will make you hesitate before buying or even getting into a self-driving car.
But first: Ethics is a squishy term. What do we mean by ethics, anyway?
– In philosophy, ethics refers to how people ought to live, pursuing right rather than wrong; it’s wrapped up in the Meaning of Life as discussed in an earlier blog.
– In business, ethics means behavior that protects the reputation of the business; in practice, ethics means not only obeying the law but also conforming to the social norms of the moment.
– In everyday life, ethics refers to proper behavior according to a particular standard, whether it’s one preached by a religious group or one’s personal opinion.
The meaning of ethics therefore depends on its context: the speaker, the situation and planned actions. When the term is so variably defined, how can self-respecting scientists legitimately perform research on ethics?
Amazingly, social scientists have found scientifically sound ways to study ethics. They are not trying to “prove” what is ethical (which sounds rather close to “proving” religious faith), but they are discovering how people feel about ethical situations.
Bring on the science! We’ll summarize two examples, one involving autonomous (self-driving) vehicles, and one involving employees who need protection from evil, unethical bosses.
Let’s start with autonomous vehicles. Many people long for the day that their car will drive itself to their destination while they snooze, catch up on e-mail or watch a video. And most of us, understandably, are chiefly concerned with one question: will my family and I be safe when I take my hands off the wheel?
However, that’s not the only thing that should make us concerned. Consider this: a self-driving car is basically a car equipped with a robot chauffeur, a mechanical creature. The robot has:
– External Sensors: usually a video camera; a conventional radar; a laser scanner with range measurement (LIDAR); temperature; ambient light.
– Internal Sensors: occupant sensing; occupant controls; temperature.
– Actuators: steering; braking; acceleration; lights; horn; air bags; occupant displays; occupant “nudges” (haptics); and emergency calls to 911.
– Computer: the electronics that take sensor inputs, apply logical rules and then send signals to the various actuators.
The analogy with a living creature is obvious: we sense the world outside us as well as within: our internal twinges and stomach rumbles; we think about what to do; and then we do it. And when we think about what to do, we take account of the “rules of the road” and our personal ethics.
The driverless car has to be programmed with rules. Some of them are pretty obvious: if the vehicle is speeding toward a slower object, apply the brakes! But some of them, surprisingly, involve ethics – for example, what should the car do when it can’t stop in time, and wherever it steers, it’s going to kill someone?
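To make the “obvious” rule concrete, here is a toy sketch of the sense–think–act loop described above. It is not a real vehicle controller; the function name, units and the two-second safety threshold are all illustrative assumptions.

```python
def decide(own_speed_mps, gap_m, obstacle_speed_mps):
    """Toy braking rule: if we are closing on a slower object
    and the time to collision is short, apply the brakes.
    All parameters and the 2-second threshold are assumptions."""
    closing_speed = own_speed_mps - obstacle_speed_mps
    if closing_speed <= 0:
        return "maintain"          # not closing on the obstacle at all
    time_to_collision = gap_m / closing_speed
    if time_to_collision < 2.0:    # assumed safety threshold in seconds
        return "brake"
    return "maintain"

print(decide(30.0, 30.0, 10.0))   # closing at 20 m/s, 1.5 s away -> brake
print(decide(20.0, 200.0, 15.0))  # closing at 5 m/s, 40 s away -> maintain
```

The hard cases the researchers studied are precisely the ones this simple rule cannot handle: situations where no action avoids harm and the “rule” must encode a value judgment.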
Ethics of Driverless Cars
Self-driving ethics were studied by a team of researchers at the University of Toulouse, France, the University of Oregon and MIT. Their work was published in Science. The magazine also posted a restatement of the article with the lurid subtitle “When should your car be willing to kill you?”
The researchers conducted six online surveys involving almost 2000 U.S. residents between June and November 2015. They used the Amazon Mechanical Turk (MTurk) crowdsourcing service, paying each participant 25 cents for their response. Each survey included an “attention test” which weeded out the 10% of respondents who blindly answered without even reading the questions.
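The screening step the researchers describe amounts to simple data cleaning: discard any respondent who fails the embedded attention check. A minimal sketch, with made-up field names and data:

```python
# Illustrative survey responses; the field names are assumptions,
# not the researchers' actual data format.
responses = [
    {"id": 1, "passed_attention_check": True,  "answer": "swerve"},
    {"id": 2, "passed_attention_check": False, "answer": "stay"},
    {"id": 3, "passed_attention_check": True,  "answer": "swerve"},
    {"id": 4, "passed_attention_check": True,  "answer": "stay"},
]

# Keep only respondents who actually read the questions.
valid = [r for r in responses if r["passed_attention_check"]]
drop_rate = 1 - len(valid) / len(responses)
print(len(valid), f"{drop_rate:.0%}")  # 3 25%
```

In the actual study the comparable drop rate was about 10%.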
Each of the six surveys dealt with how a self-driving car should prioritize the lives of its occupants versus the lives of other parties, typically pedestrians. Here’s a shortened version of the questions that were asked:
– Should a self-driving car sacrifice one occupant rather than kill ten pedestrians?
– What about other numbers of pedestrians (from 1 to 100)? What if there’s a family member in the car with you?
– Would you buy a self-driving car that was willing to kill you to save many pedestrians?
– Given some options of rules for the robot driver, are they moral? Would you like to see these rules in self-driving cars? Would you buy a car with these rules?
– Should self-driving cars be legally required to sacrifice their driver to save many other lives?
– For different kinds of robot rules, would you buy such a car?
If you have a cynical view of people’s morality then, I’m sorry to say, your beliefs would be reinforced by the results found by this research. Participants strongly believed that other people’s cars should be programmed so that in an emergency situation, they would maneuver so as to save as many lives as possible. However, most participants would not buy such a car: they would only buy a car that was programmed to place top priority on saving the lives of those riding in the car.
The researchers call this a “social dilemma”: everyone is tempted to get a free ride, rather than following rules that would give the best overall outcome.
They point out that if governments were to mandate “utilitarian” algorithms, in which cars try to save the largest number of lives, many people would simply not buy self-driving cars. That would delay the broad adoption of such cars, and the lives they would have saved would be lost. Paradoxically, permitting “selfish” algorithms that protect the driver first might save more lives overall: although such a car would kill innocent bystanders in the rare emergency where it must choose, widespread adoption would sharply reduce human-caused accidents.
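The contrast between the two programming philosophies the researchers surveyed can be boiled down to a pair of toy decision functions. The scenario, maneuver names and casualty counts below are purely hypothetical illustrations, not anything from the study.

```python
def utilitarian(occupants_at_risk, pedestrians_at_risk):
    """Minimize total deaths: sacrifice the occupants only when
    doing so kills fewer people overall."""
    if occupants_at_risk < pedestrians_at_risk:
        return "swerve_into_barrier"
    return "stay_course"

def self_protective(occupants_at_risk, pedestrians_at_risk):
    """Top priority is the people in the car, regardless of
    the toll outside."""
    return "stay_course"

# One occupant versus ten pedestrians:
print(utilitarian(1, 10))       # swerve_into_barrier
print(self_protective(1, 10))   # stay_course
```

Survey participants endorsed the first function for everyone else’s car and the second for their own, which is the social dilemma in a nutshell.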
The ethics of robots has occasioned much discussion, particularly since Isaac Asimov’s famous 1942 Three Laws of Robotics. Despite that dialogue, today we have no general agreement on the rules of robot ethics, nor a reliable way to program those ethics into an autonomous device.
Readers of this blog are smart enough to imagine ways that self-driving technology might evolve that could lead to outcomes with reasonably good morality and life-saving. The key point, and one which the researchers also made, is that the development of autonomous vehicles needs to involve considerable dialogue and discussion with the public, who will be its users or its victims depending on circumstances. And until you are personally satisfied with the ethics of your robotic chauffeur, you might want to think twice before trusting your life to him-her-it.
The Sleazy Boss
Here’s our other great example of ethics research.
Researchers have investigated workplace morality, in which a supervisor makes unethical, immoral, illegal or downright sleazy requests of a subordinate. Such a demand might involve a white lie about the manager’s absence from work, a claimed expense for which no receipt is produced, or a sexual advance.
Do these things happen? Yes, indeed: almost ten percent of workers in a 2013 survey reported feeling pressured at work to compromise their personal code of conduct.
When this happens, what choices do you, as an employee, have?
– You could complain to your supervisor’s boss, which is likely to lead to retaliation, firing, or at least the reputation that you are a troublemaker;
– You could go along with the boss, which might cause you loss of self-respect, shame, stress and elevated blood pressure;
– As a last resort, you could quit the job and work elsewhere.
None of these are attractive options!
Moral Symbols as a Garlic Necklace
Business school faculty at UNC and Northwestern University noted that once the loathsome request has been made, the employee is in a fix. So they asked, is it possible to ward off unethical demands in advance, before the boss makes them? Their published study bears the provocative title “Moral Symbols: A Necklace of Garlic against Unethical Requests.” (Abstract; Full Article) By thus alluding to a folkloric protection against vampires, the researchers are labeling the evil supervisor as a vampire prepared to suck the ethical blood from his powerless, enslaved employees.
The researchers note that in the workplace, people automatically make inferences about the others around them based on social cues: how they dress, how they talk and what they say, how neat or cluttered their workspace is, and visible personal possessions such as photos, books and décor. Thus it’s possible for an employee to signal to his supervisor that he is an ethical person, using small cues that the supervisor may not even register consciously.
How do these cues protect you from unethical requests or demands? Quoting related studies, the researchers propose these factors:
– The presence of a moral cue activates the supervisor’s own awareness of right and wrong, in effect causing them to think twice about what they are doing;
– People have an intuitive desire not to defile symbols of purity; a supervisor may be willing to make an immoral request, but will be less willing to make that request of a person who appears to be moral because it would be “doubly wrong”;
– A boss who makes an unethical request of an employee whom they believe to be ethical will unconsciously feel moral accountability to the employee, feeling concern that the employee will judge them unfavorably;
– The supervisor will fear that the employee may disobey the request and report it to other authorities, even at the risk of retaliation.
The combination of these factors serves to discourage the boss from unethical requests, or at least pushes the boss to make those requests of someone other than you!
Does appearing to be moral really ward off evil requests? Based on six studies that the researchers conducted, yes indeed! Five of the studies each involved between 68 and 210 U.S. participants, some of them college students and, in two of the studies, people recruited through MTurk. The sixth study used 104 supervisor-subordinate pairs in India.
The full article is 58 pages long and I’m not going to try to summarize each study. However, here are some conclusions that might be drawn from the article:
– Exposure to a moral symbol increases moral awareness and inhibits unethical behavior;
– A leader with one or more “moral” followers will tend to act more ethically;
– A person displaying a moral symbol is perceived to be of high moral character, and is less likely to be asked to engage in unethical behavior;
– Subordinates are not powerless, but can exert a significant influence on the behavior of their workplace superiors;
– “Subtle interventions” such as the display of moral symbols were not found to lead to backlash or retaliation against the employees using them.
It’s not a good idea to fill your office with Bibles and statues of saints in order to announce that you are an ethical person. Unless you work for a church group, that would be so far out of the norm that you would be seen as a nut, which would certainly interfere with your social acceptance. It’s preferable to use an approach that is minimal or subliminal in the context of your own workplace. Here are some possible approaches, each likely to do a better job than a garlic necklace:
– A religious item such as a cross, an “ethical” item such as a photo of Mahatma Gandhi or an ethical quotation, visible in your work area;
– A tag line as part of your e-mail signature that references morality; one used by the researchers, which they considered to be low-key in a business setting, was “Better to fail with honor than succeed by fraud”;
– An item of clothing that announces that you care about “right living,” such as a religious pin or necklace, an ID bracelet commemorating someone, or an “awareness ribbon” associated with an ethical cause.
Thus this research study into business ethics not only helps us understand supervisor-subordinate dynamics, but offers practical advice for being happier at your job!
We see that ethics research is far from useless: it can save your life in a vehicle accident and it can protect you from a vampire boss. As a social scientist might say, “take that, you crass physical scientist!”
– From openclipart.org: Car accident sign, jofrè; Little Frankenstein Driver, Merlin2525; Angelic Smiley, GDJ
– Adapted from openclipart.org: Yellow Convertible sports car, netalloy; Halo, themidnyteryder83
– Goya, The Sleep of Reason Produces Monsters, public domain