Consider Liu and Ditto's demonstration of moral coherence and Thomas, Stanford, and Sarnecka's demonstration of the influence of moral judgment on judgments of danger to children left alone. Explain how these can each be understood as instances of cognitive dissonance.
Cognitive dissonance (broadly: the psychological pressure to reduce inconsistency among one’s attitudes/beliefs/feelings) predicts that people will change beliefs, attitudes, or perceptions so that their mental elements “fit” better together. Both papers document cases where people alter descriptive beliefs (about consequences or danger) in ways that reduce conflict with prescriptive or evaluative beliefs (moral judgments), which is exactly the kind of motivated change cognitive-dissonance accounts predict.
Liu & Ditto. They show that people who judge an act to be inherently immoral also tend to believe it will have worse practical consequences (lower benefits, higher costs), and that experimentally shifting deontological evaluation (via essays making purely moral arguments) shifts perceived costs and benefits even when no factual evidence is given. Interpreted as dissonance reduction: if one’s gut says “this act is wrong,” there is pressure to have the facts line up (so the act also looks practically bad), and vice versa, so people reshape factual beliefs to avoid an unresolved dilemma.
Thomas, Stanford & Sarnecka. They show that people estimate higher danger for a child when the parent deliberately leaves the child (especially for blameworthy reasons) than when the same objective circumstances arise accidentally. Participants inflate perceived danger in a way that justifies moral condemnation of the parent (or, having condemned the parent, inflate the danger after the fact). That reciprocal inflation is exactly the sort of motivated adjustment cognitive-dissonance theory predicts.
Explain the phenomenon that Liu and Ditto describe as "moral coherence".
“Moral coherence” (their term) is the empirical pattern and psychological tendency for people to align their descriptive factual beliefs about consequences with their prescriptive moral evaluations. Concretely:
If someone judges an action as inherently immoral (a deontological stance), they tend to judge the action as less likely to produce benefits and more likely to produce harms.
Conversely, people who see an action as morally acceptable tend to believe it will have favorable consequences.
Moral coherence arises from motivated reasoning and coherence-seeking: people generate or accept factual beliefs that make their moral stance look rational, reducing internal conflict between “is” and “ought.” Liu & Ditto document this correlationally (Studies 1–2) and causally (Study 3) and tie it to moral conviction and self-reported informedness.
How did Thomas, Stanford, and Sarnecka show that subjects' moral judgments were influencing their judgments about danger?
They ran multiple experiments in which participants read vignettes where objective risk was held constant but the parent’s reason for leaving the child, and whether the absence was intentional, was varied (e.g., accidental absence because the parent was hit and knocked unconscious vs. deliberate absence to go to work, volunteer, relax, or meet an illicit lover). Participants then judged (a) how morally wrong the parent’s action was and (b) how much danger the child was in.
Key empirical findings:
Participants rated children as being in more danger when the parent intentionally left them than when the same circumstances arose accidentally.
The most blameworthy reasons (e.g., meeting an illicit lover) produced higher danger estimates than neutral/obligatory reasons (e.g., going to work).
When participants were asked to make explicit moral-wrongness ratings alongside danger ratings, danger estimates increased — making the moral judgment explicit amplified the perceived danger.
These patterns indicate moral judgments were driving (inflating) danger judgments, rather than reflecting true differences in objective risk.
What are the beliefs that subjects might be having difficulty fitting together in these scenarios?
Two classes of beliefs are in tension:
A. Normative/moral beliefs: “Leaving a child alone (or doing X) is morally wrong / deplorable” or “Pushing the stranger is inherently wrong.”
B. Descriptive/factual beliefs: “Doing X will have such-and-such consequences” — e.g., “the child was actually in little objective danger,” or “pushing the stranger would very likely save lives” / “capital punishment deters crime.”
The tension arises when a person’s moral condemnation of an act (A) points one way, while an honest cost–benefit estimate (B) points the other (e.g., they believe the act would produce good outcomes). That inconsistency is psychologically uncomfortable, producing motivated moves to resolve it.
Interpret the results of the study described in Stanford, Sarnecka, and Thomas as an instance of such moral coherence.
Their results fit the moral-coherence pattern:
People’s moral disapproval of a parent’s action (e.g., choosing to leave a child to meet an illicit lover) correlates with inflated estimates of danger for the child in that same scenario.
When people are invited to make moral judgments explicitly alongside risk estimates, danger estimates increase even more — moral evaluation and factual belief pulling each other into alignment.
Thus the paper shows moral condemnation (prescriptive) shaping perceived risk (descriptive) in order to achieve a coherent, blame-justifying belief set — the same bidirectional coherence Liu & Ditto describe.
What feature of their experimental design was crucial for showing this?
Two crucial design features:
Holding objective circumstances constant while varying moral/intentional features. The vignettes kept the actual safety-relevant facts identical (e.g., same parking spot, same short absence, same environmental conditions) and changed only whether the absence was accidental vs. deliberate and the reason for leaving. That isolates moral blame as the manipulated factor while leaving objective risk unchanged, which is the strongest test of moral influence on perceived danger (schematized in the sketch after this list).
Eliciting both moral-wrongness and danger judgments (and sometimes making the moral judgment explicit). Collecting both measures, and manipulating whether moral judgments were made salient, allowed them to show that higher moral condemnation correlates with inflated danger estimates and, when made salient, increases them: evidence of causal influence, or at least of tight coupling.
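The design logic can be schematized in a short sketch. This is an illustration only: the field names, values, and reason labels below are paraphrased assumptions, not the authors’ published vignette text.

```python
# Schematic sketch of the crucial design feature: objective, safety-relevant
# facts are held constant; only the parent's reason for leaving varies.
# All field names and values here are illustrative assumptions.
OBJECTIVE_FACTS = {"location": "car in a parking lot", "duration_min": 15,
                   "weather": "mild", "child_age": 6}

REASONS = ["accidental (parent knocked unconscious)", "work",
           "volunteering", "relaxing", "meeting an illicit lover"]

vignettes = [dict(OBJECTIVE_FACTS, reason=r) for r in REASONS]

# Sanity check: every condition shares identical objective risk features,
# so any difference in danger ratings must reflect the moral manipulation.
assert all({k: v[k] for k in OBJECTIVE_FACTS} == OBJECTIVE_FACTS
           for v in vignettes)
```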
Do these beliefs actually contradict one another?
Strictly logically: not necessarily. Moral and factual claims are different categories (ought vs is), so there need be no formal contradiction. Example: “It is morally wrong to leave a child alone” and “Children left briefly in locked cars are at low objective risk” can both be true without logical inconsistency.
But psychologically, they function as inconsistent for many people, because commonsense consequentialist reasoning treats moral rightness as partly a function of outcomes. When people implicitly hold both a deontological intuition (some acts are wrong regardless of consequences) and, simultaneously, consequentialist heuristics (acts that produce harm are wrong), the two can feel like they contradict. That felt contradiction is what motivates coherence-seeking adjustments. Liu & Ditto emphasize that even “deontological” judgments are often buttressed by consequentialist beliefs recruited after the fact, so the perceived contradiction is real in ordinary cognition, even if it is not a strict logical one.
What do Liu and Ditto conclude from Study 3, and why would Studies 1 and 2 alone be unable to justify that conclusion?
Conclusion from Study 3: Changing people’s deontological moral evaluation (via short essays that argued for or against the inherent morality of capital punishment but said nothing about consequences) produced corresponding changes in people’s factual beliefs about capital punishment’s benefits and costs. In other words, moral evaluation causally influences descriptive belief about consequences. They report that changes in moral beliefs partially mediated essay effects on cost–benefit beliefs.
Why Studies 1 & 2 alone were insufficient: Studies 1 and 2 were correlational: they showed a strong association (people who judged an act deontologically immoral also tended to rate its outcomes as worse) but could not establish direction of causation — it could be that factual beliefs drove moral judgments (a straightforward consequentialist inference), or that a third variable drove both. Study 3 provides the crucial causal test: manipulate only moral framing and observe fact-belief change. That experimental manipulation is what allows Liu & Ditto to argue that moral evaluations can shape factual beliefs (not only the reverse).
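To make the inferential point concrete, here is a minimal simulation sketch. The generative structures and coefficients are illustrative assumptions, not Liu & Ditto’s data: two opposite causal structures yield the same observational correlation, and only an intervention on moral evaluation (analogous to the Study 3 essays) distinguishes them.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

def world_facts_drive_morals():
    # Consequentialist inference: factual beliefs cause moral evaluation.
    facts = rng.normal(size=n)
    morals = 0.7 * facts + rng.normal(size=n)
    return facts, morals

def world_morals_drive_facts(essay_shift=0.0):
    # Moral coherence: moral evaluation causes factual beliefs.
    morals = rng.normal(size=n) + essay_shift
    facts = 0.7 * morals + rng.normal(size=n)
    return facts, morals

# Both worlds produce the same observational signature (Studies 1-2).
for world in (world_facts_drive_morals, world_morals_drive_facts):
    facts, morals = world()
    print(world.__name__, round(float(np.corrcoef(facts, morals)[0, 1]), 2))

# Study 3's logic: intervene on morals alone and watch factual beliefs.
# Facts move only in the morals -> facts world.
facts_ctrl, _ = world_morals_drive_facts(essay_shift=0.0)
facts_shift, _ = world_morals_drive_facts(essay_shift=1.0)
print("fact-belief shift:", round(float(facts_shift.mean() - facts_ctrl.mean()), 2))
```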
According to these authors, why and how do judgments of danger and judgments of moral wrongness each serve to inflate the other, and how is this escalating feedback loop supposed to explain how attitudes have changed so much in only a single generation?
Mechanism of mutual inflation: Moral condemnation gives people a motive to view the action as harmful (to justify the condemnation); seeing an action as harmful in turn increases moral outrage and perceived blameworthiness. These two effects feed back on each other: moral outrage inflates danger estimates, inflated danger estimates imply greater harm, and greater perceived harm fuels more moral outrage. This reciprocal amplification is exactly what Thomas et al. describe: the inference does not run only from perceived danger to moral condemnation; moral condemnation also drives danger perceptions.
Why this can produce rapid attitude change: The authors suggest that cultural changes (media amplification of rare but vivid child-harm tragedies, shifting social norms about “good” parenting) seed an initial shift in moral evaluation: leaving children alone becomes more morally frowned upon. Once the moral evaluation shifts, people start inflating perceived risks to justify the new moral stance. Those inflated risk perceptions are then visible to others (through conversation, policing, reporting), generating more outrage and normalizing the stricter standard. The result is a positive feedback loop that can magnify a small initial change into a large cultural shift within a single generation, with media salience continually making risk seem higher than it is and feeding further condemnation.
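A toy dynamical sketch of this loop, with a gain, shock size, and time scale that are illustrative assumptions rather than estimates from the paper, shows how a modest initial shock can compound into a large shift over a generation:

```python
# Toy model: outrage raises perceived danger, which raises outrage.
danger, outrage = 1.0, 1.0   # arbitrary baseline levels
gain = 0.15                  # assumed cross-amplification per "round" of discourse
media_shock = 0.5            # one vivid, widely reported tragedy

outrage += media_shock       # the seed: moral evaluation shifts first
for year in range(1, 26):    # roughly one generation of mutual reinforcement
    danger += gain * outrage     # condemnation inflates perceived risk
    outrage += gain * danger     # perceived risk feeds further condemnation
    if year % 5 == 0:
        print(f"year {year:2d}: perceived danger {danger:6.2f}, outrage {outrage:6.2f}")
```

Because each quantity amplifies the other, growth is roughly exponential; the sketch makes the qualitative point, not a calibrated prediction.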
What does this show us about the kind of "fitting together" that we seem to be trying to achieve?
It shows that people pursue explanatory/motivational coherence rather than strict philosophical consistency. Cognitive-dynamics models (parallel constraint satisfaction / explanatory coherence) predict that beliefs, feelings, and judgments are adjusted jointly toward a configuration that maximizes internal fit (a toy version is sketched after the list below). The fitting together is pragmatic and affect-laden:
People prefer that their moral evaluation and their beliefs about consequences mutually support one another (so one can say “I’m right and the facts back me up”).
The process is bi-directional: moral evaluations can warp fact judgments, and fact beliefs can be used to justify moral stances.
The goal is not truth per se but psychological consonance and defensibility (especially when convictions are strong or one believes oneself informed). Liu & Ditto explicitly frame the effect within explanatory-coherence and motivated-reasoning concepts.
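A minimal constraint-satisfaction sketch in the spirit of Thagard-style explanatory coherence makes the settling process concrete. The units, link weights, and update rule below are illustrative assumptions, not a model either paper fits:

```python
import numpy as np

units = ["act_is_wrong", "act_has_benefits", "act_has_costs"]
# Positive weight = mutual support; negative weight = the beliefs clash.
W = np.array([[ 0.0, -0.5,  0.5],   # wrongness clashes with benefits, fits costs
              [-0.5,  0.0, -0.3],
              [ 0.5, -0.3,  0.0]])
a = np.array([0.9, 0.6, 0.1])       # start: strong moral intuition,
                                    # moderate belief in benefits

for _ in range(100):                # settle toward maximal internal fit
    a = np.clip(a + 0.1 * (W @ a), 0.0, 1.0)
    a[0] = 0.9                      # moral conviction is clamped (held fixed)

for name, act in zip(units, a):
    print(f"{name:16s} {act:.2f}")
# Belief in benefits collapses toward 0 and belief in costs rises toward 1:
# the factual beliefs are reshaped to cohere with the fixed moral intuition.
```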
Do Liu and Ditto's results suggest that becoming better informed about an issue will relax the demand for moral coherence among the resulting beliefs?
No — the data point the other way. Liu & Ditto find that self-reported feeling of being informed (and moral conviction) increases the coordination between moral evaluation and factual beliefs. That is, people who say they’re more informed show stronger moral–factual alignment (consistent with other literature showing political knowledge can amplify partisan motivated reasoning). So their results suggest greater (perceived) informedness does not relax the demand for coherence; it often strengthens motivated fitting of facts to morals.
If they are right, explain why we should or should not trust the intuitions of police officers, prosecutors, or judges regarding when a child left alone is in danger and how much?
Short answer: Be cautious. Both empirical papers show systematic, non-truth-tracking influence of moral evaluation on factual risk judgments. That implies:
Risk of bias: Unreflective intuitions (including those of laypeople and officials) about danger can be contaminated by moral blame. If someone feels moral outrage at the parent, they are likely to overestimate the objective risk. Thomas et al. warn that bystanders, police, prosecutors, and judges may be imposing moral judgments on what should be an evidence-based risk assessment.
Practical implication: We should not rely solely on unaided intuition. Where possible, decisions that carry legal consequences for parents should be informed by objective indicators of risk (contextual features that actually predict harm: age of child, environmental hazards, duration and conditions of absence, child’s capacity and preparation, availability of phone/backup, local crime statistics, weather, etc.), standardized risk-assessment protocols, and training that explicitly separates moral blame from empirical risk assessment (a toy version of such a protocol is sketched after this list).
But don’t throw the baby out with the bathwater: Experienced professionals may have useful domain knowledge (patterns of neglect or real risk indicators) that novices lack. The key is procedural safeguards: make risk judgments transparent, require evidence, use checklists or objective thresholds when legal penalties are at stake, and be wary when moral outrage is intense. Liu & Ditto’s and Stanford et al.’s results suggest training and institutional rules that force separation of moral evaluation and empirical assessment would reduce wrongful sanctions based on inflated danger impressions.
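As a purely hypothetical illustration (the factors, weights, and case fields below are my assumptions, not a validated instrument from either paper), a protocol of this kind could score only empirical risk factors while deliberately ignoring the morally loaded reason for the absence:

```python
# Hypothetical checklist: weights reflect assumed empirical risk factors.
RISK_WEIGHTS = {
    "child_under_4": 3,
    "extreme_weather": 3,
    "near_traffic_or_water": 2,
    "absence_over_30_min": 2,
    "no_phone_or_backup_adult": 1,
}

def risk_score(case: dict) -> int:
    """Sum weights for the risk factors present; the reason for leaving is never consulted."""
    return sum(w for factor, w in RISK_WEIGHTS.items() if case.get(factor))

case = {"child_under_4": False, "extreme_weather": False,
        "near_traffic_or_water": False, "absence_over_30_min": False,
        "no_phone_or_backup_adult": True,
        "reason_for_leaving": "meeting an illicit lover"}  # present but unscored

print(risk_score(case))  # 1 -- low, however blameworthy the reason
```

The design choice doing the work is simple: the morally charged field appears in the case description but is never read by the scoring function, so outrage about the reason cannot leak into the risk estimate.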
If Liu and Ditto are right, why are genuine moral dilemmas comparatively rare?
Because of motivated coherence processes: when faced with an apparent dilemma (deontological intuition vs. consequentialist calculus), people commonly resolve the dissonance by changing one set of beliefs so that the conflict disappears, most often by bringing facts and evaluations into alignment (e.g., downgrading the perceived benefits of an act one already feels is wrong). This psychological pressure toward a coherent belief network means people rarely remain in a prolonged state of unresolved dilemma; the felt conflict is resolved via post-hoc adjustments. Principled “I accept the moral cost anyway” stances are therefore uncommon, because they require resisting the cognitive pull to recruit supporting facts. Liu & Ditto argue that coherence-seeking explains why such principled moral dissent (accepting a bad outcome for the sake of principle) is infrequent.