Moral Judgment as Categorization (MJAC) - Explanation and Development

Author

Cillian McHugh

Published

August 2, 2020

In what follows I will attempt to provide an overview of my recently published theoretical work, covering the main ideas behind the theory, while also describing the process of its development, from initial conception to final publication.


(presentation to IMP seminar series 31st May 2021)



(Poster presentation to SPSP 2022)

Background

I never intended to develop a new theory of moral judgement; it just sort of happened. When I started my PhD I wanted to understand how people make moral judgements, and when I studied the available approaches they didn’t quite provide the answers I was looking for. For example, Prinz (2005) overstates the role of emotion in moral judgements: ‘Emotions, I will suggest are perceptions of our bodily states. To recognise the moral value of an event is, thus, to perceive the perturbation that it causes’ (Prinz 2005, 99). Haidt (2001) draws on analogy rather than providing an account of the underlying cognitive mechanisms, e.g., describing intuitions as phonemes (Haidt 2001, 827), or suggesting that the ‘brain has a kind of gauge’, a ‘like-ometer’ (Haidt and Björklund 2008, 187). Greene (2008) distinguishes between cognition and emotion, though these are not well defined. The Theory of Dyadic Morality (Gray, Young, and Waytz 2012; Schein and Gray 2018) provides an account that is based on the content of moral judgements, rather than on the underlying cognitive mechanisms.

So, having failed to find the answers I was looking for in the morality literature, I broadened my reading, looking at the emotion literature and at research on learning and knowledge acquisition. This led me to the work of Lisa Feldman Barrett and Lawrence Barsalou (Barrett, Wilson-Mendenhall, and Barsalou 2014; Barsalou 2003). Initially I thought this provided a potential framework for understanding the role of emotions in moral judgement; however, when I attempted to apply this framework, it became apparent that it also provided a framework for understanding moral judgment more generally. This realization came from an extensive survey of both the categorization and morality literatures. Barsalou (2003) defended his theory of categorization by highlighting the dynamic and context-dependent nature of categorization, and the same kinds of variability and context dependency are present in the morality literature. These parallels are shown in Table 1.

Table 1: Parallels between Morality and Categorization

Phenomenon          Categorization   Morality
Variability
  Interpersonal     ✓                ✓
  Intrapersonal     ✓                ✓
Context
  Culture           ✓                ✓
  Social            ✓                ✓
  Development       ✓                ✓
  Emotion           ✓                ✓
  Framing           ✓                ✓
  Order/recency     ✓                ✓
Other
  Skill             ✓                ✓
  Typicality        ✓                ✓*
  Dumbfounding      ✓**              ✓

Note:
* At the time of development we had only hypothesized that typicality in moral judgments would be observed, but more recent work has confirmed this hypothesis (Gray and Keeney 2015; Schein and Gray 2018).
** While not directly referred to as dumbfounding, a similar phenomenon has been observed in non-moral contexts (Boyd 1989, 1991; Keil 1989; Keil, Rozenblit, and Mills 2004).

Moral Categorization

The idea behind Moral Judgement as Categorization (MJAC, pronounced em-jack) is that making moral judgements involves the same cognitive processes as categorization more generally. The approach has three core premises and two core predictions, as follows:

Premises

  • The making of a moral judgment is a process of categorizing something as morally right or morally wrong (or indeed as not morally relevant).

  • The process of categorization involved in the making of a moral judgment is a domain-general one (not unique or specific to the moral domain).

  • Moral categorization occurs as part of ongoing goal-directed behavior and thus is highly dynamic and sensitive to a range of contextual influences.

Core Predictions

  • Stability emerges through continued and consistent repetition and rehearsal

  • Robustness emerges through consistency across multiple contexts

According to MJAC, what others describe as moral ‘intuitions’ (e.g., Haidt 2001) are categorizations that have become highly skilled or automatic, through practice.

Understanding MJAC

Two examples help to illustrate the assumptions about categorization processes in which MJAC is grounded.

Things to Pack Into a Suitcase

Consider the formation of the category Things to Pack Into a Suitcase (Barsalou 1991). Items that fall into this category (toothbrush, spare clothes, etc.) are not generally categorized as such on a day-to-day basis. The category emerges as required: when a person needs to pack things into a suitcase. A person who travels frequently will be able to form the category Things to Pack Into a Suitcase more readily, because of repetition and the emerging skill. Barsalou (2003) argued that categorization more generally occurs through the same process: what we think of as ‘stable’ categories are categorizations that have become habitualized or skilled as part of goal-directed activity.

Figure 1: Things to Pack Into a Suitcase

Fruit

To illustrate how our everyday categorizations are shaped by goal-directed activity, consider the category Fruit. Typical members of the category include items such as apples and oranges. Botanically, a fruit is the part of a plant that contains the seeds, and by this definition items such as the tomato also fall into this category. However, we do not generally interact with tomatoes in the same way as we interact with other Fruit, so while the tomato is defined as a Fruit, we generally do not think of it as one.

Figure 2: Fruit

We extend this basic process to moral categories. When people encounter a behavior in certain circumstances, they may learn that it is morally wrong, and this behavior becomes associated with the category morally wrong. Each subsequent time this behavior is encountered in a context in which its moral value is relevant or it is identified as a member of the category morally wrong (either explicitly or implicitly), the person’s skill in deploying this category is strengthened. This same process holds for morally right. With the increasing frequency of such categorizations, they become increasingly habitual and automatic (see Barsalou 2003).
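To make this learning dynamic concrete, the toy sketch below (in Python) simulates the two core predictions in a deliberately simplified way. It is purely illustrative: the representation (a numeric association strength plus a set of encountered contexts) and the update rule are my assumptions for the example, not a model proposed by MJAC.

```python
# Toy illustration of MJAC's two core predictions (illustrative assumptions only):
# stability from repeated, consistent categorization; robustness from consistency
# across multiple contexts.
from collections import defaultdict

class MoralCategorizer:
    def __init__(self):
        self.strength = defaultdict(float)  # behavior -> strength of "morally wrong" link
        self.contexts = defaultdict(set)    # behavior -> contexts where it was categorized

    def categorize_as_wrong(self, behavior, context):
        """Each explicit or implicit categorization strengthens the skill."""
        self.strength[behavior] += 1.0
        self.contexts[behavior].add(context)

    def stability(self, behavior):
        # Stability emerges through continued and consistent repetition.
        return self.strength[behavior]

    def robustness(self, behavior):
        # Robustness emerges through consistency across multiple contexts.
        return len(self.contexts[behavior])

agent = MoralCategorizer()
for context in ["home", "school", "media", "peers"]:
    for _ in range(25):                               # repeated, consistent exposure
        agent.categorize_as_wrong("stealing", context)
agent.categorize_as_wrong("jaywalking", "peers")      # rare, single-context exposure

print(agent.stability("stealing"), agent.robustness("stealing"))      # 100.0 4
print(agent.stability("jaywalking"), agent.robustness("jaywalking"))  # 1.0 1
```

On this sketch, the categorization of stealing as morally wrong ends up both stable (heavily rehearsed) and robust (consistent across contexts), which is the sense in which MJAC treats such well-practised categorizations as the ‘intuitions’ described by other accounts.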

Applying MJAC

This approach to understanding moral judgements provides a novel perspective on how we understand particular moral phenomena.

Moral Dumbfounding

Moral dumbfounding occurs when people defend a moral judgment even though they cannot provide a reason to support it (Haidt, Björklund, and Murphy 2000; Haidt 2001; McHugh et al. 2017, 2020). Moral dumbfounding is most frequently observed for harmless taboo behaviors (consensual incest, cannibalism involving a body that is already dead). The taboo nature of these topics means that they are consistently identified as morally wrong without much discussion [the Scottish public petitions committee notably dismissed a call to legalize incest with no discussion at all; see Sim (2016)]. This leads to a high degree of stability in categorizing them as wrong. At the same time, the taboo nature of these behaviors prevents them from being discussed, so a typical encounter with such a behavior involves little more than identifying it as wrong, possibly with an expression of disgust, and changing the subject. This combination of a highly stable categorization and an absence of articulated reasons logically leads to moral dumbfounding: people reliably judge the behavior to be wrong, but have rarely, if ever, rehearsed reasons for that judgment.
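The dumbfounding explanation can be expressed in the same toy terms. The sketch below is again an illustrative assumption of mine rather than the article’s model: encounters strengthen the ‘morally wrong’ categorization, but reasons are only stored when a behavior is actually discussed, so a heavily rehearsed but never-discussed taboo leaves nothing to retrieve when a justification is requested.

```python
# Toy sketch of the dumbfounding logic (illustrative assumptions throughout).

def encounter(memory, behavior, reason=None):
    """A typical encounter: identify the behavior as wrong and, if it is
    discussed, store the reason offered; for taboo topics no reason is given."""
    entry = memory.setdefault(behavior, {"strength": 0.0, "reasons": set()})
    entry["strength"] += 1.0          # categorization skill strengthens every time
    if reason is not None:
        entry["reasons"].add(reason)

memory = {}
for _ in range(50):
    encounter(memory, "harmless taboo")                          # judged wrong, never discussed
for _ in range(50):
    encounter(memory, "stealing", reason="it harms the victim")  # judged wrong, with discussion

def justify(memory, behavior):
    entry = memory[behavior]
    if entry["reasons"]:
        return "Wrong because: " + ", ".join(entry["reasons"])
    # A strong, well-rehearsed judgment with no stored reasons: dumbfounding.
    return f"It's just wrong (strength {entry['strength']:.0f}, no reasons available)"

print(justify(memory, "stealing"))
print(justify(memory, "harmless taboo"))
```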

Categorizing people versus categorizing actions

MJAC predicts people’s judgements will focus on the actor or on the action depending on the situation. Consider the following two scenarios:

  1. You find out that a colleague has been fired for stealing from your employer—they have been bringing home office equipment for their own personal use, and they have been exaggerating their expense claims.
  2. A close friend of yours reveals to you that they have been stealing from their employer—they have been bringing home office equipment for their own personal use, and they have been exaggerating their expense claims.

MJAC predicts that people will be more lenient in their judgments of the person in the second scenario than in the first scenario. Indeed, this is consistent with what is found in the literature (Heiphetz and Craig 2020; Forbes 2018; Lee and Holyoak 2020; Hofmann et al. 2014; Weidman et al. 2020).

A further prediction is that for the second scenario, people will focus on the action rather than the actor. People are motivated to see close others positively (Forbes 2018; Murray, Holmes, and Griffin 1996a, 1996b). If faced with a situation in which a close other did something wrong, people would try to avoid making a negative judgment of the person (Ditto, Pizarro, and Tannenbaum 2009; Murray, Holmes, and Griffin 1996a, 1996b). One way to avoid this is to focus on the action rather than the actor. Relatedly, for favorable judgments, we expect the opposite effect. If a close other does something praiseworthy, people are likely to focus on the actor rather than the action, helping to maintain a positive view of the close other (Forbes 2018; Murray, Holmes, and Griffin 1996a, 1996b).

A key goal of moral categorization is to distinguish ‘good’ people from ‘bad’ people, to help us navigate the social world, and effectively guide our social interactions. Learning about people’s moral character or moral ‘essence’, enables us to establish relationships with ‘good’ people, and to limit our interactions with ‘bad’ people (or at least treat interactions with ‘bad’ people with caution). This means that for strangers, we are likely to show a bias for categorizing the actor rather than the action (Uhlmann, Pizarro, and Diermeier 2015; Dunlea and Heiphetz 2020; Siegel, Crockett, and Dolan 2017; Siegel et al. 2018).

Contrasting MJAC with Existing Approaches

Perhaps the most important difference between MJAC and existing approaches is that the focus of MJAC is on the cognitive processes, rather than on the content of moral judgements. According to the dominant dual-process approaches, different types of moral judgements are grounded in different types of cognitive processes. Deontological (principled or rule-based) moral judgements are grounded in intuitive/automatic/emotional/model-free processes, while utilitarian or consequentialist judgements (where the aim is to maximise positive outcomes) are grounded in deliberative/controlled/cognitive/model-based processes. These different processes mean that our judgements are susceptible to specific kinds of contextual influences, e.g., how personal/impersonal an action is (Greene 2008, 2016), the relative amount of emotionality (Byrd and Conway 2019; Conway et al. 2018; Conway and Gawronski 2013; Goldstein-Greenwood et al. 2020), or whether the focus is on the action or the outcome (Cushman 2013; Crockett 2013). An overview of these approaches is displayed in Figure 3. Despite these important insights, there are a range of other context effects known to influence moral judgements that are not accounted for by these models. These other context effects are detailed in the green boxes in Figure 3.[1]

Figure 3: A sketch of dual-process approaches (contextual influences not directly addressed by these approaches highlighted in green)

MJAC does not make assumptions based on the content of moral judgements. However, MJAC predicts a distinction between habitual (or skilled, or intuitive) responses, and deliberative (or controlled) responses. This distinction does not make assumptions about specific content of moral judgments. However, thinking about the contexts in which deontological vs utilitarian judgements are generally made, it makes sense that deontological rules might become more habitual (think It’s wrong to hurt people, Thou shalt not kill, You shouldn’t hit your sister), while utilitarian judgements may require more deliberation (e.g., how should we divide these resources in the fairest manner?). MJAC therefore integrates the insights of dual-process accounts, while also allowing for greater variability, and a more diverse range of context effects. Figure 4 outlines the various influences on moral judgement according to MJAC.

Figure 4: Influences on moral judgements according to MJAC

These differences in assumptions between MJAC and other approaches lead to differences in explanations and predictions. Above I have outlined moral dumbfounding as an example of such an explanation. The differences in assumptions and explanations are listed in Table 2. To avoid making this post too long and drawn out, I will not go into detail on these differences, however I point you to the relevant section in the main article for more detailed discussion on this.

Table 2: Contrasting MJAC with other approaches

Columns (left to right): Greene’s dual-process theory | “Soft” dual-process theory | Model-based accounts | TDM | MJAC

Assumptions:
  Content: Deontology-utilitarianism / personal-impersonal | Deontology-utilitarianism | Action-outcome | Harm-based, dyadic | Dynamical, context-dependent, goal-directed
  Moral “Essence”: (Implicit) | (Not discussed) | (Implicit) | Explicit | Rejected
  Processes: Dual-processes | Dual-processes | Dual-processes | (Implicitly dual-process) | Continuum
  Mechanisms: Intuition (emotion) / cognition | Emotion / cognition | Model-based / model-free | Categorization (unspecified) | Type-token interpretation

Phenomena explained:
  Dumbfounding (harmless wrongs): (Not discussed) | (Not discussed) | Explained | Denied | Explained: learning history
  Wrongless harms: (Not discussed) | (Not discussed) | (Not discussed) | Denied | Explained: learning history
  Typicality: (Not discussed) | (Not discussed) | (Not discussed) | Matching of “prototype” | Context-dependent
  Contextual influences: Specific: personal-impersonal | Specific: emotion / cognition | Specific: action-outcome | Specific: harm-based

In the opening sections I outlined two general predictions of MJAC. We have also identified various specific predictions (e.g., the categorizing of actors vs actions described above). For brevity I do not go into detail on these specific predictions, but point you to the main article for this more detailed discussion (here is probably a good place to start).

Developing and Publishing MJAC

The road from initial conception to publication of MJAC was quite long. The initial idea was formed by the Autumn of 2014. I presented it at my upgrade panel for my PhD in the Spring of 2015, and at a morality Summer School in Rotterdam in August 2015. It soon became apparent that the ideas behind MJAC were too broad to form the basis of a PhD, so the theory was shelved for a while in favour of something more concrete. My PhD thesis ended up focusing on Moral Dumbfounding (though I was able to include an overview of MJAC in a final Epilogue chapter). I graduated from my PhD in 2018, and that winter I revisited MJAC to try to get it published.

Rejections

By July 2019 it was ready for submission. It received a fairly quick desk reject from the first outlet.

It went to review in the second journal we tried. I’ve had some difficult rejections, and rejections that I disagreed with, but this one was really rough. Reviewer 1 really did not like the idea. The review from Reviewer 1 contained some of the harshest review comments I have seen. A few excerpts from the (very long) review are below.

The authors compile a great deal of research findings. They argue that moral judgment is about categorization, a position that flies in the face of and does not account for the decades of research on moral judgment in developmental and educational psychology.

The paper is incoherent in narrative, inadequate and misleading in explanation, and overall does not advance the field.

The paper seems to bounce from one thing to another without a clear, coherent story. Perhaps their thin version of moral judgment is a type of categorization, one based on perceiver making the other into a dead object. But so what? What does it buy the field? How does it contribute to scholarship?

The experience with Reviewer 1 was a bit of a blow to the confidence. We brought it to our lab and made some fairly minor changes before sending it out again. Just before the Christmas break in 2019 I submitted it to Perspectives on Psychological Science (fully expecting to receive another reject). In February 2020 I went to SPSP in New Orleans, where I was also due to present MJAC as a poster at the Justice and Morality pre-conference. Upon landing I checked my email, and was very surprised to have received an R&R.

Revisions

The reviews were really fair, but extensive. The original framing relied heavily on the parallels provided in Table 1 above. The reviewers were not convinced by this argument. We were encouraged to clearly differentiate MJAC from existing approaches, with instructions to identify and better engage with specific dual-process accounts, and with the Theory of Dyadic Morality (Schein and Gray 2018).

So the revisions were really quite tough. I approached them systematically, addressing each comment as well as possible, and documenting how the changes made addressed each comment. Many of the comments raised fairly deep conceptual questions, requiring extensive reading before I could begin attempting to address them. And, naturally, I shared my progress on social media.

I was also able to call on social media for help in addressing specific issues that came up while doing the revisions. The replies came in quickly, and really provided excellent resources to help with the revisions.

Following weeks of work, we had finally addressed all the reviewer comments. I was sick of it at this stage, and ready to submit. Unfortunately, the extent of the revisions meant that the manuscript was too long to submit. I noted that there is technically no word limit for submissions at Perspectives on Psychological Science, but my co-authors wisely convinced me to ask for an extension so we could cut down the words to something more manageable.

So we spent another few weeks cutting words, and restructuring sections to be more streamlined (huge credit to coauthors in this endeavour). And eventually we submitted the revised manuscript. It was unrecognisable from the original submission.

We submitted the revised version in June 2020, and received a decision of Conditional Accept (with minor revisions) in September. It was fully accepted in November 2020, and published online in July 2021 (almost 7 years after the original idea was formed).

Key Points in the Revisions

I think one of the most important changes that came from the review process was the added clarity of the argument. The original submission simply presented the approach; we did not articulate a clear problem that the approach was solving. This was a tough question to address. The range of existing approaches means that going into detail on the relative strengths and weaknesses of each is not feasible, while a more general treatment risks over-generalizing and potentially misrepresenting some approaches. As the revisions progressed, a core argument emerged: the variability and context dependency of moral judgements is a challenge that is not well addressed in the existing literature, and because dynamism and context dependency are core assumptions of MJAC, it is well positioned to address this challenge.

References

Barrett, Lisa Feldman, Christine D. Wilson-Mendenhall, and Lawrence W. Barsalou. 2014. “A Psychological Construction Account of Emotion Regulation and Dysregulation: The Role of Situated Conceptualizations.” In Handbook of Emotion Regulation, edited by James J. Gross, 447–65. New York: Guilford Press.
Barsalou, Lawrence W. 1991. “Deriving Categories to Achieve Goals.” In The Psychology of Learning and Motivation: Advances in Research and Theory, edited by Gordon H. Bower, 27:76–121. San Diego: Academic Press.
———. 2003. “Situated Simulation in the Human Conceptual System.” Language and Cognitive Processes 18 (5-6): 513–62. https://doi.org/10.1080/01690960344000026.
Boyd, Richard. 1989. “What Realism Implies and What It Does Not.” Dialectica 43 (1-2): 5–29. https://doi.org/10.1111/j.1746-8361.1989.tb00928.x.
———. 1991. “Realism, Anti-Foundationalism and the Enthusiasm for Natural Kinds.” Philosophical Studies 61 (1-2): 127–48. https://doi.org/10.1007/BF00385837.
Byrd, Nick, and Paul Conway. 2019. “Not All Who Ponder Count Costs: Arithmetic Reflection Predicts Utilitarian Tendencies, but Logical Reflection Predicts Both Deontological and Utilitarian Tendencies.” Cognition 192 (November): 103995. https://doi.org/10.1016/j.cognition.2019.06.007.
Cameron, C. Daryl, B. Keith Payne, and John M. Doris. 2013. “Morality in High Definition: Emotion Differentiation Calibrates the Influence of Incidental Disgust on Moral Judgments.” Journal of Experimental Social Psychology 49 (4): 719–25. https://doi.org/10.1016/j.jesp.2013.02.014.
Christensen, Julia F., Albert Flexas, Margareta Calabrese, Nadine K. Gut, and Antoni Gomila. 2014. “Moral Judgment Reloaded: A Moral Dilemma Validation Study.” Frontiers in Psychology 5: 1–18. https://doi.org/10.3389/fpsyg.2014.00607.
Christensen, Julia F., and A. Gomila. 2012. “Moral Dilemmas in Cognitive Neuroscience of Moral Decision-Making: A Principled Review.” Neuroscience & Biobehavioral Reviews 36 (4): 1249–64. https://doi.org/10.1016/j.neubiorev.2012.02.008.
Conway, Paul, and Bertram Gawronski. 2013. “Deontological and Utilitarian Inclinations in Moral Decision Making: A Process Dissociation Approach.” Journal of Personality and Social Psychology 104 (2): 216–35. https://doi.org/10.1037/a0031021.
Conway, Paul, Jacob Goldstein-Greenwood, David Polacek, and Joshua D. Greene. 2018. “Sacrificial Utilitarian Judgments Do Reflect Concern for the Greater Good: Clarification via Process Dissociation and the Judgments of Philosophers.” Cognition 179 (October): 241–65. https://doi.org/10.1016/j.cognition.2018.04.018.
Crockett, Molly J. 2013. “Models of Morality.” Trends in Cognitive Sciences 17 (8): 363–66. https://doi.org/10.1016/j.tics.2013.06.005.
Cushman, Fiery A. 2013. “Action, Outcome, and Value: A Dual-System Framework for Morality.” Personality and Social Psychology Review 17 (3): 273–92. https://doi.org/10.1177/1088868313495594.
Ditto, Peter H., David A. Pizarro, and David Tannenbaum. 2009. “Motivated Moral Reasoning.” In Psychology of Learning and Motivation, edited by Brian H. Ross, 50:307–38. Academic Press. https://doi.org/10.1016/S0079-7421(08)00410-6.
Dunlea, James P., and Larisa A. Heiphetz. 2020. “Children’s and Adults’ Understanding of Punishment and the Criminal Justice System.” Journal of Experimental Social Psychology 87 (March): 103913. https://doi.org/10.1016/j.jesp.2019.103913.
Everett, Jim A. C., Nadira S. Faber, Julian Savulescu, and Molly J. Crockett. 2018. “The Costs of Being Consequentialist: Social Inference from Instrumental Harm and Impartial Beneficence.” Journal of Experimental Social Psychology 79 (November): 200–216. https://doi.org/10.1016/j.jesp.2018.07.004.
Everett, Jim A. C., David A. Pizarro, and Molly J. Crockett. 2016. “Inference of Trustworthiness from Intuitive Moral Judgments.” Journal of Experimental Psychology: General 145 (6): 772–87. https://doi.org/10.1037/xge0000165.
Forbes, Rachel Chubak. 2018. “When the Ones We Love Misbehave: Exploring Moral Processes in Intimate Bonds.” Master of Arts thesis, Toronto: University of Toronto.
Goldstein-Greenwood, Jacob, Paul Conway, Amy Summerville, and Brielle N. Johnson. 2020. “(How) Do You Regret Killing One to Save Five? Affective and Cognitive Regret Differ After Utilitarian and Deontological Decisions.” Personality and Social Psychology Bulletin 46 (9): 1303–17. https://doi.org/10.1177/0146167219897662.
Gray, Kurt James, and Jonathan E. Keeney. 2015. “Impure or Just Weird? Scenario Sampling Bias Raises Questions About the Foundation of Morality.” Social Psychological and Personality Science 6 (8): 859–68. https://doi.org/10.1177/1948550615592241.
Gray, Kurt James, Liane Young, and Adam Waytz. 2012. “Mind Perception Is the Essence of Morality.” Psychological Inquiry 23 (2): 101–24. https://doi.org/10.1080/1047840X.2012.651387.
Greene, Joshua David. 2008. “The Secret Joke of Kant’s Soul.” In Moral Psychology Volume 3: The Neurosciences of Morality: Emotion, Brain Disorders, and Development, 35–79. Cambridge, MA: The MIT Press.
———. 2016. “Why Cognitive (Neuro) Science Matters for Ethics.” In Moral Brains: The Neuroscience of Morality, edited by S. Matthew Liao, 119–49. Oxford University Press.
Greene, Joshua David, R. B. Sommerville, L. E. Nystrom, J. M. Darley, and J. D. Cohen. 2001. “An fMRI Investigation of Emotional Engagement in Moral Judgment.” Science 293 (5537): 2105–8. https://doi.org/10.1126/science.1062872.
Haidt, Jonathan. 2001. “The Emotional Dog and Its Rational Tail: A Social Intuitionist Approach to Moral Judgment.” Psychological Review 108 (4): 814–34. https://doi.org/10.1037/0033-295X.108.4.814.
Haidt, Jonathan, and Fredrik Björklund. 2008. “Social Intuitionists Answer Six Questions about Moral Psychology.” In Moral Psychology Volume 2, The Cognitive Science of Morality: Intuition and Diversity, edited by Walter Sinnott-Armstrong, 181–217. London: MIT.
Haidt, Jonathan, Fredrik Björklund, and Scott Murphy. 2000. “Moral Dumbfounding: When Intuition Finds No Reason.” Unpublished Manuscript, University of Virginia.
Heiphetz, Larisa A., and Maureen A. Craig. 2020. “Dehumanization and Perceptions of Immoral Intergroup Behavior.” Oxford Studies in Experimental Philosophy. https://doi.org/10.7916/d8-em68-fp60.
Hofmann, Wilhelm, Daniel C. Wisneski, Mark J. Brandt, and Linda J. Skitka. 2014. “Morality in Everyday Life.” Science 345 (6202): 1340–43. https://doi.org/10.1126/science.1251560.
Keil, Frank C. 1989. Concepts, Kinds, and Cognitive Development. Vol. xv. The MIT Press Series in Learning, Development, and Conceptual Change. Cambridge, MA, US: The MIT Press.
Keil, Frank C., Leonid Rozenblit, and Candice Mills. 2004. “What Lies Beneath? Understanding the Limits of Understanding.” In Thinking and Seeing: Visual Metacognition in Adults and Children, edited by D. T. Levin, 227–49. MIT Press.
Lee, Junho, and Keith J. Holyoak. 2020. “‘But He’s My Brother’: The Impact of Family Obligation on Moral Judgments and Decisions.” Memory & Cognition 48 (1): 158–70. https://doi.org/10.3758/s13421-019-00969-7.
McHugh, Cillian, Marek McGann, Eric R. Igou, and Elaine L. Kinsella. 2017. “Searching for Moral Dumbfounding: Identifying Measurable Indicators of Moral Dumbfounding.” Collabra: Psychology 3 (1): 1–24. https://doi.org/10.1525/collabra.79.
———. 2020. “Reasons or Rationalizations: The Role of Principles in the Moral Dumbfounding Paradigm.” Journal of Behavioral Decision Making 33 (3): 376–92. https://doi.org/10.1002/bdm.2167.
Mikhail, John. 2000. “Rawls’ Linguistic Analogy: A Study of the ‘Generative Grammar’ Model of Moral Theory Described by John Rawls in ‘A Theory of Justice’.” PhD dissertation, Cornell University. SSRN Scholarly Paper, Rochester, NY: Social Science Research Network.
Murray, Sandra L, John G Holmes, and Dale W Griffin. 1996a. “The Benefits of Positive Illusions: Idealization and the Construction of Satisfaction in Close Relationships.” Journal of Personality and Social Psychology 70: 79–98.
———. 1996b. “The Self-Fulfilling Nature of Positive Illusions in Romantic Relationships: Love Is Not Blind, but Prescient.” Journal of Personality and Social Psychology 71 (6): 1155–80.
Prinz, Jesse J. 2005. “Passionate Thoughts: The Emotional Embodiment of Moral Concepts.” In Grounding Cognition: The Role of Perception and Action in Memory, Language, and Thinking, edited by Diane Pecher and Rolf A. Zwaan, 93–114. Cambridge University Press.
Schein, Chelsea, and Kurt James Gray. 2018. “The Theory of Dyadic Morality: Reinventing Moral Judgment by Redefining Harm.” Personality and Social Psychology Review 22 (1): 32–70. https://doi.org/10.1177/1088868317698288.
Siegel, Jenifer Z., Molly J. Crockett, and Raymond J. Dolan. 2017. “Inferences about Moral Character Moderate the Impact of Consequences on Blame and Praise.” Cognition, Moral Learning, 167 (October): 201–11. https://doi.org/10.1016/j.cognition.2017.05.004.
Siegel, Jenifer Z., Christoph Mathys, Robb B. Rutledge, and Molly J. Crockett. 2018. “Beliefs about Bad People Are Volatile.” Nature Human Behaviour 2 (10): 750–56. https://doi.org/10.1038/s41562-018-0425-1.
Sim, Philip. 2016. “MSPs Throw Out Incest Petition.” BBC News, January.
Uhlmann, Eric Luis, David A. Pizarro, and Daniel Diermeier. 2015. “A Person-Centered Approach to Moral Judgment.” Perspectives on Psychological Science 10 (1): 72–81. https://doi.org/10.1177/1745691614556679.
Valdesolo, Piercarlo, and David DeSteno. 2006. “Manipulations of Emotional Context Shape Moral Judgment.” Psychological Science 17 (6): 476–77. https://doi.org/10.1111/j.1467-9280.2006.01731.x.
Weidman, Aaron C., Walter J. Sowden, Martha K. Berg, and Ethan Kross. 2020. “Punish or Protect? How Close Relationships Shape Responses to Moral Violations.” Personality and Social Psychology Bulletin 46 (5): 693–708. https://doi.org/10.1177/0146167219873485.
Whedon, Joss. 1997. “Lie to Me.” Buffy the Vampire Slayer. United States: The WB.
Wiegmann, Alex, Yasmina Okan, and Jonas Nagel. 2012. “Order Effects in Moral Judgment.” Philosophical Psychology 25 (6): 813–36. https://doi.org/10.1080/09515089.2011.631995.

Footnotes

  1. The reader is referred to the wealth of literature examining such factors as: emotional influences (Cameron, Payne, and Doris 2013); intentionality, evitability, and benefit recipient (Christensen et al. 2014; Christensen and Gomila 2012); the action-outcome distinction (Crockett 2013; Cushman 2013); trustworthiness and social evaluation (Everett, Pizarro, and Crockett 2016; Everett et al. 2018); the personal-impersonal distinction (Greene et al. 2001); the doctrine of double effect (Mikhail 2000); level of physical contact (Valdesolo and DeSteno 2006); and order effects (Wiegmann, Okan, and Nagel 2012).