
Metamorality: Critiquing Joshua Greene


Eiuol


       Finally, in 2025, the first team of astronauts to begin colonizing space, colloquially known as the Myrmidons, is hours away from landing on Mars. This is a groundbreaking moment. But there is little expectation of success, despite the great success of establishing a multi-national colonization agreement and a crewed mission drawing on the US, China, and several members of the newly formed Space Exploration Treaty Organization (SETO). The technology is top notch, the astronauts all passed their examinations of mental and physical health, boredom is countered by boosted communication relays, and only people younger than 30 were selected as astronauts. Several months into the mission, cameras broadcasting events on the ship showed problems. One member of the crew, Sergeant Tsung, became sick with flu-like symptoms; apparently his immune system did not respond to one of the vaccines. Conflict began when few agreed with Captain Aprillio’s extra distribution of food to Tsung, meant to help him recover sooner. Some crew members felt a majority vote was appropriate, while Aprillio and a few others believed the captain should make unilateral decisions. Others thought everyone should get extra resources as long as Tsung did. Protocol existed for severe illness, but not all members of the crew followed it rigidly – this was a special circumstance. While the astronauts are all part of modern industrial cultures, no two people had the same moral beliefs, whether due to upbringing or extensive investigation of moral philosophy. What should’ve been a minor issue resulted in the crew splitting off into evolving subgroups and alliances, with a growing sense of tribal division. Perhaps when they land, the tribes of the ship will limp along, hardly the shining achievement of man that billions of people expected.

       Would this hypothetical scenario ever happen? Likely, but avoiding the fate of the Myrmidons won’t happen simply through better rules and protocol. What can be done? Joshua Greene, in his book “Moral Tribes”, presents a case for what he calls metamorality. His project is largely built from the premise that morality’s function - as established through his interpretation of the psychological research presented in the book - is to solve the tragedy of the commons. Because of this social-oriented function, the issue of different moral tribes remains. The tragedy of the moral commons arises: while morality may solve local, intragroup social/moral conflict, it does not solve larger-scale intergroup social conflict. Metamorality is his proposed solution, which manages conflict according to a common conceptual[1] currency: happiness. Ultimately, through argument and discussion, his case rests on one principle[2]: maximize the happiness of all people, even oneself, impartially. With that basis, as his theory goes, inter-moral conflicts can be resolved without violence, propaganda, or coercion. My interest in this paper is to establish that Greene fails to make a compelling case for his metamoral principle and thus his entire project. To be sure, across all 352 pages of “Moral Tribes”, Greene has many premises, principles, and examples, so since I’m focusing only on the Greenian principle, I’ll work to improve his arguments where I can, lest I miss one key sentence. Yet, as I expect to find when I explore further, there remains a pervasive, essential error that no amount or quality of rhetoric[3] is able to avoid. Formulating a better metamoral principle could help a mission to Mars succeed.

The Greenian Metamoral Principle

       “Maximize happiness impartially” – this is the Greenian metamoral principle, which Greene characterizes as utilitarianism. The lofty goal, and hopefully the result, of applying the principle on a societal level is “a system for transcending incompatible visions of moral truth” (200; all citations without an author refer to Moral Tribes). One issue is immediately apparent: exporting such a principle into foreign (or global) territory just makes room for a new brand of inter-moral conflict. Admittedly, the scope of this paper is insufficient to answer whether there really will be a reduction, or at least steady management, of inter-moral conflict[4]. However, I am able to analyze whether the principle makes any sense given existing psychological evidence and philosophical reasoning. To do that, I will critique the arguments Greene provides for integrating each concept into the Greenian metamoral principle, improving the arguments where possible. Although I will be very critical of Greene, I’ll say at the outset that I think his concept of metamorality has a lot of potential for growth and change. The question here, though, is only whether this specific Greenian metamoral principle has holes in it preventing passage into calmer seas[5].
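       To make the principle concrete before dissecting it, here is a minimal sketch of what “maximize happiness impartially” amounts to as a decision rule. Nothing here is Greene’s own notation; the agents, actions, and payoff numbers are hypothetical stand-ins, and the equal weighting of every agent is what carries the “impartially” part.

```python
# A minimal, illustrative formalization of "maximize happiness impartially".
# The names and numbers below are assumptions for illustration only.

def impartial_utilitarian_choice(actions, agents, happiness):
    """Pick the action with the greatest total happiness, weighting
    every agent equally - the equal weights are the 'impartial' part."""
    return max(actions, key=lambda a: sum(happiness(a, i) for i in agents))

# Toy example: two actions, three agents.
agents = ["Athenian", "Spartan", "Myrmidon"]
payoffs = {
    ("share", "Athenian"): 2, ("share", "Spartan"): 2, ("share", "Myrmidon"): 2,
    ("hoard", "Athenian"): 5, ("hoard", "Spartan"): 0, ("hoard", "Myrmidon"): 0,
}
happiness = lambda action, agent: payoffs[(action, agent)]

print(impartial_utilitarian_choice(["share", "hoard"], agents, happiness))
# -> "share" (total 6 beats total 5), even though one agent prefers "hoard"
```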

Maximization

       The maximize part of the Greenian principle comes from the human manual mode, which is, by nature, a device for maximizing. To stay true to what Greene means by manual mode, I’ll stick to definitions and discussions he provides in his earlier works on dual-mode theory. The rough idea, according to Greene, is that manual mode consists of “cognitive” representations that are inherently neutral, and which do not automatically trigger particular behavioral representations or dispositions. Manual mode, then, is the means to accomplish the function of cognitive representations. These functions are primarily observed in the dorsolateral surfaces of the prefrontal cortex and parietal lobes. In contrast, by its nature, automatic mode thinking is non-uniform, characteristically efficient but inflexible. Automatic mode consists of “emotional” representations that have automatic effects, and are therefore behaviorally valenced. Emotion in this context does not concern mood; it is “concerned with emotions subserved by processes that in addition to being valenced, are quick and automatic, though not necessarily conscious”. Automatic mode, then, is the means to accomplish the function of emotional representations. These emotional representations tend to be associated with the amygdala and the medial surfaces of the frontal and parietal lobes (Greene, 2007; 203).

       Strictly from a psychological perspective, automatic thinking is not an unusual claim, so in the interest of sticking to a conceptual argument, I’ll accept dual-mode theory to the degree that not all modes of thinking are deliberative. Anything non-deliberative is going to be missing an essential feature of deliberation: volitional, step-wise reasoning where representations can be recombined[6] as situational demands vary. At the very least, both deliberative and non-deliberative (manual and automatic) modes exist, which is all Greene needs to trace out maximization.

       A potential leak here: why favor maximization and the corresponding manual mode? After all, as Rousseau believed, compassion, not reason, is what leads people to do good[7]. The counter-argument to Rousseau is simple. Being non-deliberative, automatic settings are going to at least contain an assemblage of moral and other emotions. No emotion has universal triggers as it is, even if all people have a universal capacity for an emotion such as compassion, so the automatic settings will not be consistent among people or even groups of people (194); a metamorality wouldn’t be able to function when not all people share identical experiences. Metamorality needs to be a system accessible to members of all tribes, not a system of favoritism towards a particular tribe’s moral emotions. Otherwise, we’re back to asking which tribe is “best”, the whole conflict we’re trying to get around. Even supposing that Rousseau had a perfect explanation for the natural state of man rather than merely the French tribe, “sentimentality” is not going to achieve inter-moral conflict resolution without being maximized by the manual mode.

       Greene claims that his brand of utilitarianism “wrestles moral philosophy away from the automatic settings and the limitations of biological and cultural history and turned it over to the brain’s general-purpose problem-solving system (200).” More specifically, this is achieved with manual mode thinking via maximization. The problem here is that manual mode doesn’t begin with a moral philosophy, but, according to Greene, “it can create one if it's seeded with two universally accessible moral values: happiness and impartiality (203)”. Stated differently, setting aside questions about happiness and impartiality for now, a moral philosophy must be created with and seeded by prior values. But Greene leaves me wondering how a moral philosophy can be seeded, so it’s worth exploring this point further. Given the deliberative nature of manual mode, content of some sort is required to do anything, in the same sense that an inherent cognitive skeletal framework (which any reasonable cognitive theory presumes to exist) requires content. There is little reason to doubt, then, that manual mode is able to be seeded by values through the mechanisms of that framework. Epistemological standards – standards of knowledge – are present too, since I need to establish how I explicitly know what a value even is. This claim isn’t to say that no values are present as automatic settings, but that manual mode must identify specific values from which to create an explicit moral philosophy.

       Whether there are non-moral values is arguable – some values will perhaps be more “emotional” than others, in the same sense that there are deontological or consequentialist ways to answer trolley problems. My position is to make no distinction between moral and non-moral values; all values are moral values because they are ends which at least provide an IF-THEN normative standard of chosen behavior. If I want to figure out the depth of snow outside, then I should stick a ruler into the snow. Once values provide content and receive explicit attention[8], manual mode is able to apply deliberation to those values. The values are then the seeds of a moral philosophy if they are worked into a moral code of seeking and maximizing them. This process doesn’t need to use any specific set of values, but creating a metamorality requires a minimal set of values. An Athenian may value wisdom, philosophy, happiness, and impartiality. A Spartan may value strength, power, happiness, and impartiality. Though they don’t have identical values in their moral basis, they share happiness and impartiality. Greene says metamorality requires happiness and impartiality, so those are his proposed minimal set of values. Since their validity as universally accessible moral values is discussed later, the point here is that any non-Rousseauan moral philosophy needs to be seeded by values, and can only be so seeded if the skeletal framework has content to build from.

The Temporal Context of Manual Mode

       Greene claims that “manual mode gives us the flexibility … to do what best serves our long-term goals (195)”. Manual mode just may give us the flexibility to do what best serves our long-term goals. But why any focus on a point in time? I may ask about an Epicureanism focused on right now; who needs long-term goals, anyway, if today is great? I’ll plan enough to eat comfortably and be satisfied with life as it is. In other words, Greene needs to say more on the point of long-term goals. Animals survive well without the explicit goals of manual mode. Bees are able to navigate from hive to flower and back with a primitive brain – if you can even call it that. If anything, since a bee is able to navigate in a complex manner, manual mode seems to be an epiphenomenon of human consciousness, serving no function for behavior; manual mode would be only a ghost riding in a machine, impotent to change the machine’s course[9]. After all, a bee doesn’t need manual mode to act, so why should a human? Without also saying bees have a manual mode, it’s hard to say manual mode is good for anything except armchair philosophizing. Epiphenomenalism is plainly absurd, but serving long-term goals is not necessarily part of manual mode if automatic settings do just as well. That is, even if Rousseau is wrong about metamorality, he may still be right about the power of compassion over reason.

       Without any account of why maximization should be for long-term goals, “long-term” can be replaced with any modifier and remain just as vague. To be sure, Greene talks about how manual mode works for long-term goals; he just doesn’t demonstrate why long-term goals are the type of goals to use maximization for. So long-term goals don’t seem to be anything more than preferences. Even worse, since Greene is not developing a theory of morality, he can’t prescribe any type of goal. As much as manual mode helps to make an explicit moral philosophy, that doesn’t mean the explicit philosophy is any more than myth, an ad-hoc rationalization of automatic settings. To get around these problems, maximization needs to apply to all goals, and add flexibility to all problem solving. Then temporal context won’t matter for what he prescribes.

The Function of Manual Mode

       Greene already takes good care to explain which features of manual mode help to achieve goals:

- Has sets of standing goals and possible actions

- Maintains elaborate models of how the world works

- Solves complex and novel problems

- Understands causal relationships between possible actions and consequences

- Realizes goal states

(195, 197)

       However, problems remain for integrating these ideas into maximization. The issue isn’t that Greene is wrong about anything per se; it’s that he doesn’t capture the extent to which manual mode is better than automatic settings for maximization. If automatic settings can provide effective maximization, there’s reason to suspect that appealing to automatic settings is a reasonable course of action. Resistance to automatic settings through manual mode may just as easily lead to inefficiency or misery through political systems. In fact, bees show co-operation in their dances – and other abilities Greene attributes to manual mode – without having a manual mode. As I said earlier, a bee achieves a goal of navigating from hive to flower and back. Bees also have a model of how the world works, albeit limited by their cognitive capacity. A bee will even refuse to visit a flower if another bee indicates that the flower is where a lake is – it takes more than mere perception to take into account that flowers don’t grow in lakes (Gallistel, 1990). With these abilities, it may seem like manual mode is superfluous. Fortunately, Greene’s explanation of how manual mode operates is a good foundation to complete the connection between manual mode and serving all goals.

       That manual mode is a potent means of action is not worth debating in the context of this paper. Greene just needs some features that make manual mode uniquely valuable. Deliberation is a unique feature, but the bigger question is what about deliberation and its serial, step-wise nature of thinking makes for advantages that not even bees have. On the long-term side, the evidence is grand achievements like Aristotle’s complete works that took a lifetime, or even projects that take more than a generation to complete, such as the Parthenon. This level of planning isn’t going to occur with an automatic setting, because so much information cannot be grasped in a single gulp[10]. Ultimately, the process is split into smaller steps that are spread over months and years; columns are constructed to support a roof that is so far only an idea. Yet these steps remain connected in a constructed, serial organization[11]. On the short-term side, planned actions are still needed for enjoying sensuous pleasures like a feast with fine wine and extravagant cakes. Experiencing the feast may be automatic enough at the sight of all the food, but that says nothing about creating the food. Regardless of the feast’s immediate pleasure, there is more going on than the immediate pleasure, and more than even making a representation of food and feasts, as bees do with their environment while navigating to flowers. It’s one thing to discover a feast that smells great and begin eating right away, but it’s another to know what kind of actions create the environment where a feast is possible. Knowledge[12] of the world is needed to know that baking a cake or fermenting wine in advance leads to a pleasurable feast. It’s arguable that the individual steps of these goals are representations resulting from automatic settings, but even in that case, the representations need to be connected and recombined as a series.

       The common theme here is that goals unique to humans – whether making ice cream, establishing a multi-billion dollar corporation, or writing an essay – need to be divided into numerous subgoals. Holding the entire process in mind at once is computationally implausible. Manual mode, by connecting each subgoal in a serial way, makes accomplishing these goals not only plausible but sensible[13]. Better yet, manual mode is necessary for a sufficiently complex goal.
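       To illustrate the structure being claimed here, consider a minimal sketch of serial goal decomposition. The goal tree below is entirely hypothetical; the point is only that a complex goal is handled as an ordered series of smaller, graspable steps rather than grasped in a single gulp.

```python
# A minimal sketch of serial goal decomposition: a complex goal is never
# held in mind all at once, but broken into subgoals worked through in
# order. The tree below is a hypothetical illustration.

goal_tree = {
    "build temple": ["quarry marble", "erect columns", "raise roof"],
    "erect columns": ["carve drums", "stack drums", "carve capitals"],
}

def execute(goal, depth=0):
    """Recursively walk subgoals in series; leaves are directly actionable."""
    print("  " * depth + goal)
    for subgoal in goal_tree.get(goal, []):
        execute(subgoal, depth + 1)

execute("build temple")
```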

       Why, though, would a short-term thinker care about metamorality at all, insisting as he does that looking too far ahead will only disappoint? Epicureanism could be correct, and thus carpe diem a great way to live. Still, this lifestyle needs manual mode and maximization. Keeping in mind the need for goals, maintaining such a hedonistic state depends also on resolving metamoral conflicts in the present. The Epicurean still needs to maintain his co-existence with others to achieve his immediate pleasures. Maximization of whatever leads to co-existence would be necessary. An anti-social Epicurean or an isolated monk doesn’t need to worry about co-existence, but metamorality is a social concept and does not apply to non-social situations. An Epicurean would more than likely acknowledge all sorts of pleasures that other people bring, so he needs some metamorality. If even a short-term thinker needs metamorality, then maximization, at least in the social domain, covers enough moral codes to be an inter-moral principle.

Values for Maximization

       Greene recognizes that the “human brain can take values that originate with automatic settings and translate them into motivational states that are susceptible to the influence of explicit reasoning and quantitative manipulation (202).” Unfortunately, he is unclear about whether all values originate with automatic settings, or only some of them. This has to be made clearer; otherwise I’d return to the earlier issue: the variability of emotional triggers across people and groups for automatic mode thinking. Emotions are not “bad”, but automatic settings are a poor foundation for metamorality. Selecting some kind of root value may require a non-manual selection (e.g. happiness), but more complex values (e.g. Japanese street fashion) would need some epistemological standard of manual mode thought within the process. Manual mode deliberation about fashion and Japanese culture, using a method of deliberation with epistemological standards, may lead to valuing Japanese street fashion. That value, as an experience, may well arise spontaneously with automatic mode, but this is not to say the value originates with automatic settings – manual mode sets the stage for valuing. Wherever values are determined to originate in terms of psychological science, the philosophical conclusion is that manual mode is able to alter values.

       Although I have strengthened and added defenses to maximization as a concept for metamorality, reason/manual-mode thought needs specific ends to be optimal; it can’t be optimal in and of itself. Greene phrases the same idea succinctly: “optimal for whom (199)?” A Rationalist of Kantian inclinations may say that adding the interests of anyone to any extent only takes away from moral validity – morality is wholly disinterested[14]. This can’t lead to a metamorality that actively resolves inter-moral conflicts. Indeed, since manual mode helps establish goals, maximization needs to be for someone or some event that is embodied[15]. Here the link between maximization and happiness becomes increasingly apparent. Still, Greene has a case for maximization connected to metamorality that I think is largely correct. The next section will explore happiness more deeply.

Happiness

       In the Greenian metamoral principle, “the happiness part comes from reflecting on what really matters to us as individuals (203)”. Greene does not suggest that such a statement is the only argument needed – it’s closer to declaring a mathematical theorem before proceeding to prove the theorem. After all, while reflection on what really matters may lead to valuing happiness, what “really matters” could just as well lead to “nothing really matters”. A nihilist, in the strong sense of believing that values don’t really exist, may just as well believe what “really matters” is tendencies to arbitrarily prefer actions, so that what “really matters” is wholly subjective. Disagreement is not limited to a nihilistic basis either – a Buddhist would claim that nirvana is what matters, not a “materialistic happiness”. In fact, the sense in which Greene talks about happiness is explicitly not a Buddhist’s goal[16]. Anger, sadness, or any emotion can be an equally plausible candidate for what matters.

Intrinsic Value

       Greene moves towards demonstrating that happiness is what matters most by arguing that it is “one of the primary things that are valued intrinsically [… and] near universally (203)”. Indeed, happiness as one of the things valued intrinsically and nearly universally could get around contradictory “matterings”. This argument seems appealing due to its simplicity, but it’s good only on a superficial level. “Intrinsically” implies no particular conscious valuer, as the thing is valued in itself without deliberated evaluation – the value exists as a value regardless of my intentional state. Thus to value intrinsically is to value automatically. This conflicts with maximizing happiness. How can happiness be maximized if happiness is valued automatically by the automatic mode? There would be no way to maximize happiness on a metamoral level if an intrinsic value is automatic. Happiness, on that account, would probably be culturally subjective.

       Separate from these specific issues of intrinsicism, Greene reaches for the universality of happiness when he says “everyone gets that [happiness] matters (202)“. “Getting” that happiness matters is comprehension that happiness matters to at least some people. To understand that it matters to some people requires at least consideration of mental states, which does not require ever being in a given mental state. I can comprehend how it felt for Orpheus to resist looking at his wife Eurydice while leading her out of the Underworld, but I don’t know how it feels. That is, a mental state is by nature abstract, private, and immaterial, so the only way to apprehend the state of happiness as “mattering” to people in general is to use manual mode and thus deliberation. This adds a manual mode angle to the discussion of happiness. At the same time, by considering that happiness matters to another person, the state is deprivatized in the sense that one is using accessible, non-private features of a mental state (e.g. movements, expressed desires, etc.). Everyone, by virtue of having a manual mode, can complete this thinking process. But it doesn’t follow that everyone will get that happiness matters to themselves, because the whole process isn’t about their own mental state. My mental state is likely involved in judging the mental states of others, but that isn’t to say I am drawing conclusions about the other’s private mental life. People may “get” that happiness matters; that doesn’t mean everyone cares.

Happiness is Fundamental

       According to Greene, “everyone can, with a little reflection, see that happiness lies behind many of the other things that we value, if not all of them (192)”. Reflection is the use of manual mode, so it’s safe to say that Greene holds that deliberation is often, if not always, necessary for anyone to conclude that happiness matters to themselves. Even better, Greene is after a hierarchical relation of values, at least to the extent that there are fundamental values or motivators. By playing up this “good” aspect that I see, I can strengthen Greene’s earlier arguments about happiness.

God of the Button

       One big issue remains: why pick happiness as the fundamental value to maximize? Even if happiness lies behind many things someone values, it would be a naturalistic fallacy to then say it’s what needs to be maximized for metamorality. Greene presents an intuition pump[17] in several variations to overcome this issue.

       The intuition pump supposes that I am going to have an accident where I’ll break my kneecap. By pressing a special button, I will avoid breaking my kneecap. My intuition, as is Greene’s, is that anyone would press the button, so it is reasonable to say people prefer to be more happy than less happy. A slight modification to the intuition pump: pressing the button still lets me avoid breaking my kneecap, but I’ll also get a mosquito bite. The intuition, again, is to press the button, indicating a preference for more net happiness even with a trade-off. One last modification to demonstrate the importance of happiness is to push the button to save someone I don’t know from a broken kneecap. As expected, the intuition is to press the button again, indicating a preference for others to be more happy than less happy. (190-191)

       Another modification that Greene discusses is pressing a button to save one person or pressing a button to save ten people. The intuition is to save ten people, indicating a preference to increase more people’s happiness. The final modification is to press a button to spare two people from mosquito bites or press a button to spare one person from a broken kneecap. The resulting intuition to spare the single person indicates a preference for more total happiness across all people. (190-191) Choosing to increase happiness in these scenarios is particularly simple. As a core value, it’s natural to want more happiness, yet it remains an issue if other core values enter into the equation. If the choice were about courage as a core value, it would be natural to want more courage. That includes having a greater number of courageous people.
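       All of these button variations reduce to comparing totals. The sketch below tabulates them under assumed magnitudes – that a broken kneecap hurts ten times more than a mosquito bite is my arbitrary assumption, chosen only so the comparisons come out as the intuitions Greene reports.

```python
# Each scenario lists the total happiness outcome of the two available
# choices; the magnitudes are arbitrary assumptions, and only the
# ordering of outcomes matters.

KNEECAP, BITE = -10, -1

scenarios = {
    "avoid your own kneecap":       {"press": 0, "refuse": KNEECAP},
    "avoid kneecap, accept a bite": {"press": BITE, "refuse": KNEECAP},
    "spare a stranger's kneecap":   {"press": 0, "refuse": KNEECAP},
    "save ten people or one":       {"ten-button": KNEECAP, "one-button": 10 * KNEECAP},
    "one kneecap or two bites":     {"kneecap-button": 2 * BITE, "bite-button": KNEECAP},
}

for name, options in scenarios.items():
    best = max(options, key=options.get)  # pick the higher-total option
    print(f"{name}: choose {best}")
```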

       Greene says more, but it’s worth playing up the premises that he analyzes little. An apparent issue to begin with is that these situations are all rather contrived, as they presume an omnipotent and omniscient power. This is quite alright as long as that presumption is not taken for granted. In such a circumstance of absolute power, happiness can be construed as whatever I want my happiness to be – nirvana, heaven, any possible material desire, feeling like a pig satisfied, etc. Alternatively, happiness can even go with a short-term feeling akin to the pleasure of eating cookies; the intuition pump does not specify the kind of happiness involved. Such open interpretation makes analysis difficult. Various languages, on the other hand, naturally make some distinction. For instance, in Japanese, short-term happiness is ‘ureshii’ (嬉しい) while long-term happiness is ‘shiawase’ (幸せ). If this were translated into Japanese, I’d be inclined to use ‘ureshii’ on account of the short-term nature of the corresponding harm; broken kneecaps are temporary, so the resulting happiness is temporary, too. But specifying short-term happiness is no help for mortals, as it actually takes the weight out of maximizing happiness. As gods, though, specificity doesn’t matter as much as wanting happiness in the first place, since I know the button will grant whatever my subjective happiness consists of, and there will be no negative externalities.

       Showing that all people desire the same kind of happiness would make a metamorality easier to demonstrate, while speaking of a subjective aspect of happiness reduces the common-sense appeal of the Greenian metamoral principle. For Greene’s purposes, though, values just need to be “shared widely by members of different tribes whose disagreements we might hope to resolve by appeal to a common value standard (191)“. Happiness fits the bill so far. By being gods with the same knowledge and power with regard to the magic button, focus is brought onto a common value standard, happiness. But I take issue with how translating from god to fallible human might not even result in valuing happiness. That is, the other core values Greene says exist might take precedence over happiness. Suppose that one such value could be courage. Courage, at least in an Aristotelian sense, is about doing the “right thing” in the face of danger without being rash or cowardly. Greene could respond by saying “the example is bad since courage is driven by happiness” – but some interpretations would say that courage is pursued for its own sake, which would make courage very fitting to compare to happiness, which is also pursued for its own sake, for no further end. Core values are treated as irreducible, meaning they cannot be broken down further.

       Even if courage is not actually a core value, it does illustrate a problem. One, happiness has no apparent value over courage to the god-like button pusher when the choice is between happiness and non-happiness – courage (or any other conceivable core value!) might matter for regular people who have worldly concerns past the feeling of happiness alone. Two, any proposed core value, courage or otherwise, would not and could not be activated by the button due to Greene’s own constraints. It’s fair to say happiness as a core value is almost always going to be chosen over missing out. But if I altered the wording so that pressing the button grants courage by saving a fellow soldier in battle, while not pressing means cowardly letting him die of blood loss, I would press the button.

       Essentially, all these cases of the intuition pump show is that having more value is better than having less value. Specifically, while it’s uncontroversial that many people value happiness if there are guarantees of achieving happiness, there may be variations across cultures about which core value to focus on. An ancient Spartan would very well choose happiness over non-happiness, but might just as easily say courage is worth greater focus. Although Greene is looking for a common currency, selecting happiness arbitrarily as what metamorality needs to maximize is like choosing to anchor all global monetary value to the US dollar, yen, or Euro. Perhaps it will work in the practical sense that many people like maximizing happiness, but the “global value” will vary over time and across people. That is, an Athenian may value happiness, and so would a Spartan, but the two might rate the “importance” of happiness unequally.

       Greene cannot at this point make a solid argument to select happiness as the core value to maximize. To avoid begging the question Greene set out to answer (Why maximize happiness? Because everyone likes happiness. Why does everyone like happiness? Because everyone maximizes happiness), personalizing values as agent-oriented rather than intrinsic strengthens the plausibility that a form of happiness is the core value that counts. Otherwise, we’ll forever ask whether the Athenian or Spartan has a better metamoral value standard.

Only Button-Pressers Need Apply

       Unfortunately, Greene makes a statement that leads me to question his premises: “If you're not willing to lift a finger, then you're not part of the conversation. You're not part of the we (191)“. This argument is practically an ad hominem, ruling out a specific argument without evaluation by labeling the view as, for instance, sociopathic or nihilistic. Greene seems to take the view that anyone unwilling to press the button inherently deserves no consideration. Such a reaction is exactly what Greene is trying to deal with through metamorality: inter-moral conflict resolution. Some reason needs to be provided to actually choose to make someone else happy; manual mode needs to define its moral theory even if happiness is desired in full emotional force. If making others happy is a tribal standard or value, then there may be reason to say nihilism is true – similar to how saying moral standards are tribal provides reasons to say moral relativism is true. And if there is no non-tribal case for values, then no value really “matters”, thus destroying arguments that happiness matters. As far as I see, Greene’s statement amounts to having no argument against nihilism, so the only way he can address it is to ignore the need to present a case for values, especially core values. Values need to be given an objective and relational basis with regard to the purpose they serve, lest they refer to a tribal myth.

       My quick sketch for a solution: by referencing how other people have some connection to myself, perhaps through a selfish eudaimonia, the looming threat of nihilism is eliminated. The unhappiness of others negatively impacts me indirectly through a slow destruction of my environment via collapsing descriptive norms (Bicchieri, 2009). I am less able to be happy when my environment is horrible. Dictators are a fine example of a poor environment resulting in psychological demise – few would say Stalin was ultimately happy with his paranoid delusions and constant media manipulation, while his five-year plan was killing hundreds of thousands (Pipes, 2003). That psychological demise is why unwillingness to press the button is bad. Psychological health is not tribal, so this idea brings out another one: values revolve around enabling positive, healthy psychological well-being. I’ll return to these points later and elaborate further in my proposed alterations to Greene’s metamoral principle. Suffice it for now to say that if there is no answer to why unwillingness to press the button is bad, then saying the unwilling people are not part of the we is a naturalistic fallacy.

Maximize Core Values Instead

       The intuition pump cases bring out utilitarian thought where the objective is seeking the greatest amount of happiness. The evidence Greene provides is that, all else being equal, we prefer “more happiness for ourselves and others (192)”. He is correct, then, in saying there is “not a tribe in the world whose members cannot feel the pull of such utilitarian thinking (192)”. I’d even say that there isn’t a person who cannot feel the pull of such utilitarian thinking, in the same way that all people have a manual mode, at least supposing “such” refers to the method of comparing quantities. When I have god-like knowledge about how to stop the suffering (the button will stop the pain and suffering) and about the consequences of acting, I might as well press the button if I like happiness[18]. Furthermore, strictly speaking in terms of quantities of happiness, manual mode is a natural way to maximize happiness, or to maximize any value for that matter. A better formulation, then, is to maximize core values impartially.

Impartiality

       “The ideal of impartiality”, in the Greenian metamoral principle, “comes from an intellectual recognition of some kind (203)”. Greene uses another intuition pump to expand on his ideas about impartiality. Ten narrowly selfish[19] people, me included, stumble upon a valuable chest filled with gold coins. No one has an advantage. Immediately, a conflict seems to appear, since all the people value its contents. Greene acknowledges that fighting for the contents is dangerous. Or at least, by the constraints provided, there is no reason to think I’m a better fighter, nor do the other narrowly selfish people believe they are superior fighters. With that in mind, when there are no power asymmetries, the only stable solution is an equal division of all the resources; “fair distribution of resources naturally emerges (199)”. Greene concludes by saying that this intuition pump “is one way to get at utilitarianism's impartiality (200)”.

       Based on the intuition pump, these are all sensible ideas, except for one: that the only stable solution is an equal distribution. Sure, that’s a quick solution to come up with, but that might be due to biases learned through American political culture – liberal egalitarianism[20] is common in universities. Fortunately, looking for alternative solutions is simplified by noticing that the situation fits into a game theory framework (Chapter 2; Bicchieri, 2010; Bicchieri, 2009). Even better, since this is a social situation, I can use an even more specific framework using theories about norms provided by Bicchieri (2006). Descriptive norms and conventions can be modeled as solutions to coordination games. Games like these capture the structure of situations where there are several possible equilibria – exactly what is at stake if equal distribution is not the only stable solution to sharing gold coins. Descriptive norms and conventions are representable as equilibria of coordination games, since people act in conformity to what they expect others to do (Bicchieri, 2006). Stated formally:

1. Contingency: i knows that a rule R exists and applies to situations of type S;

2. Conditional preference: i prefers to conform to R in situations of type S on the condition that:

   (a) Empirical expectations: i believes that a sufficiently large subset of P conforms to R in situations of type S.

       Descriptive norms already allow for more solutions that are at least as stable as impartial distribution. Even if “fair” distribution solutions are what Greene likes, the objective is coordination with others on any equilibrium, which is satisfied by more than “fair” distribution of resources. There are reasons to say other solutions are better in cultures where equal distribution is viewed as unfair and distribution by status is viewed as fair. The people may be part of a group where age matters as a norm and where there is a preference to distribute based on age. The rogues don’t need to reject their narrow selfishness – some norms have their effect regardless of conformity. Even if these people are men without a home, norms can have their stabilizing effect to the degree the rogues empathize with each other’s rogue perspective. There are expectations of behavior because everyone recognizes each other as narrowly selfish, as the intuition pump begins with. Each of these examples fits the formal definition of a norm, so I have no reason to say fair distribution is really the most stable. So far, Greene’s arguments for impartiality are only oriented towards his tribal beliefs as a liberal.
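       To see concretely that an equal split is not the unique stable solution, here is a toy coordination game in the spirit of Bicchieri’s framework. Everything in it is an illustrative assumption: two players, two division rules, and an arbitrary status-based split where the elder gets more. Both rule-pairs where the players coordinate turn out to be equilibria.

```python
# A toy coordination game over division rules: coordinating on ANY shared
# rule is stable, so equal split is one equilibrium among several. The
# payoffs are assumptions chosen only to make the structure visible.
from itertools import product

RULES = ["equal split", "split by age"]
# Shares (player 1, player 2) of 10 coins if everyone follows the rule;
# "split by age" arbitrarily favors player 1, the elder.
SHARES = {"equal split": (5, 5), "split by age": (7, 3)}

def payoff(my_rule, other_rule, my_share):
    # Following the same rule avoids conflict; miscoordination pays nothing.
    return my_share if my_rule == other_rule else 0

def is_equilibrium(r1, r2):
    u1 = payoff(r1, r2, SHARES[r1][0])
    u2 = payoff(r2, r1, SHARES[r2][1])
    # Nash condition: neither player gains by unilaterally switching rules.
    stable1 = all(payoff(alt, r2, SHARES[alt][0]) <= u1 for alt in RULES)
    stable2 = all(payoff(alt, r1, SHARES[alt][1]) <= u2 for alt in RULES)
    return stable1 and stable2

for r1, r2 in product(RULES, repeat=2):
    verdict = "equilibrium" if is_equilibrium(r1, r2) else "not stable"
    print(f"({r1}, {r2}): {verdict}")
```

Run, this prints that both (equal split, equal split) and (split by age, split by age) are equilibria, while the mismatched pairs are not – which is the point: coordination, not fairness, is what stabilizes the outcome.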

Another Argument for Impartiality

       Greene is rather quick in using Peter Singer’s argument for impartiality, so he sounds like he is making an argument from authority. The sketch of Singer’s argument (200) he provides is straightforward. Most people don’t care about strangers, while they also appreciate that other people are largely just like them. Eventually, people may make a set of cognitive leaps, culminating in a new conclusion. I’m special to myself, and others are special to themselves. But I’m not really special, because I’m not especially special – nothing about my interests is objectively more important than the interests of others.

       Greene takes little effort in strengthening Singer’s words for his context. So I’ll attempt to integrate the paraphrase into Greene’s “big” argument for impartiality. The beginning and end of Singer’s argument make sense. Where issues come in, though, is the middle: most people will make a cognitive leap towards realizing that they are not especially special because other people see themselves as special. I’m unclear how much automatic or manual mode is used here, but any “leap” at least involves both. Deliberation is a big part, though, considering that explicit beliefs are involved in the argument’s premises and final conclusions. Greene actually says he’s confident that impartiality as a moral ideal is a manual-mode phenomenon (201), so getting at impartiality likewise requires manual mode. If automatic mode took over most of the intermediate process, then there would be favoritism towards a tribe’s moral emotions. This would make Singer’s argument useless for saying that impartiality is an important metamoral principle. As a result, Greene needs to fill in the missing link. On the other hand, filling in the link might not be possible, so I’ll see where his ideas lead.

Impartiality Explained

       The missing link may be, in Greene’s words, that “everyone feels the pull of impartiality as a moral ideal (203)”. Impartiality as a moral ideal is perhaps true, but it would only be true because the meaning is imprecise and takes advantage of an intuitive reaction to the sentence. Phrasing impartiality as a moral ideal without qualification makes it an ideal across all moral contexts, such as justice, valuing, judgment, etc. This interpretation seems to be what Greene means, because he thus far has not tried to make fine distinctions among moral contexts. Yet there is no good reason for Greene to let ‘moral ideal’ go unexplained, especially since he wants moral emotions to be separate from developing inter-moral agreement. Without more explanation, “pulls” toward a moral ideal of any kind are extremely subjective.

       A better argument would begin by being more specific about impartiality. Although the gold coin intuition pump demonstrates attention to impartial solutions to problems, that observation can easily be conflated with impartiality as a standard of knowledge. For instance, suppose a Spartan teacher says to a class that it is winter in January. An Athenian teacher hears this. Being knowledgeable about astronomy, he knows it is technically true, but corrects the Spartan because it’s too partial for an astronomy class. After all, for facts about the status of reality to be useful, they need to be impartial about that status, lest truth become based on what one wishes to be true[21]. An astronomer specifically studies cosmic bodies regardless of where he happens to be. Other casual contexts, such as talking with a friend over lunch, may justify a lack of precision, but “January is winter in the northern hemisphere” is a clarification – it’s exact, thus there is less to be misapprehended. Anyone committed to truth will feel this pull towards impartiality, so at this point in the argument it is a simple step to say everyone concerned with truth knows what the pull of impartiality feels like, separate from feelings established by their tribe. Without the precision of impartiality, the Athenian and Spartan could only have their respective tribal standards about astronomy. Using the same structure of reasoning, impartiality looks like it could apply just as much to a moral context.

       Except it doesn’t. A moral context changes how impartiality applies. One particularly big difference is values. In one sense, truth is a value presupposed by any epistemological argument, but that’s not what I mean here. By value, I mean the tangible objects and abstract goals which are sustained or pursued by an individual agent[22]. Truth can be a value, but my focus is on the pursuit and sustenance of values aside from epistemological commitments and beliefs[23]. While the Spartan and Athenian feel a pull towards impartiality in their studies of astronomy – as any good Greek would – their values are more than likely different. When the Spartan values power and war, and the Athenian values philosophy and thought, neither is wholly impartial with respect to his own values. Even an avowed ascetic has to act towards a value with his own mind, no matter how much he wants to act towards no one’s ends at all[24]. Winter in January in the northern hemisphere is not dependent on any person’s mind to be true. Such a fact requires no intentionality (e.g. motivations, goals, “aboutness”). But values like philosophy, war, power, or thought do require intentionality qua values. Power couldn’t be a value to anyone unless it is a value to someone, meaning power as a value is partial to someone’s motivations. Likewise for all values. Even following impartiality requires intentionality in the same way! Furthermore, since morality and values require intentionality, they cannot be impartial. In fact, “moral impartiality can be explained as a critical element of a strategy for choosing sides” (DeScioli and Kurzban, 2013).

Epistemological Impartiality

       Returning to the gold coins, it would be better to suggest there is a pull towards impartial methods for problem solving, which I call epistemological impartiality. Building a column for a temple requires impartial methods. Building a temple out of clay wouldn’t work instead of marble, even if someone really wants to make a clay temple. But the objective of making the column is partial to someone wanting the column, even if they claim they are impartially doing Athena’s bidding. Similarly, an equal split of gold may provide a pull towards an impartial method, but it is very partial with regard to the objective. Absolute egalitarianism, making all people happy, making oneself happy, avoiding violence – these are all plausible objectives which can only be effectively achieved with impartial methods, just as the Parthenon needed to be made of marble rather than sand. Each goal is partial to different degrees, with varying commitments to keeping myself out of the picture. And as the earlier discussion of descriptive norms barely touched on, there are impartial methods other than an equal split, resulting in still more goal differences that may arise.

       Epistemological impartiality is at least sensible, and is able to serve as the missing link for Greene’s support of Singer’s argument for impartiality. As Greene says, “impartiality is manual mode, everyone can appreciate it (201)”. But a cognitive leap that culminates in a realization that I am not especially special does not seem so realistic anymore. It would require denying intentionality as partial, or ignoring my beliefs as a driver of action[25]. My cognitive leap would be that I am especially special because of my own mental states that only I have direct access to. Insofar as wishing to fly as close as I want to the sun on wax wings won’t allow me to avoid the fate of Icarus, there is nothing that makes my beliefs more important than reality itself. But given that my intentional states drive my behavior, my interests are more important to me than the interests of others. In other words, everyone feels a pull of values and their partiality – an inclination towards egoism.

       Greene, based on his idea of accommodation, may say the ideal is still impartiality. After all, I acknowledged that values have different levels of impartiality. Absolutely, morality and values are partial, he may say, but we want to get as close to an ideal of impartial values as humanly possible. Stated differently, to resolve inter-moral conflict with the Greenian principle, it must excise the biasing nature of morality and values. The ideal would be the elimination of partial values. Ultimately, that would mean the ideal is a nihilism where values are not important. I don’t think Greene advocates this position, but it’s the logical end. Epistemological impartiality is already part of maximization, so the word “impartial” can be removed from the Greenian metamoral principle. Since partiality is now an important piece of metamorality, a pull towards egoism should be part of my new formulation: “maximize your core values”.
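       Contrasted with the single impartial argmax sketched earlier, the revised principle gives each agent their own objective. The sketch below is again a hypothetical illustration – the agents and their value functions are invented – but it shows the structural difference: there is no shared quantity to maximize, only each agent’s own core values.

```python
# "Maximize your core values": each agent runs their own maximization
# over their own value function, rather than one impartial sum. The
# agents and values below are hypothetical.

ACTIONS = ["study philosophy", "drill for war"]

core_values = {
    "Athenian": {"study philosophy": 3, "drill for war": 0},
    "Spartan":  {"study philosophy": 0, "drill for war": 3},
}

for agent, values in core_values.items():
    best = max(ACTIONS, key=lambda action: values[action])
    print(f"{agent} maximizes their own core values by choosing: {best}")
```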

Fundamental Error

       I’ve reached an alternative metamoral principle which I expect would work for the Myrmidons to successfully start a Martian colony, but it’s not just another theory. “Maximize your core values” specifically addresses a fundamental error Greene consistently makes throughout many arguments: neglecting the intentionality of values. The starkest expression of his error is in his discussion of how happiness is a core value: “None of us says: increase someone's happiness? Why would you want to do that (202)?” I’ll answer the question, after I explain how Greene goes wrong.

       Rather than explaining why someone would value happiness, he seems bewildered to even imagine that values can be just as diverse as moral beliefs. Greene makes no case for picking happiness as the value for metamorality to focus on, let alone the value an individual should focus on. All he attempts to argue for is the reducibility of all values to happiness, like an argument for psychological egoism where all reasons for acting reduce to self-interest. Greene goes as far as to say “we all do things because we enjoy them and we all avoid things because we don't (202)”, which is the same as advocating psychological egoism. Some people, as bizarre as it may sound, literally do things because they don’t enjoy them, just as some people literally do things not for self-interest. Sergeant Tsung may think of the mission to Mars as a duty to his country, but explicitly acts against any interest he has. Family may have more meaning than country for him, yet he still goes. Arguing that Tsung’s self-interest “really is” duty to country because it ultimately resulted in a consequence he wants ignores the intentionality that characterizes motivations and desires. Another astronaut, Sergeant Cooper, may follow Aprillio’s orders to give Tsung extra food, yet he specifically does not enjoy doing so. He would enjoy exercising his rebellious spirit, except he goes quiet before questioning authority. As in Tsung’s case, someone could argue that Cooper “really” enjoys following orders and “really” does not enjoy questioning authority. Again, this argument ignores the intentionality of value – a characteristic I describe in “Impartiality Explained”. In fact, there is crossover between both arguments, since they converge on the same error: neglecting intentionality. As far as I see, Greene treats values as having nothing to do with an agent’s intentional states. Asking why I would want to increase someone’s happiness only sounds absurd if value is intrinsic.

       Intentionality is why impartiality doesn’t make sense for a metamoral principle – desires and motivations for action absent of partiality are impotent. Intentionality is also why prescribing happiness in a metamoral principle is problematic – it can’t be a common currency because, even if all people desire happiness, that desire is not going to translate into seeking only what they enjoy doing. Despite the issue of happiness, all actions are in pursuit of some core value – each Myrmidon can say anything from duty, to happiness, to glory, is his or her core value. Of course, absent any account of what function values serve, literally all means justify all ends. Someone’s intentionality needs to be in focus, since impartiality won’t work. Each person maximizing their own core values works well, since there is no forced idea of which exact value needs to be a metamoral standard. For what it’s worth, I believe happiness is the core value that everyone should pursue, but since the point of metamorality is inter-moral conflict, whatever my moral beliefs are doesn’t answer how to resolve the conflict. Acknowledging that all people pursue values is sufficient for a metamorality, though. A solution for the Myrmidons is going to lie in establishing a situation where core values can be pursued by each agent as an individual. Applying my principle “maximize your core values” implies a simpler course of action: create an environment that is most conducive to pursuing values.

 

References

Bicchieri, Cristina. (2010). Behaving as Expected: Public Information and Fairness Norms. Journal of Behavioral Decision Making, 23: 161–178.

____. (2009). Do The Right Thing: But If Only Others Do So. Journal of Behavioral Decision Making, 22: 191–208.

____. (2006). The Rules We Live By. In: The Grammar of Society: The Nature and Dynamics of Social Norms. Cambridge University Press.

Dennett, D. (2013). Intuition Pumps and Other Tools for Thinking. W. W. Norton & Company.

DeScioli, P., Kurzban, R. (2013). A Solution to the Mysteries of Morality. Psychological Bulletin, 139(2): 477–496. DOI: 10.1037/a0029065

Gallistel, C. R. (1990). The Organization of Learning. Cambridge, MA: Bradford Books/MIT Press.

Greene, J. D. (2013). Moral Tribes. New York, NY: The Penguin Press.

____. (2007). The Secret Joke of Kant's Soul. In: Sinnott-Armstrong, W. (Ed.), Moral Psychology, Vol. 3: The Neuroscience of Morality: Emotion, Disease, and Development. Cambridge, MA: MIT Press.

Kant, I. (1785). Groundwork of the Metaphysics of Morals.

Nietzsche, F. W. (1888). The Will to Power.

Pipes, R. (2003). Communism: A History.

Rand, A. (1979). The Cognitive Roles of Concepts. In: Introduction to Objectivist Epistemology.

____. (1961). The Objectivist Ethics. In: The Virtue of Selfishness.

Rousseau, J. J.  (1754). Discourse on the Origin and Basis of Inequality Among Men. Retrieved from http://www.constitution.org/jjr/ineq_03.htm


[1] Greene does not use this adjective, but he does not claim the common currency is in any sense metaphysical, as an Aristotelian essence may be, so I add the term “conceptual”.

[2] Those who object to the word “principle” may reasonably replace it with “heuristic”.

[3] I am not accusing Greene of sophistry, but if an error is pervasive, then any reasoning on top of it is rationalization or mere rhetoric.

[4] Greene in fact has plenty of potential to explore a political philosophy angle. See page 288.

[5] Or greene-r pastures!

[6] The recombination part is Greene’s own contribution, which sounds similar to what Rand (1961) believed to be part of concept-formation: “It is an actively sustained process of identifying one’s impressions in conceptual terms, of integrating every event and every observation into a conceptual context, of grasping relationships, differences, similarities in one’s perceptual material and of abstracting them into new concepts…”

 

[7] “It is then certain that compassion is a natural feeling, which, by moderating the violence of love of self in each individual, contributes to the preservation of the whole species. It is this compassion that hurries us without reflection to the relief of those who are in distress: it is this which in a state of nature supplies the place of laws, morals and virtues, with the advantage that none are tempted to disobey its gentle voice: it is this which will always prevent a sturdy savage from robbing a weak child or a feeble old man of the sustenance they may have with pain and difficulty acquired, if he sees a possibility of providing for himself by other means: it is this which, instead of inculcating that sublime maxim of rational justice.” Rousseau, 1754.

[8] Of course, attention may be a loaded term without any precise agreement amongst philosophers or even psychologists. My sense here is limited to very basic accessibility and awareness of having an end to act towards and recognition of ends existing at all. Stronger forms of awareness only help my argument.

[9] Lest the comparison overstate my point, the idea is that from an evolutionary angle, epiphenomena are neither helpful nor unhelpful to the existence of an organism.

[10] For those interested in science fiction, the book Stranger in a Strange Land by Robert A. Heinlein features Martians who are able to grasp incredibly complex goals and ideas all at once in totality, which they call ‘grokking’. They are nothing like humans cognitively or physically. It’s interesting to imagine creatures that are beyond humans computationally speaking, yet not just be genius IQ humans. Heinlein’s Martians think completely differently – they’re creatures with extreme emotional and introspective awareness.

[11] This isn’t positing constructivism where concepts have a non-empirical origin. My usage of constructed is closer to taking empirical evidence – a mapping of the world even – and mentally dividing that into wider (or narrower) portions.

[12]  I’m not distinguishing between belief, justified true belief, or any other fine distinctions. I’m just talking about facts one holds, however one defines ‘fact’.

[13] My idea of subdivisions is inspired by Rand (1979): “[The essence] of man’s incomparable cognitive power is the ability to reduce a vast amount of information to a minimal number of units—which is the task performed by his conceptual faculty.”

[14] “A good will is good not because of what it effects, or accomplishes, not because of its fitness to attain some intended end, but good just by its willing, i.e. in itself; and, considered by itself, it is to be esteemed beyond compare much higher than anything that could ever be brought about by it in favor of some inclinations, and indeed, if you will, the sum of all inclinations.” Kant, Groundwork of the Metaphysics of Morals.

[15] Any supposed non-embodied events or beings are completely absurd.

[16] Interesting to note is that Nietzsche believed Buddhism was ultimately nihilistic, even if a Buddhist’s professed beliefs are otherwise: “[Active nihilism’s] opposite: the weary nihilism that no longer attacks; its most famous form, Buddhism; a passive nihilism, a sign of weakness.” The Will to Power.

[17] I prefer the term intuition pump - a term described by Daniel Dennett - over thought experiment because Greene is trying to get me, the reader, to see my intuitive ideas and then reason about them.

[18] It’s amusing to note that the contradiction of an omnibenevolent, omniscient, omnipotent god gains some implicit air time here – it’s frankly cruel not to press the button. The Judeo-Christian God hasn’t ever “pressed the button”. The only way to fix the contradiction is to take happiness out of morality or make it secondary – which Christianity does, to destructive ends.

[19] Selfish is a very arguable term to use when taken on a broad level such as a long-range view of my entire life. But throughout the book, Greene uses selfish in the narrow sense of immediate benefits whatever the cost is.

[20] Greene explains how he’s a liberal on page 333, so egalitarianism in resource distribution is implied.

[21] This isn’t to say all relational facts are partial. If relational facts are stated precisely, they are impartial, i.e. “when facing the green wall, the red wall is to the left”. All facts may be viewed as relational, but this paper isn’t trying to prove that.

[22] This formulation of value is derivative of (Rand, 1961): “Value is that which one acts to gain and/or keep”

[23] To split epistemology and ethics into unrelated fields would be impossible, though, supposing that epistemology concerns what one ought to believe and ethics concerns knowing how to act.

[24] Kant advocated utter disinterest of moral action, so I’m implicitly arguing against him at the same time.

[25] The only view of consciousness that I presume at a minimum here is an integrated awareness of mental states with mediation by intentional factors.


I wrote this for a class on moral psychology. The class focused first on philosophical ideas (Hume and Adam Smith), then proceeded to talk more about psychology. Joshua Greene was a guest twice, talking about his book Moral Tribes. I decided to write my paper criticizing a particular chapter.

This time I posted it all in one thread; that way it’s easier to read in one sitting if you’d like.

