What I'm Currently Working On

I haven’t uploaded anything to this blog in a while so I figured I would post a brief overview of what I’ve been thinking about and working on. I should start regularly uploading normal blog posts soon.

My current research is almost entirely based on a theory of belief formation and its implications for epistemology, rationality, and Streumer’s argument that we can’t believe a global normative error theory.

The theory of belief formation that I’m working with is called the Spinozan theory. The theory is situated as an alternative to the Cartesian theory of belief formation. The Spinozan theory says that we automatically form a belief that p whenever we consider that p. This means that the process of belief formation is automatic and outside of our conscious control. This theory has serious implications for several areas, such as rationality and epistemology.

In terms of epistemology, lots of philosophers working in that area will talk about belief formation in ways that presuppose a Cartesian theory. The Cartesian theory says that the process of belief formation and the process of belief revision are on par; both are within our conscious control. When we form a belief we base it on considerations like evidence. We consider the evidence for and against the proposition and then we form a belief. However, if the Spinozan theory is true then this is a misrepresentation of how we actually form beliefs. According to the Spinozan, we automatically form a belief whenever we consider a proposition. We may be able to revise our beliefs with conscious effort, but that process requires more mental energy than the process of forming a belief. If the Spinozan is right, we need to investigate whether or not we can do without talk of control over belief formation in epistemology.

The Spinozan theory entails that we believe lots of contradictory things. That runs contrary to our ordinary view of ourselves as relatively rational creatures who do their best not to hold inconsistent beliefs. If any plausible account of rationality requires at least a lot of consistency among our beliefs, then we’re pretty screwed. But we might be able to work with a revisionary account of rationality that sees being rational as a constant process of pruning the contradictory beliefs from one’s mind through counterevidence. The problem with that sort of account, though, is that belief revision is an effortful process that is sensitive to cognitive load effects, whereas belief formation is automatic and will occur whenever one considers a proposition. So, we’ll basically be on a rationality treadmill, especially in our current society, where we’re bombarded with things that induce cognitive load effects.

Another project that I’m going to start working on is applying the Spinozan theory to propaganda. I think that somebody interested in designing very effective propaganda should utilize the Spinozan theory. For example, knowing that belief formation is automatic and occurs whenever a person considers a proposition would help one design some pretty effective propaganda, since newly formed beliefs can root themselves in a person’s mental processes and influence her behavior over time. If you throw in some cognitive load-inducing effects, then you can make it more difficult for people to resist keeping their newly formed beliefs.

The last project I’m currently working on is a paper in which I argue against Bart Streumer’s case against believing the error theory. According to Streumer, one cannot believe a global normative error theory because one would have to believe that one has no reason to believe it, which he claims we cannot do. I think that if we work with the Spinozan theory then this is clearly false, since we automatically form beliefs about things that we have no reason to believe. My guess is that proponents of Streumer’s view will push back by arguing that they are talking about something different from what I am talking about when they use the word “belief”. But I think that the Spinozan theory tracks the non-negotiable features of our ordinary conceptions of belief well enough to qualify as an account of belief in the ordinary sense.

For those interested in the Spinozan theory, click this link. I should be regularly uploading posts here soon.


Seemings Zombies

Let’s assume that seemings are sui generis propositional attitudes that have a truthlike feel. On this view, seemings are distinct mental states from beliefs and other propositional attitudes. It at least seems conceivable to me that there could be a being that has many of the same sorts of mental states that we have except for seemings. I’ll call this being a seemings zombie.

The seemings zombie never has mental states where a proposition is presented to it as true in the sense that it has a truthlike feel. Would such a being engage in philosophical theorizing if presented with the opportunity? I’m not entirely sure whether the seemings zombie would have the right sort of motivation to engage in philosophizing. If we need seemings or something similar to them to motivate philosophical theorizing, then seemings zombies won’t be motivated to do it.

But do we need seemings to motivate philosophizing? I think we might need them if philosophizing includes some sort of commitment to a particular view. What could motivate us to adopt a particular view in philosophy besides the fact that that view seems true to us? I guess we could be motivated by the wealth and fame that comes along with being a professional philosopher, but I’m skeptical.

Maybe we don’t need to adopt a particular view to philosophize. In that case we could say that seemings zombies can philosophize without anything seeming true to them. They could be curious about conceptual connections or entailments of theories articulated by the great thinkers, and that could be sufficient to move them to philosophize. I’m not sure whether or not this would qualify as philosophizing in the sense many of us are acquainted with. Even people whose careers consist of the study of a historical figure’s intellectual works seem to commit themselves to a particular view about that figure. Kant interpreters have views about what Kant thought or argued for, and my guess is those views seem true to those interpreters.

The seemings zombies might still be able to philosophize, though. Maybe they would end up as skeptics, looking down on all of us doing philosophy motivated by seemings. We seemings havers end up being motivated by mental states whose connection to the subject matter they move us to take stances on is tenuous at best. The seemings zombies would then adopt skeptical attitudes towards our philosophical views. But I’m still worried, because skeptics like to give us arguments for their views about knowledge, and my guess is a lot of sincere skeptics are motivated by the fact that skepticism seems true to them. I could just be naive, though; there may be skeptics who remain uncommitted to any philosophical view, including their skepticism. I’m just not sure how that’s supposed to work.

One reaction you might have to all of this is to think that seemings zombies are incoherent or not even prima facie conceivable. That may be true, but it doesn’t seem that way to me.



Mental Incorrigibility and Higher Order Seemings

Suppose that the phenomenal view of seemings is true. So, for it to seem to S that P, S must have a propositional attitude towards P that comes with a truthlike feel. Now suppose that we are not infallible when it comes to our own mental states. We cannot be absolutely certain that we are in a certain mental state. So, we can make mistakes when we judge whether or not it seems to us that P.

Now put it all together. In cases where S judges that it seems to her that P, but she is mistaken, what is going on? Did it actually seem to her that P, or did she mistakenly judge that it did? If it’s the former, then it is unclear to me how S could mistakenly judge that it seems to her that P. Seeming states on the phenomenal view seem to be the sorts of mental states we should be aware of when we experience them. If it’s the latter, then it is unclear whether higher order seemings can solve our problem.

If a subject is experiencing a seeming state and judges that it seems to her that P, then there has to be some sort of luck going on that disconnects the seeming state from her judgment such that she does not know that it seems to her that P. Maybe she’s very distracted when she focuses her awareness onto her seeming state to form her judgment and that generates the discrepancy. I’m not really sure how plausible such a proposal would ultimately be. Instead, if the subject is not actually in a seeming state, then we need to explain what is going on when she mistakenly judges that she is in one. One possibility is that there are higher order seemings. Such seemings take first order seemings as their contents. On this view, it could seem to us that it seems that P is the case.

The idea of higher order seemings repulses me, but it could be true. Or, in a more reductionist spirit, we could say that higher order seemings are just a form of introspective awareness of our first order seemings. But I am worried that such a proposal would reintroduce the original problem linked to fallibility. If I can mistakenly judge that it seems to me that it seems to me that P, then what is going on with that higher order (introspective) seeming? The issue seems to come back to bite us in the ass. But it might do that on any proposal about higher order seemings, assuming we have accepted that we are not infallible mental state detectors. Maybe we just need to accept a regress of seemings, or maybe we should stop talking about them. Like always, I’ll just throw my hands up in the air and get distracted by a different issue rather than come up with a concrete solution.

Costs, Benefits, and the Value of Philosophical Goods

Philosophy is a very diverse field in which practitioners employ different dialectical and rhetorical techniques to advance their views and critique those of their opponents. But despite the heterogeneity, there seems to me to be a prevailing attitude towards the overarching method by which we choose philosophical theories among contemporary philosophers. We are all supposed to acknowledge that there are no knockdown arguments in philosophy. Philosophical theories are not the sorts of beasts that can be vanquished with a quick deductive argument, mostly because there is no set of rules that we can use to determine if a theory has been refuted. Any proposed set of rules will be open to challenge by those who doubt them, and there will be no obvious way to determine who wins that fight.

So, the process by which we compare and choose among philosophical theories cannot be guided by which theories can be refuted by knockdown arguments; instead, we seem to be engaged in a form of cost-benefit analysis when we do philosophy. We look at a theory and consider whether its overall value outweighs that of its competitors, and then we adopt that theory rather than the others. One way of spelling this process out is in terms of reflective equilibrium: we consider the parts of the theories we are comparing and the intuitions we have about the subject matter that the theories are about, and then we weigh those parts and intuitions against each other. Once we reach some sort of state of equilibrium among our intuitions and the parts that compose the theory we’re considering adopting, we can be justified in believing that theory.

Reflective equilibrium seems to be the metaphilosopher’s dream: it avoids the problems that plague the knockdown argument approach to theory selection, and it makes room for some level of reasonable disagreement among practitioners, since not everybody has the same intuitions, and the intuitions shared among colleagues may vary in strength (in both intrapersonal and interpersonal senses). Unfortunately for me, I worry a lot about how reliable our methods are at getting us to the truth, and the process I crudely spelled out above does not strike me as satisfactory.

To be brief, my concern is that we have no clear way of determining the values of the things we are trading when we do a philosophical cost-benefit analysis. In other cases of cost-benefit analyses, it seems obvious to me that we can make satisfactory judgments in light of the values of the goods we’re trading. If I buy a candy bar at the store on my way to work, I can (at least on reflection) determine that certain considerations clearly count in favor of purchasing the candy bar and others clearly count against it. But when I weigh intuitions against parts of theories and parts of theories against each other, I begin to lose my grasp on what the exchange rate is. How do I know when to trade off an intuition that really impresses itself upon me for a theory with simpler parts? Exactly how intense must an intuition be before it becomes practically non-negotiable when doing a philosophical cost-benefit analysis? Questions like these throw me for a loop, especially when I’m in a metaphysical realist mood. Perhaps anti-realists will have an easier time coping with this, but those sorts of views never satisfied me. There are parts of philosophy (like some areas in metaphysics) that never really struck me as open to a complete anti-realist analysis, so at least for me global anti-realism is off the table. At the moment, I’m completely puzzled.

Why Veganism isn't Obligatory

I’ve written a bit about animal ethics on this blog, and most of it has been about animal rights. The sorts of rights that seem most plausible to ascribe to animals are negative rights, such as the right not to be unjustly harmed. If animals have rights, they probably have positive rights as well. For example, if you’re cruising around on your new boat with your dog and you see that he has fallen overboard, it seems like your dog has the right to be rescued by you, assuming that you can do so without endangering yourself or others. You’re obligated to rescue your dog, assuming that he has rights that can generate obligations for you. So, animals can have both positive and negative rights.

An interesting question that arises when we consider animal rights is whether they generate obligations for us to become vegans. I take veganism to be a set of dietary habits that exclude almost all animal products. On my view, vegans can consume animal products in very specific situations. For example, if a vegan comes across a deer that has just been killed by a car, it is permissible for her to consume the deer and use its parts for whatever purposes she sees fit. However, circumstances like the dead deer are very rare, and it’s doubtful that most vegans could survive off of those sorts of animal products, so most vegans will not consume any animal products. The sorts of vegans who seek out opportunities to consume animal products like roadkill are called “freegans”. Other instances of vegan-friendly animal products are things found in the trash and things that have been stolen.

Most vegans would agree that purchasing chickens for your backyard and consuming the eggs they produce is impermissible. If they think animals have rights, then having backyard chickens might seem akin to owning slaves. In both instances, beings with rights are considered the property of people. So, owning chickens is a form of slavery according to this view. I want to challenge this view by using some arguments developed in a recent paper called “In Defense of Backyard Chickens” by Bob Fischer and Josh Milburn.

Imagine that a person, call her Alice, has studied chicken cognition and psychology such that she understands the best way to house chickens according to their needs. She builds the right sort of housing for chickens, she purchases high-quality, nutritious feed for her chickens, and she makes sure they are safe from predators and the elements. Alice really cares about animal welfare, so her project is done in the interests of the chickens she plans to buy. She sees herself as giving the chickens a life they deserve in an environment best suited for their welfare. She then goes and buys some chickens and lets them loose in their new home. She tends to their needs and makes sure they’re comfortable. She then collects the eggs they lay and consumes them in various ways. I don’t think Alice has done anything wrong, but some vegans may disagree.

To some vegans, it may seem like Alice has built slave quarters for her new egg-producing slaves. However, it seems to me that Alice has liberated the chickens in a way that’s analogous to an abolitionist buying the freedom of an enslaved human. If it’s permissible to buy the freedom of a slave by paying into an unjust institution like the slave-trade, then it seems like the same holds for buying the freedom of chickens. But, you may object, the chickens aren’t free! They’re still enclosed in Alice’s backyard, unable to leave. If you bought the freedom of a human and then put them in a backyard enclosure, we could hardly praise you as a liberator! Well, in the case of humans it’s wrong to force them into backyard enclosures. But that’s because the interests of humans are such that we make humans worse off by forcing them into enclosures in backyards. Humans aren’t the sorts of beings that need restrictions on their movement to guarantee their well-being. If anything, humans need free movement to have a high level of well-being. One of the reasons human slavery is so bad is because of the restriction on the freedom of movement of humans. Humans enjoy being able to go where they want; preventing that is to harm them.

When it comes to chickens, restricting their movement is actually in their interests. If we bought chickens and then just let them loose, they would probably die pretty quickly. Depending on where you release them and what time of the year it is, they could die of exposure or from predation. They could also walk into traffic and die, or they might end up starving because they won’t be able to find adequate nutrition. So, it seems like chicken interests don’t include complete freedom of movement, but rather some level of confinement for protection. Obviously not the level of confinement found in factory farms or even smaller commercial farms, but something that keeps predators and the elements out. So, the analogy between confining chickens and confining humans doesn’t hold, because it is in the interests of chickens and not humans to be confined to some extent.

One objection that might arise is that by buying chickens, Alice feeds into an unjust system that will only be perpetuated by her actions. Fair enough, I guess, but it seems like the act of purchasing a few chickens is causally impotent with respect to furthering the unjust system of selling chickens for profit. If Alice didn’t buy those chickens, I doubt the store would have felt it, and the industry at large definitely wouldn’t feel it. The chickens probably would’ve been bought by somebody else anyway, and they probably wouldn’t have been treated nearly as well as if Alice had bought them. But leaving that aside, this seems like a consequentialist objection. However, we’re in the land of the deontic with all of this rights talk, and it seems like chickens have a right to be rescued from their circumstances. So even if Alice somehow feeds into an unjust system by buying her chickens, that badness is outweighed or overridden by the right to rescue that those chickens have. If anything, Alice has an obligation to buy those chickens, given her ability to provide them with the lives to which they are entitled.

Another objection is that by purchasing chickens, Alice is treating them as property. Even if that’s true, it still seems better for the chickens that they are treated like property by Alice than by somebody less interested in their welfare. The chickens may have a right not to be owned, and perhaps Alice’s relationship to them is one of an owner, but it may still be in their interests to be owned by Alice. Their right not to be owned is outweighed by the potential harm they will experience if they’re bought by anybody else. Alice is their best bet. However, it is unclear that Alice is treating them as property. Another way of looking at this is that Alice is buying the freedom of the chickens. They will no longer be the property of others. Instead, they get to live out their lives in the best conditions chickens can have. Now, you might respond by saying that living in Alice’s backyard isn’t true freedom because the chickens’ movement is restricted, but I already dealt with that objection above.

One last objection is that by obtaining and consuming eggs, Alice is illegitimately benefiting from an arrangement that is otherwise permissible. This objection concedes that Alice can keep backyard chickens as long as she tends to their well-being sufficiently. But, the objection goes, Alice is illegitimately benefiting from her chickens. Perhaps the chickens also have a right to raise families, and by consuming their eggs Alice is depriving them of families. However, Alice could allow the chickens to procreate within limits. Obviously they cannot overpopulate the land they inhabit, because that would cause an overall decrease in well-being. In light of these considerations, Alice cannot allow every egg to result in a new chicken, so it seems like she can remove excess eggs from the chickens’ homes.

Maybe the chickens have property rights over their eggs. By taking the eggs, Alice is effectively stealing from her chickens. It isn’t clear to me that animals have property rights, but maybe they do. Even if the chickens own their eggs, it seems like Alice can collect some of them as a form of rent. There is, then, mutual benefit between Alice and the chickens. Alice gives the chickens a place to live and food, and in return Alice gets some of their eggs. The relationship between Alice and her chickens is closer to people renting a place to live and their landlord than it is to a thief and her victims, or squatters and a landowner.

Could the eggs be used for something more noble than as Alice’s food? Maybe, but it still seems permissible for Alice to eat the eggs. Sure, she could donate them or use them to feed other animals, but it seems like a stretch to say that Alice has an obligation not to consume the eggs and instead give them away. Even if it’s better that she gives them away, she’s still allowed to consume them. There are actions that are permissible even if they aren’t optimal, and Alice consuming the eggs seems to qualify.

If I’m right, and Alice is allowed to consume the eggs she collects, then Alice is not obligated to be a vegan. Eggs are animal products and pretty much every vegan would say that you shouldn’t eat them. So, it seems like veganism is not obligatory. Consuming animal products can sometimes be permissible if they’re obtained in the right way.

This post has been heavily influenced by a recent paper by Bob Fischer and Josh Milburn. Their paper articulated a lot of the thoughts I’ve had about veganism and moral obligations better than I could. Pretty much all of the arguments, objections, and responses draw from their paper. I wrote this post to summarize some of their arguments, and to draw attention to their paper. Bob Fischer is my favorite philosopher working on animal ethics. I recommend all of his stuff.

Check out their paper here.
Check out Bob Fischer’s work here.

Why Verificationism isn't Self-Refuting

In the early to mid twentieth century, there was a philosophical movement stemming from Austria that aimed to do away with metaphysics. The movement has come to be called Logical Positivism or Logical Empiricism, and it is widely seen as a discredited research program in philosophy (among other fields). One of the often-repeated reasons that Logical Empiricism is untenable is that the criterion the positivists employed to demarcate the meaningful from the meaningless is, when applied to itself, meaningless, and therefore self-refuting. In this post, I aim to show that the positivists’ criterion does not result in self-refutation.

Doing away with metaphysics is a rather ambiguous aim. One can take it to mean that we ought to rid universities of metaphysicians, encourage people to cease writing and publishing books and papers on the topic, and adjust our natural language such that it does not commit us to metaphysical claims. Another method of doing away with metaphysics is by discrediting it as an area of study. Logical Positivists saw the former interpretation of their aim as an eventual outgrowth of the latter interpretation. The positivists generally took their immediate goal to be discrediting metaphysics as a field of study, and probably hoped that the latter goal of removing metaphysics from the academy would follow.

Discrediting metaphysics can be a difficult task. The positivists’ strategy was to target the language used in expressing metaphysical theses. If the language that metaphysicians employed was only apparently meaningful, but underneath the surface it was cognitively meaningless, then the language of metaphysics would consist of meaningless utterances. Cognitive meaning consists of a statement being truth-apt, or having truth conditions. If a statement isn’t truth-apt, then it is cognitively meaningless, but it can serve other linguistic functions besides assertion (e.g. ordering somebody to do something isn’t truth-apt, but it has a linguistic function).

If metaphysics is a discourse that purports to be in the business of assertion, yet it consists entirely of cognitively meaningless statements, then it is a failure as a field of study. But how did the positivists aim to demonstrate that metaphysics is a cognitively meaningless enterprise? The answer is by providing a criterion to demarcate cognitively meaningful statements from cognitively meaningless statements.

The positivists were enamored with Hume’s fork, which is the distinction between relations of ideas and matters of fact, or, in Kant’s terminology, the analytic and the synthetic. The distinction was applied to all cognitively meaningful statements. So, for any cognitively meaningful statement, it is necessarily the case that it is either analytic or synthetic (but not both). The positivists took the criterion of analyticity to be a statement’s negation entailing a contradiction. Anything whose negation does not entail a contradiction would be synthetic. Analytic statements, for the positivists, were not about extra-linguistic reality, but instead were about concepts and definitions (and maybe rules). Any claim about extra-linguistic reality was synthetic, and any synthetic claim was about extra-linguistic reality.

Synthetic statements were taken to be cognitively meaningful just if they could be empirically confirmed. The only other cognitively meaningful statements for the positivists were analytic statements and contradictions. This is an informal statement of the verificationist criterion for meaningfulness. Verificationism was the way that the positivists discredited metaphysics as a cognitively meaningless discipline. If metaphysics consisted of synthetic statements that could not be empirically confirmed (e.g. the nature of possible worlds), then metaphysics consisted of cognitively meaningless statements. In short, the positivists took a non-cognitivist interpretation of the language used in metaphysics.    

Conventional wisdom says that verificationism, when applied to itself, results in self-refutation, which means that the positivists’ project is an utter failure. But why does it result in self-refutation? One reason is that it is either analytic or synthetic, but it doesn’t appear to be analytic, so it must be synthetic. But if the verificationist criterion is synthetic, then it must be empirically confirmable. Unfortunately, verificationism is not empirically confirmable, so it is cognitively meaningless. Verificationism, then, is in the same boat with metaphysics.

Fortunately for the positivists, the argument above fails. First off, there are ways to interpret verificationism such that it is subject to empirical confirmation. Verificationism could express a thesis that aims to capture or explicate the ordinary concept of meaning (Surovell 2013). If it aims to capture the ordinary concept of meaning, then it could be confirmed by studying how users of the concept MEANING employ it in discourse. If such concept users employ the concept in the way the verificationist criterion says they do, then the criterion is confirmed. So, given that understanding of verificationism, it is cognitively meaningful. If verificationism instead aims to explicate the ordinary concept of meaning, then it would be allowed more leeway when it deviates from standard usage of the ordinary concept, in light of its advantages within a comprehensive theory (Surovell 2013). Verificationism construed as an explication of the ordinary concept of meaning, then, would be subject to empirical confirmation if the overall theory it contributes to is confirmed.

Secondly, if one takes the position traditionally attributed to Carnap, then one can say that the verificationist criterion is not internal to a language, but external. It is a recommendation to use language in a particular way that admits of only empirically confirmable, analytic, and contradictory statements. Recommendations are not truth-apt, yet they serve important linguistic functions. So, verificationism may be construed non-cognitively, as a recommendation motivated by pragmatic reasons. There’s nothing self-refuting about that.  

Lastly, one could take verificationism to be internal to a language, in Carnap’s sense, and analytic. However, the criterion would not aim to capture the ordinary notion of meaning; instead, it would be a replacement of that notion. Carnap appears to endorse this way of construing verificationism in the following passage:

“It would be advisable to avoid the terms ‘meaningful’ and ‘meaningless’ in this and in similar discussions . . . and to replace them with an expression of the form “a . . . sentence of L”; expressions of this form will then refer to a specified language and will contain at the place ‘. . .’ an adjective which indicates the methodological character of the sentence, e.g. whether or not that sentence (and its negation) is verifiable or completely or incompletely confirmable or completely or incompletely testable and the like, according to what is intended by ‘meaningful’” (Carnap 1936).

Rather than documenting the way ordinary users of language deploy the concept MEANING, Carnap appears to be proposing a replacement for the ordinary concept of meaning. The statement of verificationism is internal to the language in which expressions of meaning are replaced with “a . . . sentence of L” where ‘. . .’ is an adjective that indicates whether or not the sentence is verifiable, and thus is analytic in that language. The motivation for adopting verificationism thus construed would then be dependent on the theoretical and pragmatic advantages of using that language.

So, verificationism can be construed as synthetic, analytic, or cognitively meaningless. It could be considered a recommendation to use language in a certain way, and that recommendation is then motivated by pragmatic reasons (or other reasons), which makes it cognitively meaningless but linguistically useful, which does not result in self-refutation. Or, it could be considered a conventional definition aimed to capture or explicate the ordinary concept of meaning. It would then be verifiable because it could be confirmed by an empirical investigation into the way people use the ordinary notion of meaning, or by its overall theoretical merits. Lastly, it could be internal to a language, and thus analytic, but not an attempt at capturing the ordinary notion of meaning. Instead, it would be a replacement that served a particular function within a particular language that is itself chosen for pragmatic (non-cognitive) reasons. In any of these construals, verificationism is not self-refuting.

Works Cited:

Carnap, Rudolf. "Testability and Meaning - Continued." Philosophy of Science. 1936. Web.

Surovell, Jonathan. "Carnap’s Response to the Charge that Verificationism is Self-Undermining." 2013. Web.

An Introduction to Morality and Emotions

When doing moral theory, the question of emotion will inevitably arise. Some theorists think that emotions should not play any role because they are antithetical to reliable moral reasoning. Others doubt that emotions are a wholly distorting influence. In this post, I’m going to lay out some ways in which emotions may feature in our theorizing about morality.

A popular view takes emotions to be intentional states that present their objects in an evaluative light. For instance, to be happy about graduating from college is for the state of affairs of graduating from college to be presented to a subject such that she has certain positive feelings towards it. This view of emotion becomes relevant to moral theorizing when the object of an emotion is a moral state of affairs. Your emotions are moralized, in this sense, when they are about moral states of affairs.

Another way in which emotions are relevant to morality is if they provide us access to moral facts. If emotions are our means of epistemic contact with moral reality, then emotions are epistemically relevant to morality. Emotions may then be ways of representing states of affairs with a certain sensitivity to morally salient features of what’s being represented. One simplistic possibility is that our emotional reaction to the idea of pushing a man off a bridge to stop a train that is headed for five people tied to the track provides us with epistemic access to the separateness of persons, which explains why it’s wrong to push the man to his death.

However, there may be a flip side to the epistemic view of emotions: emotions could also distort our sensitivity to the morally salient features of states of affairs. Peter Singer has defended a view along these lines, arguing that deontological intuitions are subject to distorting influences rooted in our evolutionary development.

Emotions can also be what motivates us to act morally. It could be the case that we need emotions to move us to act morally, which would make emotions necessary for moral action. On this view, a robot with a full set of true moral beliefs would be unmoved to act on them if it were incapable of experiencing emotions. Mere belief is insufficient on this account of moral motivation.

We may also be subject to evaluation based on the emotions we experience. There are clearly appropriate and inappropriate ways to feel at a funeral: if somebody began laughing uncontrollably, we would probably consider that inappropriate, whereas we would be tolerant of grieving in the form of loud crying. A similar view is defended by Justin D’Arms and Daniel Jacobson.

One last way that emotions can be relevant to moral theorizing is if they are integral to our moral development. Perhaps eliciting certain emotions is a necessary means of moral education. Making developing moral agents experience things like guilt over wrongdoing by pointing out how they’ve let a loved one down could be formative for them. In this sense, emotions are part of the development of moral agents.

There are probably other ways in which emotions are morally relevant that I’ve missed. If you are aware of any more, let me know in the comments section below.

A Problem for the New Consequentialism

In a previous post, I outlined a non-deontic form of consequentialism that was supposed to avoid what I called the extension problem. The extension problem plagues deontic consequentialism, which is the view that the rightness, wrongness, permissibility, and impermissibility of actions are determined by their consequences. So, a simple hedonistic act utilitarian will say that there is one categorically binding duty, and that is to maximize pleasure when we act. But such a view suffers from intuitively compelling counterexamples. So it seems like hedonistic act utilitarianism gets the extension of our deontic concepts wrong.

Non-deontic consequentialism is designed to avoid the extension problem, because it defers to the way those concepts are applied by a society at a given time. By doing so, the theory allows the extensions of our deontic concepts to be whatever our society takes them to be, which seems to preserve our intuitions about particular cases, like the drifter being killed by a surgeon for his organs. Hedonistic act utilitarianism requires that, if the surgeon is in an epistemic situation where he can rule out negative consequences, and he knows that he can use these organs to save five patients, then he is duty-bound to kill the drifter and harvest the organs. Non-deontic consequentialism avoids this because your typical person, who is not a thoroughly committed act utilitarian, would not agree that the extension of DUTY covers the surgeon’s organ-harvesting endeavor.

An alternative that avoids the extension problem is scalar utilitarianism, which does without deontic concepts like RIGHT and WRONG. Instead, we judge actions as better or worse than available alternatives. The problem with this view is that it just seems obvious that it is wrong to torture puppies for fun. A scalar utilitarian cannot give an adequate account of what makes that act wrong, so she must explain why it seems so obvious to say that torturing puppies is wrong even though, by her lights, that judgment is false.

Setting aside both of these forms of consequentialism, I want to discuss the non-deontic consequentialism I outlined in my other post. On the view I described, the rightness and wrongness of actions, along with their other deontic properties, are a function of the social conventions that obtain at a given time in a given society. The consequentialism comes in at the level of critiquing and improving those social conventions.

Moral progress occurs when we adopt social conventions that are better by consequentialist standards. So, for instance, it used to be a social convention in the United States that we could have property rights over other human beings, and transfer those rights for currency. Those conventions are no longer in place in the United States, and at the time they were, they could have been critiqued by consequentialist standards. Those conventions were not better than available alternatives at the time, so it would have been better not to have the institution of chattel slavery. But these facts about betterness do not determine what is right or wrong. Rather, they should guide efforts to improve social conventions, and thereby change the extensions of our deontic concepts.

This seems all well and good, but I am a bit worried. This view entails that social conventions have normative force, no matter what. So, just because something is a social convention, we thereby have at least some moral reason to abide by it. Take slavery again; such an institution was once enshrined in many social conventions. Does it follow that at the time, everybody had at least some moral reason to abide by the conventions that said we ought to return escaped slaves to their so-called owners? It seems to me that slavery is and always was wrong. There was never a time at which it was right to own another human being. I think that the basis of my concern is that deontic judgments, especially when applied to important things like slavery, are not indexed to times and places. That a human being was sold in a marketplace in 1790s Virginia does not change the deontic status of the transaction. What exactly is the morally relevant difference between that time period and today? Why is it wrong now to sell another human being, but it was not in 1790s Virginia?

One potential response to my worries is to point out that I’m making these judgments from a particular time period when the extension of our deontic concepts rules out slavery being permissible. So, perhaps I find the entailment of this theory appalling because my intuitions are shaped by the extension of the deontic concepts I use. Since 1790s Virginia, we have undergone moral progress, and now it is wrong to own slaves because of the shift in social conventions. It could even be that according to our deontic concepts’ extensions now, it was wrong in the 1790s to buy and sell slaves.

I think these considerations certainly make my concerns less worrisome. But I’m experiencing a residual anxiety. It still seems counterintuitive to say that, if we had grown up in 1790s Virginia, our claims about the rightness and wrongness of slavery would be flipped. We would have an inverted moral spectrum when it comes to deontic judgments about slavery. That is what I find counterintuitive. The theory was developed explicitly to address the extension problem, which is that deontic consequentialists seem to get the extensions of our deontic concepts wrong. The reason I think they get those extensions wrong is that their theories entail counterintuitive results. They end up having to bite a lot of bullets, such as the organ-harvesting surgeon. But if non-deontic consequentialism also generates counterintuitive entailments, like slavery being permissible in 1790s Virginia for people at that time, then is it any better than its deontic consequentialist competitors?


A New Consequentialism

Consequentialism is a family of theories that takes the consequences of actions to be the location of the right-making or good-making features of those actions. For the sake of simplicity, let’s work with a very basic consequentialist view, which is that we ought to maximize the good. The good is identified with happiness. So, we ought to maximize happiness with our actions.

The problem with this view is that it says the right thing to do, what we ought to do, is maximize happiness. However, intuitively, there are situations where maximizing happiness is not what we ought to do. For instance, nobody but the most committed act utilitarian would say that it’s ok to kill a homeless person to supply his organs to five needy recipients, even if nobody would ever find out.

So, this simple consequentialism fails to give a satisfying analysis of deontic concepts, like RIGHT and WRONG. In other words, it gives the wrong application conditions for RIGHT and WRONG, because it entails that certain actions which fall within the extension of WRONG actually fall within the extension of RIGHT.

What could we do to revise our simple consequentialism? Well, we could try not giving an analysis of deontic concepts at all. We could become scalar utilitarians, who rank actions on a scale from best to worst. Maybe moral judgments that involve deontic concepts are just wrongheaded, and we can do without concepts like RIGHT and WRONG. Instead, let’s just talk about better or worse actions: actions which we have more or less reason to do.

This just isn’t satisfying, though. Clearly torturing children for fun isn’t just worse than not torturing them for fun, it’s wrong. We ought not to torture children for fun. There’s nothing wrongheaded about that moral judgment. So, we need to give an account of deontic concepts if we want a theory that captures what we do when we engage in moral discourse and deliberation.

Here is what I take to be the best way to deal with this problem. If we try to give a consequentialist analysis of deontic concepts, we get the extensions of those concepts wrong. If we try to avoid giving an analysis, then we exclude a large portion of our moral discourse from our theory. So, we should analyze deontic concepts as conventions based on contingent social arrangements. We should still employ deontic concepts in moral judgment, and they play an indispensable role in our moral lives. But they do not reflect some fundamental structure of the moral world; rather, they reflect contingent social arrangements.

The role that consequentialism can play in this theory is as a means by which we can critique these contingent social arrangements. So, we could give consequentialist critiques of the ways in which deontic concepts are deployed in specific classes of moral judgments. For instance, if the concept RIGHT once had within its extension returning escaped slaves to their so-called owners, then that deontic concept could be revised according to a consequentialist critique of the institution of slavery. Our deontic moral judgments, judgments of right and wrong, permissibility and impermissibility, are ultimately subject to a consequentialist evaluation if the need arises.

Is this just rule utilitarianism? I don’t think so. Typically, rule utilitarians think we ought to obey a certain idealized set of rules which pass the consequentialist test of goodness-maximization. What I’m proposing is that we work with the rules we already have, and revise as the need arises, rather than reason according to an idealized set of good-maximizing rules. Besides, a rule utilitarian analysis of deontic concepts will probably fall victim to the extension problem I raised above against our simple consequentialist analysis.

Check out Brian McElwee's paper on consequentialism for a similar account of non-deontic consequentialism, on which this post is based.