What I'm Currently Working On

I haven’t uploaded anything to this blog in a while so I figured I would post a brief overview of what I’ve been thinking about and working on. I should start regularly uploading normal blog posts soon.

My current research is almost entirely based on a theory of belief formation and its implications for epistemology, rationality, and Streumer’s argument that we can’t believe a global normative error theory.

The theory of belief formation that I’m working with is called the Spinozan theory. The theory is situated as an alternative to the Cartesian theory of belief formation. The Spinozan theory says that we automatically form a belief that p whenever we consider that p. This means that the process of belief formation is automatic and outside of our conscious control. This theory has serious implications for several areas, such as rationality and epistemology.

In terms of epistemology, lots of philosophers working in that area will talk about belief formation in ways that presuppose a Cartesian theory. The Cartesian theory says that the process of belief formation and the process of belief revision are on a par; both are within our conscious control. When we form a belief we base it on considerations like evidence. We consider the evidence for and against the proposition and then we form a belief. However, if the Spinozan theory is true then this is a misrepresentation of how we actually form beliefs. According to the Spinozan, we automatically form a belief whenever we consider a proposition. We may be able to revise our beliefs with conscious effort, but that process requires more mental energy than the process of forming a belief. If the Spinozan is right, we need to investigate whether or not we can do without talk of control over belief formation in epistemology.

The Spinozan theory entails that we believe lots of contradictory things. That we believe lots of contradictory things runs contrary to our ordinary view of ourselves as relatively rational creatures who do their best not to hold inconsistent beliefs. If any plausible account of rationality requires at least a lot of consistency among our beliefs, then we’re pretty screwed. But we might be able to work with a revisionary account of rationality that sees being rational as a constant process of pruning the contradictory beliefs from one’s mind through counterevidence. The problem with that sort of account, though, is that belief revision is an effortful process that is sensitive to cognitive load effects, whereas belief formation is automatic and will occur whenever one considers a proposition. So, we’ll basically be on a rationality treadmill, especially in our current society where we’re bombarded with things that induce cognitive load effects.

Another project that I’m going to start working on is applying the Spinozan theory to propaganda. I think that somebody interested in designing very effective propaganda should utilize the Spinozan theory. For example, knowing that belief formation is automatic and occurs whenever a person considers a proposition would help one design some pretty effective propaganda, since newly formed beliefs can root themselves in a person’s mental processes and influence their behavior over time. If you throw in some cognitive load-inducing effects, then you can make it even more difficult for people to revise their newly formed beliefs.

The last project I’m currently working on is a paper in which I argue against Bart Streumer’s case against believing the error theory. According to Streumer, one cannot believe a global normative error theory because one would believe that one has no reason to believe it, which we can’t do according to him. I think that if we work with the Spinozan theory then this is clearly false, since we automatically form beliefs about things that we have no reason to believe. My guess is that proponents of Streumer’s view will push back by arguing that they are talking about something different from what I am when they use the word “belief”. But I think that the Spinozan theory tracks the non-negotiable features of our ordinary conceptions of belief closely enough to qualify as an account of belief in the ordinary sense.

For those interested in the Spinozan theory, click this link. I should be regularly uploading posts here soon.


Seemings Zombies

Let’s assume that seemings are sui generis propositional attitudes that have a truthlike feel. On this view, seemings are distinct mental states from beliefs and other propositional attitudes. It at least seems conceivable to me that there could be a being that has many of the same sorts of mental states that we have except for seemings. I’ll call this being a seemings zombie.

The seemings zombie never has mental states where a proposition is presented to it as true in the sense that it has a truthlike feel. Would such a being engage in philosophical theorizing if presented with the opportunity? I’m not entirely sure whether the seemings zombie would have the right sort of motivation to engage in philosophizing. If we need seemings or something similar to them to motivate philosophical theorizing, then seemings zombies won’t be motivated to do it.

But do we need seemings to motivate philosophizing? I think we might need them if philosophizing includes some sort of commitment to a particular view. What could motivate us to adopt a particular view in philosophy besides the fact that that view seems true to us? I guess we could be motivated by the wealth and fame that comes along with being a professional philosopher, but I’m skeptical.

Maybe we don’t need to adopt a particular view to philosophize. In that case we could say that seemings zombies can philosophize without anything seeming true to them. They could be curious about conceptual connections or entailments of theories articulated by the great thinkers, and that could be sufficient to move them to philosophize. I’m not sure whether or not this would qualify as philosophizing in the sense many of us are acquainted with. Even people whose careers consist of the study of a historical figure’s intellectual works seem to commit themselves to a particular view about that figure. Kant interpreters have views about what Kant thought or argued for, and my guess is those views seem true to those interpreters.

The seemings zombies might still be able to philosophize, though. Maybe they would end up as skeptics, looking down on all of us doing philosophy motivated by seemings. We seemings havers end up being motivated by mental states whose connection to the subject matter they are motivating us to take stances on is tenuous at best. The seemings zombies would then adopt skeptical attitudes towards our philosophical views. But I’m still worried, because skeptics like to give us arguments for their views about knowledge, and my guess is a lot of sincere skeptics are motivated by the fact that skepticism seems true to them. I could just be naive, though; there may be skeptics who remain uncommitted to any philosophical view, including their skepticism. I’m just not sure how that’s supposed to work.

One reaction you might have to all of this is to think that seemings zombies are incoherent or not even prima facie conceivable. That may be true, but it doesn’t seem that way to me.


 

Mental Incorrigibility and Higher Order Seemings

Suppose that the phenomenal view of seemings is true. So, for it to seem to S that P, S must have a propositional attitude towards P that comes with a truthlike feel. Now suppose that we are not infallible when it comes to our own mental states. We cannot be absolutely certain that we are in a certain mental state. So, we can make mistakes when we judge whether or not it seems to us that P.

Now put it all together. In cases where S judges that it seems to her that P, but she is mistaken, what is going on? Did it actually seem to her that P or did she mistakenly judge that it did? If it’s the former, then it is unclear to me how S could mistakenly judge that it seems to her that P. Seeming states on the phenomenal view seem to be the sorts of mental states we should be aware of when we experience them. If it's the latter, then it is unclear whether higher order seemings can solve our problem.

If a subject is experiencing a seeming state and judges that it seems to her that P, then there has to be some sort of luck going on that disconnects the seeming state from her judgment such that she does not know that it seems to her that P. Maybe she’s very distracted when she focuses her awareness onto her seeming state to form her judgment and that generates the discrepancy. I’m not really sure how plausible such a proposal would ultimately be. If, on the other hand, the subject is not actually in a seeming state, then we need to explain what is going on when she mistakenly judges that she is in one. One possibility is that there are higher order seemings. Such seemings take first order seemings as their contents. On this view, it could seem to us that it seems that P is the case.

The idea of higher order seemings repulses me, but it could be true. Or, in a more reductionist spirit, we could say that higher order seemings are just a form of introspective awareness of our first order seemings. But I am worried that such a proposal would reintroduce the original problem linked to fallibility. If I can mistakenly judge that it seems to me that it seems to me that P, then what is going on with that higher order (introspective) seeming? The issue seems to come back to bite us in the ass. But it might do that on any proposal about higher order seemings, assuming we have accepted that we are not infallible mental state detectors. Maybe we just need to accept a regress of seemings, or maybe we should stop talking about them. Like always, I’ll just throw my hands up in the air and get distracted by a different issue rather than come up with a concrete solution.

Costs, Benefits, and the Value of Philosophical Goods

Philosophy is a very diverse field in which practitioners employ different dialectical and rhetorical techniques to advance their views and critique those of their opponents. But despite the heterogeneity, there seems to me to be a prevailing attitude towards the overarching method by which we choose philosophical theories among contemporary philosophers. We are all supposed to acknowledge that there are no knockdown arguments in philosophy. Philosophical theories are not the sorts of beasts that can be vanquished with a quick deductive argument, mostly because there is no set of rules that we can use to determine if a theory has been refuted. Any proposed set of rules will be open to challenge by those who doubt them, and there will be no obvious way to determine who wins that fight.

So, the process by which we compare and choose among philosophical theories cannot be guided by which theories can be refuted by knockdown arguments; rather, we seem to be engaged in a form of cost-benefit analysis when we do philosophy. We look at a theory and consider whether its overall value outweighs that of its competitors, and then we adopt that theory rather than the others. One way of spelling this process out is in terms of reflective equilibrium; we consider the parts of the theories we are comparing and the intuitions we have about the subject matter that the theories are about, and then we weigh those parts and intuitions against each other. Once we reach some sort of state of equilibrium among our intuitions and the parts that compose the theory we’re considering adopting, we can be justified in believing that theory.

Reflective equilibrium seems to be the metaphilosopher’s dream, since it avoids the problems that plague the knockdown argument approach to theory selection, and it makes room for some level of reasonable disagreement among practitioners, since not everybody has the same intuitions, and the intuitions shared among colleagues may vary in strength (in both intrapersonal and interpersonal senses). Unfortunately for me, I worry a lot about how reliable our methods are at getting us to the truth, and the process I crudely spelled out above does not strike me as satisfactory.

To be brief, my concern is that we have no clear way of determining the values of the things we are trading when we do a philosophical cost-benefit analysis. In other cases of cost-benefit analyses, it seems obvious to me that we can make satisfactory judgments in light of the values of the goods we’re trading. If I buy a candy bar at the store on my way to work, I can (at least on reflection) determine that certain considerations clearly count in favor of purchasing the candy bar and others clearly count against it. But when I weigh intuitions against parts of theories and parts of theories against each other, I begin to lose my grasp on what the exchange rate is. How do I know when to trade off an intuition that really impresses itself upon me for a theory with simpler parts? Exactly how intense must an intuition be before it becomes practically non-negotiable when doing a philosophical cost-benefit analysis? Questions like these throw me for a loop, especially when I’m in a metaphysical realist mood. Perhaps anti-realists will have an easier time coping with this, but those sorts of views never satisfied me; there are parts of philosophy (like some areas in metaphysics) that never really struck me as open to a complete anti-realist analysis, so at least for me global anti-realism is off the table. At the moment, I’m completely puzzled.

An Introduction to Abhidharma Metaphysics

The Abhidharma school of Indian Buddhism represents one of the earliest attempts to form a complete, coherent philosophical system based on the teachings of the Buddha. Abhidharma metaphysics rests on mereological reductionism: the claim that wholes are reducible to their parts. On the Abhidharma view, a composite object like a table is nothing more than its parts arranged table-wise. The “table” is a convenient designator based on our shared interests and social conventions. Crucially, for Abhidharma Buddhists, this also extended to the self. The self, rather than being an enduring substance, is reducible to a bundle of momentary mental states (Carpenter, 2).

Based on this principle of reductionism, Abhidharma went on to develop the Doctrine of Two Truths. A statement is “conventionally true” if it is based on our commonsense view of the world, and leads to successful practice in daily life. Thus, it is conventionally true that macro objects such as tables and chairs exist. A statement is “ultimately true” if it corresponds to the facts as they are, independent of any human conventions. According to the Abhidharma view, the only statements that can be considered ultimately true are statements about ontological simples: entities that cannot be further broken down into parts. The tendency to think that statements involving composite objects like tables are ultimately true arises when we project our interests and conventions onto the world.

The primary opponents of the Abhidharma Buddhists were philosophers of the Nyāya orthodox tradition, about which I have written before. Nyāya philosophers were unflinching commonsense realists. They held that wholes existed over and above their parts. The word “table” is not merely a convenient designator or a projection of our interests onto the world; the table is a real object that cannot be reduced to its parts. Nyāya held that there are simple substances and composite substances. Simple substances are self-existent and eternal. Composite substances depend on simple substances for their existence, but cannot be reduced to them. They possess qualities that are numerically distinct from the qualities of their component parts.

There are some obvious difficulties with the view that wholes exist in addition to their parts, and Abhidharma philosophers were quick to point this out. If the table exists in addition to its parts, it would follow that whenever we look at a table, we are looking at two different entities – the component parts and the (whole) table. How can two different objects share the same location in space? Nyāya philosophers responded by stating that wholes are connected to parts by the relation of inherence. In Nyāya metaphysics, inherence is an ontological primitive, a category that cannot be further analyzed in terms of something else. To put it very crudely, inherence functioned as a kind of metaphysical glue in the Nyāya system. The inherence relation is what connects qualities to substances. The quality redness inheres in a red rose. Similarly, the inherence relation also connects wholes with their parts. In this case, the whole – the table – inheres in its component parts.

At this point in the debate, the standard Abhidharma move was to ask how exactly wholes are related to their parts. Do wholes inhere wholly or partially in their parts? If wholes are real and not reducible to their parts, but nonetheless inhere only partially in their parts, it would mean that there is a further ontological division at play. We now have three different kinds of entities. The parts of the table, the parts of the whole that inhere in the parts of the table, and the whole. Now, what is the relation between the whole and the parts of the whole that inhere in the parts of the table? Does the whole inhere wholly or partially in the second set of parts? If it is the former, then the second set of parts becomes redundant, for the whole could simply inhere wholly in the first set of parts (that is, the parts of the table). If the whole inheres partially in the second set of parts, then we will have to introduce yet another whole-part distinction, and there is an obvious infinite regress looming.

The Nyāya school held that wholes inhere wholly in their component parts. They drew an analogy with universals to make the illustration clear. Just as the universal cowness inheres in every individual cow, the table inheres wholly in every one of its individual parts. 

Abhidharma philosophers raised a second set of difficulties for Nyāya. Consider a piece of cloth woven from different threads. According to the Nyāya view, the cloth is a substance that is not merely reducible to the threads. But now let us suppose I cannot see the whole cloth. Let us suppose most of the cloth is obscured from my view, and I only see a single thread. In this case, we would not say that I have seen the cloth. I am not even aware that there is a cloth – I think there is just a single thread. But if the Nyāya view is correct, then the cloth (the whole) inheres in every single thread, so when I see the thread, I should see the cloth as well. But since I don’t, it follows that the Nyāya view is incorrect.

Now consider a piece of cloth woven from both red and black threads. Since the cloth is a separate substance, and since composite substances possess qualities numerically distinct from their component parts, the cloth must have its own color. But is the color of the cloth red or black? Nyāya responded that the color of the cloth is neither red nor black, but a distinct “variegated” color (Siderits, 111). But this only multiplies difficulties. If the cloth is wholly present in its parts, and it possesses its own variegated color, why do I not see the variegated color when I look at its component parts? When I look at the red threads, all I see is red, and when I look at the black threads, all I see is black. I do not see the variegated color in the component parts and yet, if the whole inheres wholly in its parts, I should.

Finally, if the whole is a distinct substance over and above its parts, the weight of the whole must be greater than the sum of the weights of its parts. But we do not observe this when we weigh composite substances. This is highly mysterious on the Nyāya view. But these problems are all avoided if we simply accept that wholes are reducible to their parts.

Abhidharma is a broad tradition that encompasses numerous sub-schools. Two of the most prominent ones are Vaibhāṣika and Sautrāntika. While both sub-schools agree that everything is reducible to ontological simples, they disagree on the number and nature of these simples. The Vaibhāṣika school is fairly liberal in its postulation of simples, while Sautrāntika is conservative. Moreover, Vaibhāṣika treats simples as bearers of an intrinsic nature. According to Vaibhāṣika atomists, an earth atom, for instance, is a simple substance that possesses the intrinsic nature of solidity. The Sautrāntika school rejected the concept of “substance” entirely. There are numerous reasons for this (most of them epistemological, which I will cover in a subsequent essay), but roughly, it came down to this: We have no evidence of substances/bearers, only qualities. Further, there is no need to posit substances, because everything that needs to be explained can be explained without them. For Sautrāntika philosophers, an earth atom is not a substance that is the bearer of an intrinsic nature “solidity” – rather, it is simply a particular instance of solidity. Thus, in Sautrāntika metaphysics, there are no substances or inherence relations, there are simply quality-particulars. This position is similar to what contemporary metaphysicians call trope theory.

The term “reductionism” is often cause for confusion when used in relation to Abhidharma Buddhism. It must be emphasized that the kind of reductionism relevant here is mereological reductionism. Abhidharma Buddhists were not reductionists in the sense of believing that consciousness could be reduced to material states of the brain. All Abhidharma schools held that among the different kinds of ontological simples, some were irreducibly mental, as opposed to physical.   

Apart from mereological reductionism, the other key aspects of Abhidharma metaphysics are nominalism and atheism. I have covered the Buddhist approach to nominalism in a previous essay, so I will not go over it here. When it comes to atheism, it is important to recognize that Abhidharma Buddhists (like all Buddhists) were only atheistic in a narrow sense. They rejected the existence of an eternal, omnipotent creator of the universe. This did not mean that they were naturalists or that they rejected deities altogether. They believed in many gods, but these gods were not very different from human beings apart from being extraordinarily powerful. Venerating the gods was a means of obtaining temporary benefits in this life or a good rebirth, but the gods could offer no help with the ultimate goal of Buddhist practice: liberation from the cycle of birth and death. The gods themselves, being unenlightened beings, were stuck in the cycle of birth and death. To attain liberation one must seek refuge in the Buddha, the teacher of gods and men.

Works Cited:

Carpenter, Amber. Indian Buddhist Philosophy. Routledge, 2014. Print.  

Siderits, Mark. Buddhism as Philosophy: An Introduction. Ashgate, 2007. Print.

Free Will, Agent Causation, and Metaphysical Naturalism

It’s no longer uncommon for free will to be met with suspicion. The suspicion is even greater when it comes to libertarian free will, and overwhelming when it comes to agent causation. It largely stems from the notion that agent causation, or even free will in general, is inconsistent with Metaphysical Naturalism. This attitude is mistaken. Here I propose to show that even an agent causal account of action is consistent with Naturalism, which implies that free will in general is too. Finally, I’ll close by arguing that at least some people are justified in believing in free will.

1. Naturalism

Metaphysical Naturalism (MN) is a meta-philosophical position regarding the fundamental nature of Being, the world, etc. What it entails is largely debated, but I will be using two definitions that are generally accepted.

MN1: Everything that exists is natural. There are no supernatural entities or forces.

MN2: Reality is exhausted by space-time and its contents, or an ensemble of space-time manifolds.

MN1 is the most common version, but it’s largely uninformative because “natural” is left undefined. We’re merely left with picking out paradigmatic supernatural entities/forces such as ghosts, gods, magic, and the like, and asserting that nothing of the sort obtains. I prefer MN2, but I think assuming the truth of either one is sufficient for what I hope to demonstrate.

2. Free Will

To understand why people assume agent causation is inconsistent with MN, we have to clarify what free will is. First, the will is the capacity to deliberate, make decisions, and translate those decisions into action (Franklin, 2015). I take the folk conception of free will to mean that persons are sometimes able to exercise their will such that they could have done otherwise. That is, at least some decisions aren’t necessitated by their nature and/or environment.

More clearly, an action is free only if it satisfies the following conditions:

Sourcehood: The agent is the actual source of the action (e.g. no manipulation).

Intelligibility: The agent performs actions for reasons that are understood by the agent (e.g. a spontaneous jerk isn’t a free action).

Leeway: The agent is able to refrain from performing the action.

It’s often assumed that naturalism entails determinism, that determinism conflicts with the leeway condition, and that naturalism therefore conflicts with free will. But this entailment does not hold. There’s nothing about naturalism itself that implies that all causal relations are determinate (necessitated by the relevant antecedent conditions). All that’s required of causality on MN is that nature is causally continuous. That is, there is only one metaphysical causal kind within the world (i.e. dualism is false), and there aren’t external non-natural causal forces affecting the natural world, for these would be almost by definition supernatural. Further, contemporary physics already admits indeterminism in at least six interpretations of quantum mechanics (three remain agnostic, and four are explicitly deterministic). So if one is going to reject free will in virtue of MN, it can’t be because MN entails determinism. One might object that indeterministic events don’t take place in higher-level settings, such as the firing of a neuron, so a naturalistic interpretation of human behavior will be deterministic. First, there’s nothing about naturalism in itself that requires this. Second, whether some events in the brain operate indeterministically is an empirical thesis that remains to be settled, and there are already models of how this might work (Tse, 2014; Franklin, 2013; Weber, 2005).

Given what has been outlined above, we can make sense of an event causal libertarian account of free will fitting within MN. In these sorts of instances, one’s mental states cause one to act, but in such a way that one could have done otherwise. That is, the features of oneself that cause the action don’t necessitate it. One could have refrained or performed an altogether different action. It’s also helpful to note that this model fits nicely with the reductive account of mind, where any token mental state is identical to a particular brain state. Most philosophers specializing in free will recognize event causal libertarianism as a possibility worth considering, even if they remain skeptical of its reality (Balaguer, 2004, 2010).

3. Agent Causation & Substance Causation

This charitable tone tends to drop once agent causation is proposed. Proposing it is typically met with accusations of anti-scientific and “spooky” metaphysics, accusations primarily grounded in the assumption that agent causation implies substance dualism. Critics can’t imagine what the agent could be besides a disembodied mind that interacts with the body. I think the agent causal picture people have in mind is much like how Kant thought freedom of the will worked. Essentially, the physical world that we experience is fully deterministic. Everything runs like clockwork with the exception of human action. In addition to bodies, persons are also noumenal selves that transcend the empirical world, making sovereign unconstrained choices each time they deliberate and act. So on this picture, the world consists of two different sorts of causes: natural events and agents. Given this sort of description, it’s of little surprise that so few philosophers take agent causation seriously.

Before we contrast the previous description with how agent causation has been recently updated, it will be useful to offer a brief description of what event causation is supposed to be. Event causation essentially involves some complex state of affairs or process causing another. For example, a heart pumping causes the movement of blood or a brick being thrown causes the window’s shattering. Further, the way these events unfold is explained by whatever laws of nature happen to obtain, be they deterministic or probabilistic. Causation cashed out as event relations can either be understood as ontologically primitive or reducible to something more basic, such as facts concerning the global spatiotemporal arrangement of fundamental natural properties or sequential regularity.

Timothy O’Connor offers two similar, but philosophically distinct, analyses of causation which clearly sketch the relevant difference between event and agent causation (O’Connor, 2014):

Event causal analysis: “The having of a power P by object O1 at time t produces effect E in object O2.”

Agent Causal analysis: “Object O1 produces effect E, doing so in virtue of having power P at time t.”

In the first case it is the “possessing a power”, an event, which is the cause of the effect; in the second it is the object. What’s of crucial importance here is that the agent causal analysis isn’t actually just one of agent causation, but of the more general theory of substance causation. Substance causation is just the theory that substances or objects are what cause effects. So on this account, it’s not the throwing of the brick that causes the window to shatter; properly speaking, it’s the brick. Now this might sound absurd; how could the throwing of the brick not be a cause of the window’s breaking? The absurdity drops once we consider the thrower. Really, the thrower and the brick jointly cause the window’s shattering, where the throwing is a manifestation of a power possessed by the thrower. Powers theory is crucial to any plausible theory of substance causation. It’s not merely the object in itself that causes the effect, but the nature of the object that is constituted by the powers it possesses.

Most of the mysteriousness of agent causation disappears once we understand it as a species of substance causation. So take any ordinary substance: a rock, an electron, a water molecule, etc.; any time any substance causes an effect on another substance, we have an instance of substance causation. What distinguishes agent causation from ordinary instances of substance causation is that there is an intention behind it. This entails that agent causation is fairly commonplace within the animal kingdom, which itself is good reason to believe that agent causation is consistent with naturalism.

A robust defense of substance causation is beyond the scope of this paper, but I can briefly sketch some reasons for accepting it. One is the numerous problems with alternative theories of causation. The constant conjunction or sequential regularity theory is currently one of the most popular and has been since Hume proposed it. On this account, for x to cause y is just for it to be the case that every time x occurs, y occurs. So on this view there is no intrinsic or necessary connection between the fire and the smoke that follows; this is just the way the universe happens to unfold. A contentious assumption of this theory is that all instances of causality are temporally ordered. But we can make sense of non-temporal causation, such as two cards propping each other up or a ball making an impression on a pillow that it’s been resting on for eternity (i.e. there was no prior time when the ball was not affecting the pillow).

The other popular account reduces causation to counterfactual dependence, which runs something like this:

“1) If A had not occurred, B would not have occurred.

2) If A had occurred, B would have occurred.

3) A and B both occurred.” (Scholastic Metaphysics, pg. 60)

So the throwing of the brick causes the window breaking because, if you remove the throwing of the brick, the breaking would not have happened. One problem with counterfactual dependence is the infinite number of acts of omission that are involved in any causal sequence. So my successfully walking across the street was dependent on not being crushed by an elephant, not being transported, the earth not blowing up, etc. Another issue, applicable to both theories, is that both of them seem to get the dependence relation backwards. It’s because of causation that there is constant conjunction and counterfactual dependence. They are symptoms of causation.

Next, here is a simple argument in favor of substance causation (Whittle, 2016):

1. Some actual substances possess causal powers.

2. If a substance possesses a causal power, then it is efficacious.

3. If a substance is efficacious, then it can be a cause.

4. Some actual substances’ causal powers are manifested.

5. Therefore, some actual substances are causes.
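
To make the argument’s validity explicit, here is one possible regimentation in quantificational notation. The notation, and the bridging assumption (A) that a manifested power is an exercised one, are my own reconstruction rather than Whittle’s:

```latex
\begin{align*}
&(1)\quad \exists x\,(Sx \land Px) && \text{some substances possess causal powers} \\
&(2)\quad \forall x\,(Px \to Ex) && \text{whatever possesses a causal power is efficacious} \\
&(3)\quad \forall x\,(Ex \to \Diamond Cx) && \text{whatever is efficacious can be a cause} \\
&(4)\quad \exists x\,(Sx \land Mx) && \text{some substances' powers are manifested} \\
&(\mathrm{A})\quad \forall x\,(Mx \to Cx) && \text{to manifest a power is to cause (bridge)} \\
&(5)\quad \therefore\ \exists x\,(Sx \land Cx) && \text{some substances are causes, from (4) and (A)}
\end{align*}
```

On this rendering, premises (1)-(3) secure only the possibility of substance causation; it is (4), together with the bridge principle, that delivers the actuality claim in the conclusion.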

The only premise I can imagine being rejected is (1). On the face of it, rejecting it might sound absurd, as if it means that nothing has the power to do anything. But the individual who rejects causal powers will have alternative explanations for why things do what they do. A not uncommon answer is that we need only appeal to the laws of nature to understand and explain how events unfold. This is problematic. On one hand, if you take the laws of nature to be mere descriptions of regularity, then the laws themselves don’t do any explanatory work. On the other hand, if you take the laws of nature to be something that dictates and enforces the activity of things from the outside, then you’ve committed yourself to a form of Platonism, and naturalism must be rejected. Finally, you can take the laws themselves to be the causal implications of the intrinsic natures that substances possess, and in that case we’re back to powers theory.

4. Metaphysical Irreducibility

One might object to my earlier claim that agent causation is fairly commonplace on the grounds that in reality there are no agents, merely matter in motion or atoms in the void. This is where the possible reducibility of macro-level objects becomes an issue. A largely reductionist metaphysics will hold that much of what we consider ordinary objects are nothing over and above their parts. What they are is wholly reducible to a set of fundamental constituents and relations. Another way to think about this is that if we were to take an inventory of everything that really exists, much of what we take to exist would turn out not to. At its most extreme, the reductionist thesis holds that there’s nothing over and above quarks, bosons, or whatever a completed theoretical physics takes to be fundamental. Ordinary objects will be described as simples (indivisible physical objects) arranged in a particular way. So to be a cat is just to be simples arranged cat-wise.

If one were both a reductionist and a substance causation theorist, then one could rightfully reject agent causation, because there would be no agents in the relevant sense. In order for agent causation to obtain, the agent has to be a unique substance that’s not merely the sum of its parts. If agent causation were true, then agents would be irreducible substances whose persistence conditions are picked out by their higher-level causal powers (e.g. purposiveness, narrativity, and self-reflection). That is, we are unique irreducible substances because we possess capacities that aren’t exemplified by our constituents. When the constituents come together in the right way, the result is not merely a collection of them: a unique form is exemplified that puts constraints on the activity of its lower-level constituents, which is an example of top-down causation if anything is. On reductionist substance causation, by contrast, the lower-level substances do all of the causal work.

A possible strategy for motivating a non-reductionist account mirrors the demystifying of agent causation. That is, if irreducible objects aren’t special cases essentially restricted to persons, then there’s less reason to be suspicious of irreducibility in general. This does not mean that I think all ordinary objects are irreducible substances. I take objects of artifice to be clearly reducible to their chemical constituents. So houses, cars, computers, tools, etc. are reducible to their constituent parts. Edward Feser offers a clear description of the distinction I have in mind:

“The basic idea is that a natural object is one whose characteristic behavior – the ways in which it manifests either stability or changes of various sorts – derives from something intrinsic to it. A nonnatural object is one which does not have such an intrinsic principle of its characteristic behavior; only the natural objects out of which it is made have such a principle. We can illustrate the distinction with a simple example. A liana vine – the kind of vine Tarzan likes to swing on – is a natural object. A hammock that Tarzan might construct from living liana vines is a kind of artifact, and not a natural object. The parts of the liana vine have an inherent tendency to function together to allow the liana to exhibit the growth patterns it does, to take in water and nutrients, and so forth. By contrast, the parts of the hammock – the liana vines themselves – have no inherent tendency to function together as a hammock. Rather, they must be arranged by Tarzan to do so, and left to their own devices – that is to say, without pruning, occasional rearrangement, and the like – they will tend to grow the way they otherwise would have had Tarzan not interfered with them, including in ways that will impede their performance as a hammock. Their natural tendency is to be liana-like and not hammock-like; the hammock-like function they perform after Tarzan ties them together is extrinsic or imposed from outside, while the liana-like functions are intrinsic to them” (Scholastic Metaphysics, pg. 182)

I don’t commit myself to the idea that all natural particulars are irreducible or simple (without parts), or that only objects of human construction are reducible. For example, a rock made of limestone would reduce to a collection of calcium carbonate, which may or may not have an irreducible intrinsic nature. The correct account of the reduction/non-reduction relation is a severely under-explored issue in metaphysics. My hope here is merely that this example is useful in communicating what an irreducible substance is supposed to be.

5. Final Arguments

Before summing up the arguments, it’ll be useful to explain what sort of advantage an agent causal account of freedom has over an event causal one. It stems from what’s called the “disappearing agent” objection to event causal libertarianism. The idea is that on the event causal analysis, the agent-involving events (the particular mental states, preferences, reasons, etc.) that non-deterministically cause the decision don’t actually settle which option is selected. The leeway condition is satisfied, in that we could roll back the event and you could have done otherwise, but you yourself don’t actually choose the outcome. Your agent-involving states merely constrain which options are possible for you; where it goes from there is a matter of luck. This can be thought of as claiming that an event causal view doesn’t satisfy the sourcehood condition for free will. The events, which do the work, merely flow through you; you don’t really settle which option occurs. Agent causal theories have the advantage of saying that you certainly do play an explanatory role.

With this work behind us, we can abridge the essential story into a few brief arguments.

1. Substance Causation is consistent with Naturalism.

2. The metaphysical irreducibility of certain substances (persons among them) is consistent with Naturalism.

3. If (1 & 2), then agent-causation is consistent with Naturalism.

4. Therefore, Agent Causation is consistent with Naturalism.

I think 1 and 2 are fairly straightforward, in that nothing about my description of them implies that such substances transcend space and time, and 3 isn’t much more than the definition of agent causation.

Next,

1. The leeway condition is consistent with Naturalism (i.e. Nothing about naturalism implies that all causation is deterministic or that all causally relevant neural sequences are deterministic).

2. The sourcehood condition is consistent with Naturalism (since the most demanding form of satisfying it (agent causation) is consistent with Naturalism).

3. The intelligibility condition is consistent with Naturalism (I can’t say much more than I’d be completely puzzled if someone denied this, beyond maybe saying that all of our reasons for action are post hoc confabulations).

4. If (1,2 & 3), then Free Will is consistent with Naturalism (A priori true).

5. Therefore, Free Will is consistent with Naturalism.

Finally,

1. Substance causation is a plausible theory of causation.

2. The irreducibility of certain biological substances is not implausible.

3. Indeterminism is plausible.

4. If (1,2, & 3), then free will is plausible.

5. We’re justified in holding independently plausible positions if they cohere with our background beliefs*.

6. Therefore, at least some people are justified in believing in free will.

Plausible: A position is plausible just in case it is coherent, is supported by sophisticated arguments or evidence (ones that are aware of and address the relevant issues and objections that might undermine it), and is free of any obvious insurmountable objections.

*Epistemic axiom: We’re justified in believing what seems to be true unless we have sufficient reason to think it’s false.

*Phenomenological claim: Some of our decisions seem to be free, to at least some of us.

Without question, this is the weakest of the arguments I’ve offered. Plausibility is context dependent, which means many will find it unconvincing. Some of the most obvious candidates are committed reductionists, proponents of scientism, eliminativists, determinists, and event causal theorists. But they are not my target audience. My hope is that fence sitters, or anyone who’s generally skeptical yet open to free will and agent causation, might be persuaded to take the position seriously. No one should be moved to believe in free will merely on the basis of what I’ve offered here, but it might be sufficient to motivate some to re-assess their position.

Works Cited:

Balaguer, Mark. Free Will as an Open Scientific Problem. MIT Press, 2010.

Balaguer, Mark. “A Coherent, Naturalistic, and Plausible Formulation of Libertarian Free Will.” Noûs, Vol. 38, No. 3 (Sep. 2004), pp. 379-406.

Feser, Edward. Scholastic Metaphysics: A Contemporary Introduction. Editiones Scholasticae, 2014.

Franklin, Christopher Evan. “Agent-Causation, Explanation, and Akrasia: A Reply to Levy’s Hard Luck.” Criminal Law and Philosophy 9:4 (2015): 753-770.

Franklin, Christopher Evan. “The Scientific Plausibility of Libertarianism.” In Free Will and Moral Responsibility, eds. Ishtiyaque Haji and Justin Caouette. Newcastle upon Tyne: Cambridge Scholars Publishing (2013): 123-141.

O’Connor, Timothy. “Free Will and Metaphysics.” In Libertarian Free Will, ed. David Palmer. Oxford University Press, 2014.

Tse, Peter Ulric. The Neural Basis of Free Will: Criterial Causation. MIT Press, 2013.

Weber, Marcel. “Indeterminism in Neurobiology.” Philosophy of Science, Vol. 72, No. 5 (December 2005), pp. 663-674.

Whittle, Ann. “A Defence of Substance Causation.” Journal of the American Philosophical Association 2(1) (2016): 1-20. DOI: 10.1017/apa.2016.1.

Why Veganism isn't Obligatory

I’ve written a bit about animal ethics on this blog, and most of it has been about animal rights. The sorts of rights that seem most plausible to ascribe to animals are negative rights, such as the right not to be unjustly harmed. But if animals have rights, they probably have positive rights as well. For example, if you’re cruising around on your new boat with your dog, and you see that your dog has fallen overboard, it seems like your dog has the right to be rescued by you, assuming that you’re capable of doing so without endangering yourself or others. You’re obligated to rescue your dog, assuming that he has rights that can generate obligations for you. So, animals can have both positive and negative rights.

An interesting question that arises when we consider animal rights is whether they generate obligations for us to become vegans. I take veganism to be a set of dietary habits that exclude almost all animal products. On my view, vegans can consume animal products in very specific situations. For example, if a vegan comes across a deer that has just been killed by a car, it is permissible for her to consume the deer and use its parts for whatever purposes she sees fit. However, circumstances like the dead deer are very rare, and it’s doubtful that most vegans could survive off of those sorts of animal products, so most vegans will not consume any animal products. Vegans who seek out opportunities to consume animal products like roadkill are called “freegans.” Other instances of vegan-friendly animal products are things found in the trash and things that have been stolen.

Most vegans would agree that purchasing chickens for your backyard and consuming the eggs they produce is impermissible. If they think animals have rights, then having backyard chickens might seem akin to owning slaves. In both instances, beings with rights are considered the property of people. So, owning chickens is a form of slavery according to this view. I want to challenge this view by using some arguments developed in a recent paper called, “In Defense of Backyard Chickens” by Bob Fischer and Josh Milburn.

Imagine that a person, call her Alice, has studied chicken cognition and psychology such that she understands the best way to house chickens according to their needs. She builds the right sort of housing for chickens, she purchases high-quality, nutritious feed, and she makes sure they are safe from predators and the elements. Alice really cares about animal welfare, so her project is done in the interests of the chickens she plans to buy. She sees herself as giving the chickens a life they deserve in an environment best suited to their welfare. She then goes and buys some chickens and lets them loose in their new home. She tends to their needs and makes sure they’re comfortable. She then collects the eggs they lay and consumes them in various ways. I don’t think Alice has done anything wrong, but some vegans may disagree.

To some vegans, it may seem like Alice has built slave quarters for her new egg-producing slaves. However, it seems to me that Alice has liberated the chickens in a way that’s analogous to an abolitionist buying the freedom of an enslaved human. If it’s permissible to buy the freedom of a slave by paying into an unjust institution like the slave trade, then it seems like the same holds for buying the freedom of chickens. But, you may object, the chickens aren’t free! They’re still enclosed in Alice’s backyard, unable to leave. If you bought the freedom of a human and then put them in a backyard enclosure, we could hardly praise you as a liberator! Well, in the case of humans it’s wrong to force them into backyard enclosures, because doing so makes humans worse off. Humans aren’t the sorts of beings that need restrictions on their movement to guarantee their well-being; if anything, humans need free movement to have a high level of well-being. One of the reasons human slavery is so bad is its restriction on the freedom of movement. Humans enjoy being able to go where they want; preventing that is to harm them.

When it comes to chickens, restricting their movement is actually in their interests. If we bought chickens and then just let them loose, they would probably die pretty quickly. Depending on where you release them and what time of the year it is, they could die of exposure or from predation. They could also walk into traffic and die, or they might end up starving because they won’t be able to find adequate nutrition. So, it seems like chicken interests don’t include complete freedom of movement, but rather some level of confinement for protection. Obviously not the level of confinement found in factory farms or even smaller commercial farms, but something that keeps predators and the elements out. So, the analogy between confining chickens and confining humans doesn’t hold, because it is in the interests of chickens and not humans to be confined to some extent.

One objection that might arise is that by buying chickens, Alice feeds into an unjust system that will only be perpetuated by her actions. Fair enough, I guess, but it seems like the act of purchasing a few chickens is causally impotent with respect to furthering the unjust system of selling chickens for profit. If Alice didn’t buy those chickens, I doubt the store would feel it, and the industry at large definitely wouldn’t. The chickens probably would’ve been bought by somebody else anyway, and they probably wouldn’t have been treated nearly as well as if Alice had bought them. But leaving that aside, this seems like a consequentialist objection. However, we’re in the land of the deontic with all of this rights talk, and it seems like chickens have a right to be rescued from their circumstances. So even if Alice somehow feeds into an unjust system by buying her chickens, that badness is outweighed or overridden by the right to rescue that those chickens have. If anything, Alice has an obligation to buy those chickens, given her ability to provide them with the lives to which they are entitled.

Another objection is that by purchasing chickens, Alice is treating them as property. Even if that’s true, it still seems better for the chickens that they are treated like property by Alice than by somebody less interested in their welfare. The chickens may have a right not to be owned, and perhaps Alice’s relationship to them is one of an owner, but it may still be in their interests to be owned by Alice. Their right not to be owned is outweighed by the potential harm they will experience if they’re bought by anybody else. Alice is their best bet. However, it is unclear that Alice is treating them as property. Another way of looking at this is Alice is buying the freedom of the chickens. They will no longer be the property of others. Instead, they get to live out their lives in the best conditions chickens can have. Now, you might respond by saying that living in Alice’s backyard isn’t true freedom because the chickens’ movement is restricted, but I already dealt with that objection above.

One last objection is that by obtaining and consuming eggs, Alice is illegitimately benefiting from something she’s allowed to do. This objection concedes that Alice can keep backyard chickens as long as she tends to their well-being sufficiently. But, the objection goes, Alice is illegitimately benefiting from her chickens. Perhaps the chickens also have a right to raise families, and by consuming their eggs Alice is depriving them of families. However, Alice could allow the chickens to procreate within limits. Obviously they cannot overpopulate the land they inhabit, because that would cause an overall decrease in well-being. In light of these considerations, Alice cannot allow every egg to result in a new chicken, so it seems like she can remove excess eggs from the chickens’ homes.

Maybe the chickens have property rights over their eggs. By taking the eggs, Alice is effectively stealing from her chickens. It isn’t clear to me that animals have property rights, but maybe they do. Even if the chickens own their eggs, it seems like Alice can collect some of them as a form of rent. There is, then, mutual benefit between Alice and the chickens. Alice gives the chickens a place to live and food, and in return Alice gets some of their eggs. The relationship between Alice and her chickens is closer to people renting a place to live and their landlord than it is to a thief and her victims, or squatters and a landowner.

Could the eggs be used for something more noble than as Alice’s food? Maybe, but it still seems permissible for Alice to eat the eggs. Sure, she could donate them or use them to feed other animals, but it seems like a stretch to say that Alice has an obligation not to consume the eggs and instead give them away. Even if it’s better that she gives them away, she’s still allowed to consume them. There are actions that are permissible even if they aren’t optimal, and Alice consuming the eggs seems to qualify.

If I’m right, and Alice is allowed to consume the eggs she collects, then Alice is not obligated to be a vegan. Eggs are animal products and pretty much every vegan would say that you shouldn’t eat them. So, it seems like veganism is not obligatory. Consuming animal products can sometimes be permissible if they’re obtained in the right way.

This post has been heavily influenced by a recent paper by Bob Fischer and Josh Milburn. Their paper articulated a lot of the thoughts I’ve had about veganism and moral obligations better than I could. Pretty much all of the arguments, objections, and responses draw from their paper. I wrote this post to summarize some of their arguments, and to draw attention to their paper. Bob Fischer is my favorite philosopher working on animal ethics. I recommend all of his stuff.

Check out their paper here.
Check out Bob Fischer’s work here.

Śrīharṣa’s Master Argument Against Difference

The Advaita Vedānta tradition is one of the most popular and influential Indian philosophical systems. The best translation of the Sanskrit word advaita is “non-dual.” The thesis of Advaita is that reality is at bottom non-dual, that is, devoid of multiplicity. Advaita recognizes that our everyday experience presents us with a plurality of objects, but maintains that the belief that plurality and difference are fundamental features of the world is mistaken. The ultimate nature of reality is undifferentiated Being. Not being something, but Being itself – Pure Being. The phenomenal world, in which we experience Being as separate beings, is not ultimately real. It is constructed by avidya – ignorance of the true nature of reality. We are beings alienated from Being, and true liberation lies in ending this alienation.

One of the reasons offered by Advaitins for accepting these claims is that they form the most plausible and coherent interpretation of the Upaniṣads – scriptures accepted as a reliable source of knowledge. But this will hardly convince someone who does not already acknowledge the authority of the Upaniṣads. Here, the strategy of Advaita philosophers has typically been to go on the offensive and argue that the very notion of “difference” or “separateness” is in some sense conceptually incoherent. The arguments for this claim were first formally compiled by the eighth-century philosopher Maṇḍana Miśra. Subsequent philosophers in the Advaita tradition further developed, defended and extended these arguments. In this essay, I will briefly go over the master argument against difference presented by the twelfth-century philosopher Śrīharṣa in his magnum opus, the Khaṇḍanakhaṇḍakhādya (“The Sweets of Refutation”).

Śrīharṣa begins his inquiry by asking what “difference” really is. He identifies four possible answers to this question:

  1. Difference is the intrinsic nature of objects.
  2. Difference consists in the presence of distinct properties in objects.
  3. Difference consists in the mutual non-existence of properties in objects.
  4. Difference is a special property of objects.

Śrīharṣa considers each option in turn, and finds them all untenable.

The claim that difference is the intrinsic nature of objects is rejected because difference is necessarily relational. To state that bare difference is the nature of X is to utter something meaningless. At best, we can say that difference-from-Y is the intrinsic nature of X. However, this raises another problem. To describe the intrinsic nature of X is to describe what X is in and of itself, independent of anything else. In contrast, the very notion of “difference-from-Y” indicates a dependence on Y. We have arrived at a contradiction: if X’s intrinsic nature is parasitic on the nature of Y, then X doesn’t really have an intrinsic nature.

Śrīharṣa offers a subsidiary argument to drive home the implausibility of the view that difference is the intrinsic nature of an object. Consider a blue object and a yellow object. An object that is blue by its very nature does not depend on the yellowness of the other object. Even if all the yellow objects in the world were to disappear, the blue object would still be blue. But this could not be the case if difference-from-yellow-objects were the intrinsic nature of the blue object.

According to the second definition of difference, X is different from Y if distinct properties are present in X and Y. X and Y can be any two objects, but we may use Śrīharṣa’s example: a pot is different from a cloth because the property potness is present in the pot, while the property clothness is present in the cloth. But this raises an obvious question: what makes potness different from clothness? The answer cannot be (1) – that is, that difference is the very nature of potness and clothness – because that view has already been refuted. If we answered the question with (2), then we would end up saying that what makes potness different from clothness is that potness itself possesses a property that clothness does not. We would have to maintain that potness-ness is present in potness, and clothness-ness is present in clothness. Even if we ignore the oddness of properties being present in other properties, we can raise another question: what makes potness-ness different from clothness-ness? This series of questions could go on indefinitely, generating an infinite regress. Hence, this option is unsatisfactory.
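
The shape of the regress can be displayed schematically. The notation here is mine, not Śrīharṣa’s: write D(x, y) for “x is different from y,” and read each line as what the second analysis requires in order to ground the line above it:

```latex
\begin{align*}
D(\text{pot},\ \text{cloth}) \;&\Leftarrow\; D(\text{potness},\ \text{clothness}) \\
D(\text{potness},\ \text{clothness}) \;&\Leftarrow\; D(\text{potness-ness},\ \text{clothness-ness}) \\
D(\text{potness-ness},\ \text{clothness-ness}) \;&\Leftarrow\; D(\text{potness-ness-ness},\ \text{clothness-ness-ness}) \\
&\;\;\vdots
\end{align*}
```

Each application of the analysis leaves a further difference-fact to be grounded at the next level up, so the analysis never bottoms out.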

Śrīharṣa considers the possibility that difference consists in the mutual non-existence of properties in objects. According to this view, what makes a pot different from a cloth is the absence of potness in the cloth, and the absence of clothness in the pot. But much like before, this raises the question of what makes potness different from clothness. It cannot be (1) or (2), because they have already been refuted. If we bring up (3) here, we would have to say that what makes potness different from clothness is the absence of potness-ness in clothness, and the absence of clothness-ness in potness. At this point, much like before, we could ask what makes potness-ness different from clothness-ness. Once again, we are left with an infinite regress.

This brings us to the final option: that difference is a special property of an object. According to this view, difference-from-Y is itself an attribute of X. But if difference-from-Y is an attribute of X, then difference-from-Y is not X itself, but something different from X. This entitles us to ask what makes the attribute difference-from-Y different from X. It cannot be (1), (2) or (3), so it must be (4). This would mean that it must be another attribute that makes difference-from-Y different from X. But then this attribute itself would be different from both X and difference-from-Y, which simply raises the same question. Once more, we see an infinite regress looming.

Having rejected all four possibilities, Śrīharṣa concludes that the very notion of difference is incoherent, and so it cannot be a true feature of the world. A typical reaction to Śrīharṣa’s arguments is that there must be something wrong with them – indeed, something obviously wrong with them. But it isn’t necessarily straightforward to identify what exactly it is. One could question whether Śrīharṣa really has considered all the possible options, whether some of these options really lead to an infinite regress, and finally, whether an infinite regress is something to be worried about. Philosophers from rival traditions adopted all these approaches. Śrīharṣa and his successors anticipated and responded to a number of these objections. They also modified and extended the arguments against difference to more specific cases, to show that differentiating cause and effect, moments in time, and subject and object, were all impossible. For a thorough examination of Śrīharṣa’s critique of difference, Phyllis Granoff’s Philosophy and Argument in Late Vedānta is a good place to start.  

Why Verificationism isn't Self-Refuting

In the early-to-mid twentieth century, there was a philosophical movement, originating in Austria, that aimed to do away with metaphysics. The movement has come to be called Logical Positivism or Logical Empiricism, and it is widely seen as a discredited research program in philosophy (among other fields). One of the often-repeated reasons that Logical Empiricism is untenable is that the criterion the positivists employed to demarcate the meaningful from the meaningless is, when applied to itself, meaningless, and therefore refutes itself. In this post, I aim to show that the positivists’ criterion does not result in self-refutation.

Doing away with metaphysics is a rather ambiguous aim. One can take it to mean that we ought to rid universities of metaphysicians, encourage people to cease writing and publishing books and papers on the topic, and adjust our natural language such that it does not commit us to metaphysical claims. Another method of doing away with metaphysics is by discrediting it as an area of study. Logical Positivists saw the former interpretation of their aim as an eventual outgrowth of the latter interpretation. The positivists generally took their immediate goal to be discrediting metaphysics as a field of study, and probably hoped that the latter goal of removing metaphysics from the academy would follow.

Discrediting metaphysics can be a difficult task. The positivists’ strategy was to target the language used in expressing metaphysical theses. If the language that metaphysicians employed was only apparently meaningful, while underneath the surface it was cognitively meaningless, then the language of metaphysics would consist of meaningless utterances. Cognitive meaning consists in a statement’s being truth-apt, or having truth conditions. If a statement isn’t truth-apt, then it is cognitively meaningless, though it can serve other linguistic functions besides assertion (e.g. ordering somebody to do something isn’t truth-apt, but it has a linguistic function).

If metaphysics is a discourse that purports to be in the business of assertion, yet it consists entirely of cognitively meaningless statements, then it is a failure as a field of study. But how did the positivists aim to demonstrate that metaphysics is a cognitively meaningless enterprise? The answer is by providing a criterion to demarcate cognitively meaningful statements from cognitively meaningless statements.

The positivists were enamored with Hume’s fork, which is the distinction between relations of ideas and matters of fact, or, in Kant’s terminology, the analytic and the synthetic. The distinction was applied to all cognitively meaningful statements. So, for any cognitively meaningful statement, it is necessarily the case that it is either analytic or synthetic (but not both). The positivists took the criterion of analyticity to be a statement’s negation entailing a contradiction. Anything whose negation does not entail a contradiction would be synthetic. Analytic statements, for the positivists, were not about extra-linguistic reality, but instead were about concepts and definitions (and maybe rules). Any claim about extra-linguistic reality was synthetic, and any synthetic claim was about extra-linguistic reality.

Synthetic statements were taken to be cognitively meaningful if and only if they could be empirically confirmed. The only other cognitively meaningful statements, for the positivists, were analytic statements and contradictions. This is an informal statement of the verificationist criterion of meaningfulness. Verificationism was how the positivists discredited metaphysics as a cognitively meaningless discipline: if metaphysics consists of synthetic statements that cannot be empirically confirmed (claims about the nature of possible worlds, for example), then it consists of cognitively meaningless statements. In short, the positivists gave a non-cognitivist interpretation of the language used in metaphysics.

Conventional wisdom says that verificationism, when applied to itself, results in self-refutation, which would make the positivists' project an utter failure. Why? The standard reasoning runs: the criterion is either analytic or synthetic; it doesn't appear to be analytic, so it must be synthetic; but if it is synthetic, it must be empirically confirmable, and it isn't. So verificationism is cognitively meaningless, and thus in the same boat as metaphysics.

Fortunately for the positivists, the argument above fails. First, there are ways to interpret verificationism on which it is subject to empirical confirmation. Verificationism could express a thesis that aims to capture or explicate the ordinary concept of meaning (Surovell 2013). If it aims to capture the ordinary concept, then it could be confirmed by studying how users of the concept MEANING employ it in discourse: if they employ it the way the verificationist criterion says, the criterion is confirmed. On that understanding, verificationism is cognitively meaningful. If instead it aims to explicate the ordinary concept, then it is allowed more leeway to deviate from standard usage of the ordinary concept in light of its advantages within a comprehensive theory (Surovell 2013). Construed as an explication, verificationism would be subject to empirical confirmation through the confirmation of the overall theory to which it contributes.

Secondly, if one takes the position traditionally attributed to Carnap, then one can say that the verificationist criterion is not internal to a language, but external. It is a recommendation to use language in a particular way that admits of only empirically confirmable, analytic, and contradictory statements. Recommendations are not truth-apt, yet they serve important linguistic functions. So, verificationism may be construed non-cognitively, as a recommendation motivated by pragmatic reasons. There’s nothing self-refuting about that.  

Lastly, one could take verificationism to be internal to a language, in Carnap's sense, and analytic. On this construal, the criterion would not aim to capture the ordinary notion of meaning; it would be a replacement for that notion. Carnap appears to endorse this reading in the following passage:

“It would be advisable to avoid the terms ‘meaningful’ and ‘meaningless’ in this and in similar discussions . . . and to replace them with an expression of the form “a . . . sentence of L”; expressions of this form will then refer to a specified language and will contain at the place ‘. . .’ an adjective which indicates the methodological character of the sentence, e.g. whether or not that sentence (and its negation) is verifiable or completely or incompletely confirmable or completely or incompletely testable and the like, according to what is intended by ‘meaningful’” (Carnap 1936).

Rather than documenting the way ordinary users of language deploy the concept MEANING, Carnap appears to be proposing a replacement for the ordinary concept of meaning. The statement of verificationism is internal to the language in which expressions of meaning are replaced with “a . . . sentence of L” where ‘. . .’ is an adjective that indicates whether or not the sentence is verifiable, and thus is analytic in that language. The motivation for adopting verificationism thus construed would then be dependent on the theoretical and pragmatic advantages of using that language.

So, verificationism can be construed as synthetic, analytic, or cognitively meaningless. It could be considered a recommendation to use language in a certain way, and that recommendation is then motivated by pragmatic reasons (or other reasons), which makes it cognitively meaningless but linguistically useful, which does not result in self-refutation. Or, it could be considered a conventional definition aimed to capture or explicate the ordinary concept of meaning. It would then be verifiable because it could be confirmed by an empirical investigation into the way people use the ordinary notion of meaning, or by its overall theoretical merits. Lastly, it could be internal to a language, and thus analytic, but not an attempt at capturing the ordinary notion of meaning. Instead, it would be a replacement that served a particular function within a particular language that is itself chosen for pragmatic (non-cognitive) reasons. In any of these construals, verificationism is not self-refuting.

Works Cited:

Carnap, Rudolf. "Testability and Meaning - Continued." Philosophy of Science. 1936. Web.

Surovell, Jonathan. "Carnap’s Response to the Charge that Verificationism is Self-Undermining." 2013. Web.

 

An Introduction to Morality and Emotions

When doing moral theory, the question of emotion will inevitably arise. Some theorists think that emotions should not play any role because they are antithetical to reliable moral reasoning. Others doubt that emotions are a wholly distorting influence. In this post, I’m going to lay out some ways in which emotions may feature in our theorizing about morality.

A popular view of emotions takes them to be intentional states that present their objects in an evaluative light. To be happy about graduating from college, for instance, is to have that state of affairs presented to you in a way that carries certain positive feelings towards it. This view of emotion becomes relevant to moral theorizing when the object of an emotion is a moral state of affairs: your emotions are moralized, in this sense, when they are about moral states of affairs.

Another way in which emotions are relevant to morality is if they provide us access to moral facts. If emotions are our means of epistemic contact with moral reality, then emotions are epistemically relevant to morality. Emotions may then be ways of representing states of affairs with a certain sensitivity to morally salient features of what’s being represented. One simplistic possibility is that our emotional reaction to the idea of pushing a man off a bridge to stop a train that is headed for five people tied to the track provides us with epistemic access to the separateness of persons, which explains why it’s wrong to push the man to his death.

However, there may be a flip side to the epistemic view of emotions: emotions could also distort our sensitivity to the morally salient features of states of affairs. Peter Singer has defended a view along these lines, arguing that deontological intuitions are subject to distorting influences rooted in our evolutionary development.

Emotions can also be what motivates us to act morally. It could be that we need emotions to move us to act morally, which would make emotions necessary for moral action. On this view, a robot with a complete set of true moral beliefs would be unmoved to act on them if it were incapable of experiencing emotions. Mere belief is insufficient on this account of moral emotions.

We may also be subject to evaluation based on the emotions we experience. There are clearly good and bad ways to behave at a funeral. If somebody began laughing uncontrollably, we would probably consider that to be inappropriate, whereas we would be tolerant of grieving in the form of loud crying. A similar view is defended by Justin D’Arms and Daniel Jacobson.

One last way that emotions can be relevant to moral theorizing is if they are integral to our moral development. Perhaps eliciting certain emotions is a necessary means of moral education. Making developing moral agents experience things like guilt over wrongdoing by pointing out how they’ve let a loved one down could be formative for them. In this sense, emotions are part of the development of moral agents.

There are probably other ways in which emotions are morally relevant that I’ve missed. If you are aware of any more, let me know in the comments section below.

A Problem for the New Consequentialism

In a previous post, I outlined a non-deontic form of consequentialism that was supposed to avoid what I called the extension problem. The extension problem plagues deontic consequentialism, which is the view that the rightness, wrongness, permissibility, and impermissibility of actions are determined by their consequences. So, a simple hedonistic act utilitarian will say that there is one categorically binding duty, and that is to maximize pleasure when we act. But such a view suffers from intuitively compelling counterexamples. So it seems like hedonistic act utilitarianism gets the extension of our deontic concepts wrong.

Non-deontic consequentialism is designed to avoid the extension problem because it defers to how those concepts are applied by a society at a given time. By doing so, the theory allows the extensions of our deontic concepts to pick out what our society takes them to be, which seems to preserve our intuitions about particular cases, like the drifter being killed by a surgeon for his organs. Hedonistic act utilitarianism requires that, if the surgeon is in an epistemic situation where he can rule out negative consequences, and he knows that he can use the drifter's organs to save five patients, then he is duty-bound to kill the drifter and harvest the organs. Non-deontic consequentialism avoids this because your typical person, who is not a thoroughly committed act utilitarian, would not agree that the extension of DUTY covers the surgeon's organ-harvesting endeavor.

An alternative that avoids the extension problem is scalar utilitarianism, which does without deontic concepts like RIGHT and WRONG. Instead, we judge actions as better or worse than the available alternatives. The problem with this view is that it just seems obvious that it is wrong to torture puppies for fun. A scalar utilitarian cannot give an adequate account of what makes that act wrong, so she must explain why it seems so obvious to say that torturing puppies is wrong, even though, on her view, that judgment is false.

Setting aside both of these forms of consequentialism, I want to discuss the non-deontic consequentialism I outlined in my other post. On the view I described, the rightness and wrongness of actions, along with their other deontic properties, are a function of the social conventions that obtain at a given time in a given society. The consequentialism comes in at the level of critiquing and improving those social conventions.

Moral progress occurs when we adopt social conventions that are better by consequentialist standards. So, for instance, it used to be a social convention in the United States that we could have property rights over other human beings, and transfer those rights for currency. Those conventions are no longer in place in the United States, and at the time they were, they could have been critiqued by consequentialist standards. Those conventions were not better than available alternatives at the time, so it would have been better not to have the institution of chattel slavery. But these facts about betterness do not determine what is right or wrong. Rather, they should guide efforts to improve social conventions, and thereby change the extensions of our deontic concepts.

This seems all well and good, but I am a bit worried. This view entails that social conventions have normative force, no matter what. So, just because something is a social convention, we thereby have at least some moral reason to abide by it. Take slavery again; such an institution was once enshrined in many social conventions. Does it follow that at the time, everybody had at least some moral reason to abide by the conventions that said we ought to return escaped slaves to their so-called owners? It seems to me that slavery is and always was wrong. There was never a time at which it was right to own another human being. I think that the basis of my concern is that deontic judgments, especially when applied to important things like slavery, are not indexed to times and places. Just because a human being is sold in a marketplace in 1790 Virginia does not change the deontic status of the situation. What exactly is the morally relevant difference between that time period and today? Why is it wrong now to sell another human being but it was not in 1790s Virginia?

One potential response to my worries is to point out that I’m making these judgments from a particular time period when the extension of our deontic concepts rules out slavery being permissible. So, perhaps I find the entailment of this theory appalling because my intuitions are shaped by the extension of the deontic concepts I use. Since 1790s Virginia, we have undergone moral progress, and now it is wrong to own slaves because of the shift in social conventions. It could even be that according to our deontic concepts’ extensions now, it was wrong in the 1790s to buy and sell slaves.

I think these considerations make my concerns less worrisome, but a residual anxiety remains. It still seems counterintuitive to say that, had we grown up in 1790s Virginia, our claims about the rightness and wrongness of slavery would be flipped. We would have an inverted moral spectrum when it comes to deontic judgments about slavery, and that is what I find counterintuitive. The theory was developed explicitly to address the extension problem: deontic consequentialists seem to get the extensions of our deontic concepts wrong. The reason I think they get those extensions wrong is that their theories entail counterintuitive results; they end up biting a lot of bullets, such as the organ-harvesting surgeon. But if non-deontic consequentialism also generates counterintuitive entailments, like slavery being permissible for people in 1790s Virginia at that time, then is it any better than its deontic consequentialist competitors?




 

Buddhist Apoha Nominalism

The Problem of Universals is one of the oldest subjects of debate in Indian philosophy. Realists about universals believe that universals exist in addition to concrete particulars, while nominalists deny the existence of universals. The Nyāya and Mīmāṃsā schools were vocal defenders of realism. Nyāya philosophers believed in universals for a number of reasons:

  • Universals explain how different objects share common characteristics. Cow A and Cow B differ from each other in various ways, and yet we recognize that they’re both cows. The Nyāya explanation for this is that what Cow A and Cow B have in common is the universal “cowness” that inheres in both.
  • Universals fix the meanings of words. The word “cow” doesn’t just refer to a particular cow, but cows in general. How can a word refer to many different objects at once? The Nyāya solution is that the word “cow” refers to a particular qualified by the universal cowness, which is present in all individual cows.
  • Universals are a solution to the Problem of Induction, first raised by the Cārvāka empiricists. Nyāya philosophers viewed the laws of nature as relations between universals. Our knowledge of these universals and the relations between them justifies inductive generalizations, and consequently, inferences such as the presence of fire from the presence of smoke.

Buddhists were the best-known nominalists in the Indian philosophical tradition. The Buddhist hostility towards universals is perhaps best expressed by Paṇḍita Aśoka (9th century): “One can clearly see five fingers in one’s own hand. One who commits himself to a sixth general entity fingerhood, side by side with the five fingers, might as well postulate horns on top of his head.”¹

In this post, I will briefly go over how Buddhists responded to the first two reasons for believing in universals provided by the Nyāya school. The Buddhist defense of induction will have to be the subject of a separate essay.

The form of nominalism Buddhists advocated is called apoha, the Sanskrit word for “exclusion.” The first precise statement of apoha nominalism can be found in the works of Dignāga (6th century). Dignāga claimed that the word “cow” simply means “not non-cow.” Since there is obviously no universal “not cow-ness” present in every object that is not a cow, this semantic view doesn’t commit us to the existence of universals. Every cow is a unique particular distinct from all other objects. We simply overlook the mutual differences between cows and group them together based on how they’re different from non-cows.  Thus, it’s not because cows share something in common that we call them by the same name. Rather, we think all cows share something in common because we have learned to call them by the same name.

There are some objections that immediately spring to mind, and Nyāya and Mīmāṃsā philosophers brought them up repeatedly in their criticisms of apoha nominalism. First, how does saying that “cow” means “not non-cow” provide a solution to the problem of universals? “Not non-cow” involves a double negation, so to say “cow” means “not non-cow” is just to say “cow” means “cow.” This leads us right back to where we started, and just as before, it seems that we need to posit a universal cowness. Second, how can we focus on cows’ common differences from non-cows unless we already know how to tell what a cow is in the first place? Once again, we seem to have gone in a circle, and apoha seems to presuppose precisely what it was supposed to explain.

Dignāga’s successors responded to the first objection by drawing distinctions between different kinds of negation. Consider the statement: “This is not impolite.” Now, at first glance it might seem like this just translates to “This is polite,” because of the double negation involved in “not impolite.” But this is not necessarily the case. The statement could be about something to which the very category of politeness does not apply, in which case “not impolite” is distinct from “polite.” Thus, “not non-cow” can mean something genuinely different from “cow.”
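The point that double negation need not collapse can be made vivid with a toy three-valued model (my own illustration, not a reconstruction of any Buddhist formalism): let a thing be polite, impolite, or outside the category of politeness altogether.

```python
# Toy three-valued model: a thing is "polite", "impolite", or None,
# where None means the category of politeness doesn't apply at all.
# My own illustration of how "not non-X" can differ from "X".

def is_polite(thing):
    return thing == "polite"

def is_not_impolite(thing):
    # "Not impolite" excludes only the impolite things; it covers both
    # polite things and things to which the category doesn't apply.
    return thing != "impolite"

rock = None  # a rock is neither polite nor impolite

assert not is_polite(rock)    # "polite" is false of the rock...
assert is_not_impolite(rock)  # ...yet "not impolite" is true of it
```

Analogously, "not non-cow" can pick out the cow particulars by excluding everything else, without asserting a positive shared property, cowness, of them.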

To understand how Buddhists responded to the second charge of circularity, it helps to look at another Buddhist view. Buddhists were mereological reductionists: they did not believe that wholes were anything over and above their parts. So, a table, for instance, is nothing more than its parts arranged table-wise. The “table” is just a conceptual fiction: a convenient designator we use because of our shared interests and social conventions. It is conceivable, for instance, that someone who has never seen or heard of tables before will not see a table, just pieces of wood put together in seemingly random fashion. The idea that the table is ultimately real arises when we project our interests on to the world. How is any of this relevant to the question of universals? Buddhist philosophers argued that something similar goes on when we fall under the impression that all cows share a common cowness. We overlook the differences between individual cows because they satisfy some of our desires – for example, the desire for milk – that non-cows don’t. We then project our interests on to the world, mistakenly concluding that cowness is a real thing.

This may not seem like a very satisfactory response. It just pushes the problem back a step. How do all these particulars satisfy the same desire if they don’t share something in common? In this case, it seems like the cows really do share something: the ability to satisfy our desire for milk. Dharmakīrti (7th century) responded to this by using the example of fever-reducing herbs. He pointed out that there are many different herbs that reduce fevers. But it would be foolish to conclude from this that there exists a universal “fever-reducing-ness.” Each of these herbs is different, and they don’t reduce fevers in the same way, or use the same mechanisms to do so. We group them together under a single category only because of our subjective interest in reducing fevers. Dharmakīrti’s claim is that the same is true of everything. Each particular serves our interests in a manner that’s utterly distinct from everything else in the world. And so once again, there is no need to posit universals.

But there are still some lingering worries here. While we may accept that in the case of the herbs there is no universal fever-reducing-ness, does the same response work for simple substances such as elementary particles? Assuming for the sake of argument that an electron is an elementary particle, surely all electrons share something in common. Doesn’t the ability to bring about similar effects require a shared capacity – in this case, the same set of causal powers? One possible response to this line of argument, formulated by the philosopher Kamalaśīla (8th century), is to adopt what we would recognize as a Humean view of causation. Kamalaśīla rejected the notion of causal powers entirely, and like Hume, stated that there is nothing more to causation than constant conjunctions of events. Once again, talk of “causal powers” is just a convenient way of speaking about certain correlations that we never fail to observe.

This is obviously a very brief sketch of apoha nominalism. There is much more to say, particularly on the subtle differences between different versions of apoha defended by different Buddhist philosophers. This is a good place to start for further reading.

References

[1] From the translation in Apoha: Buddhist Nominalism and Human Cognition, edited by Mark Siderits, Tom Tillemans and Arindam Chakrabarti (2011).

Nyāya Substance Dualism

In an earlier post, I went over an argument for the existence of God that was formulated by philosophers in the Nyāya tradition. Here my aim is to provide a brief summary of some Nyāya arguments for substance dualism, the view that mental and physical substances are radically distinct.

The categories of substance and quality were fundamental to Nyāya metaphysics. A substance is the concrete substratum in which qualities inhere. An apple, for instance, is a substance, and redness is a quality that inheres in it. Substances can be complex and made up of parts (like an apple) or simple and indivisible (like an atom).

Nyāya held that in addition to physical substances, there are non-physical ones. Our individual soul – that is, our Self – is a non-physical substance. Like atoms, individual souls are simple and indivisible, and hence eternal (since destroying an object is the same as breaking it up into its constituent parts, and simple substances do not have any constituent parts). Consciousness, and different conscious states like desires and memories, are qualities that inhere in the substantial Self.

The primary philosophical adversaries of Nyāya belonged to two different camps. The first was Cārvāka, which claimed that only physical substances exist, that the mind does not exist apart from the body, and that the self is reducible to the totality of the body and all its functions. The other was Buddhism, which rejects physicalism but denies the existence of the substantial Self. Buddhism replaces the idea of the Self with a stream of momentary causally connected mental states. Nyāya was engaged in a protracted series of debates with both Cārvāka and Buddhism. Versions of the arguments I summarize in this essay were developed and defended by Nyāya thinkers such as Vātsyāyana (5th century), Uddyotakara (7th century) and Udayana (10th century), among others.

Against Physicalism

Nyāya came up with a number of arguments against physicalism. The one I focus on here has interesting similarities to arguments found in contemporary debates within the philosophy of mind. It can be stated like this¹:

(P1) All bodily qualities are either externally perceptible or imperceptible.

(P2) No phenomenal qualities are externally perceptible or imperceptible.

(C) Therefore, no phenomenal qualities are bodily qualities.
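Since the argument is a straightforward syllogism, its validity can even be checked mechanically. Here is a sketch in Lean (the predicate names are mine, not Nyāya terminology):

```lean
-- P1: every bodily quality is externally perceptible or imperceptible.
-- P2: no phenomenal quality is externally perceptible or imperceptible.
-- C:  no phenomenal quality is a bodily quality.
variable (Quality : Type)
variable (Bodily Phenomenal ExtPerc Imperc : Quality → Prop)

example
    (p1 : ∀ q, Bodily q → ExtPerc q ∨ Imperc q)
    (p2 : ∀ q, Phenomenal q → ¬(ExtPerc q ∨ Imperc q)) :
    ∀ q, Phenomenal q → ¬Bodily q := by
  intro q hp hb
  exact p2 q hp (p1 q hb)
```

Whether the conclusion is true thus turns entirely on whether the premises are.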

The argument is deductively valid, so let us examine the premises. As the term suggests, externally perceptible bodily qualities are features of the body that can be directly perceived by external agents. Color is an example: everyone who can see me can see that the color of my body is brown. An imperceptible quality is a feature of the body that cannot be directly perceived, but can be inferred through observation and analysis. Weight was a common example in Nyāya texts: you cannot directly perceive my weight, but if I stand on a weighing scale, you can know it by looking at the number the scale displays. P1 states that all bodily qualities are exhausted by these two categories.

Let us move on to P2. Phenomenal qualities are the features of conscious experience: the subjective, first-person what-it-is-likeness of having an experience. The experience of color, pleasure, pain, desire, and memory are all examples of phenomenal qualities. P2 draws on the intuition that phenomenal qualities are essentially private.

To say that phenomenal qualities are not externally perceptible is to say that I cannot immediately know what it is like for you to have an experience. I have direct access to externally perceptible qualities of your body, like color, but I don't have direct access to your phenomenal qualities. I may be able to infer from your behavior that you are in pain, but I don't experience your pain in the immediate, first-person manner that you do. The contemporary American philosopher Thomas Nagel made a similar point in his classic paper "What Is It Like to Be a Bat?": we may be able to observe how bats behave, and how their organs, brains and nervous systems work, but we can't know what it feels like, from the inside, to be a bat. Only a bat knows what it is like to be a bat.

If phenomenal qualities aren’t externally perceptible, perhaps they are imperceptible qualities like weight. But this is extremely implausible. Phenomenal qualities are not externally perceptible, but they’re clearly internally perceptible. The whole point is that I have direct perceptive access to phenomenal qualities – my conscious experiences are given to me in a basic and immediate fashion. Even if I don’t know that my experiences are veridical, I always know what the features of my own experience are. Thus, phenomenal qualities are not imperceptible.

Since phenomenal qualities are neither externally perceptible nor imperceptible, they are not physical qualities. If physicalism is the thesis that only physical substances and their qualities exist, and the above argument is sound, we must conclude that physicalism is false.

Against No-Self Theory

The above argument by itself does not get us to the kind of substance dualism that Nyāya favored. Buddhists, after all, are anti-physicalists, but they do not believe that the Self is an enduring substance that persists through time. Instead, Buddhists view a person as nothing more than a series of sequential causally connected momentary mental states. The 18th century Scottish philosopher David Hume, and more recently, the British philosopher Derek Parfit, came to roughly the same conclusion.

Again, the Nyāya canon has several arguments against the Buddhist no-Self theory, but I will touch on just two of them here. The first of these is that the Self is necessary to explain the first person experience of recollection or recognition. The intuition here is something like this: If I notice a tree and recognize that it is the same tree I saw a few days ago, there has to be a subject that was present both during the first experience and the second one for recollection to occur. Similarly, if the desire to eat a banana arises in my mind at t2 because I remember that I previously enjoyed eating a banana at t1, there has to be a subject that existed during the initial experience that occurred at t1, and persisted through time until the recollection at t2. Without the Self – a substance that endures through these different points in time – the experience of memory is a mystery.

The Buddhist response was that causal connections between momentary mental states could explain the phenomenon of memory. If the mental state at t1 is causally connected to the mental state at t2, that’s all that’s needed for the mental state at t2 to recall the experience at t1. The Nyāya rejoinder was that causal connections were not sufficient to account for how a mental event can be experienced as a memory. When I recognize a tree I saw few days ago, it isn’t just that an image of the previously perceived tree pops into my mind. Rather, my experience is of the form: “This tree that I see now is the same tree I saw yesterday.” In other words, my present experience after seeing the tree involves my recognition of the previous experience as belonging to myself. Similarly, my current desire to eat a banana is based on my recognition of the previous enjoyable experience of eating a banana as belonging to myself. One person does not experience the memory of another, and in much the same way, one mental state cannot remember the content of another. So a single entity that persists through time must exist.

The second argument for the Self takes for granted what we might call the unity of perception. Our perceptions are not a chaotic, disjointed bundle, despite arising through different sense organs; there is a certain unity and coherence to them. In particular, Nyāya philosophers drew attention to mental events characterized by cross-modal recognition. An example would be: "The table that I see now is the same table I am touching." We have experiences that arise through different channels (in this case, my eye and my hand), but something must tie these experiences together and synthesize them into a unified cognitive event. In other words, the Buddhist no-Self theory might be able to explain the independent experiences of sight and touch, but for the object of both experiences to be recognized as one and the same, there must be something else to which both experiences belong, and which integrates them to give rise to the unified perception of the object. Again, it seems we must admit the existence of the Self.

Needless to say, all these arguments were (and remain) controversial. The debates between Buddhist and Nyāya philosophers got extremely complex over time. They involved increasingly fine-grained analyses of the phenomenology of recollection/recognition, and increasingly technical discussions on the metaphysics of causation. Similar debates took place between other orthodox Indian schools of thought that believed in the Self (Mīmāṃsā, Vedānta, etc.) and their Buddhist no-Self rivals. A good place to start for further reading on this subject would be the collection of essays in Hindu and Buddhist Ideas in Dialogue: Self and No-Self.

Notes

[1] The argument I’ve presented here is based on Kisor Kumar Chakrabarti’s formulation in Classical Indian Philosophy of Mind: The Nyāya Dualist Tradition.

A New Consequentialism

Consequentialism is a family of theories that locates the right-making or good-making features of actions in their consequences. For the sake of simplicity, let's work with a very basic consequentialist view: we ought to maximize the good, and the good is identified with happiness. So, we ought to maximize happiness with our actions.

The problem with this view is that it says the right thing to do, what we ought to do, is maximize happiness. However, intuitively, there are situations where maximizing happiness is not what we ought to do. For instance, nobody but the most committed act utilitarian would say that it’s ok to kill a homeless person to supply his organs to five needy recipients, even if nobody would ever find out.

So, this simple consequentialism fails to give a satisfying analysis of deontic concepts, like RIGHT and WRONG. In other words, it gives the wrong application conditions for RIGHT and WRONG, because it entails that certain actions which fall within the extension of WRONG actually fall within the extension of RIGHT.

What could we do to revise our simple consequentialism? Well, we could refrain from giving an analysis of deontic concepts at all. That is, we could become scalar utilitarians: people who think actions are ranked on a scale from best to worst. Maybe moral judgments that involve deontic concepts are just wrongheaded. We could do without concepts like RIGHT and WRONG and instead talk only about better or worse actions, actions which we have more or less reason to do.

This just isn’t satisfying, though. Clearly torturing children for fun isn’t just worse than not torturing them for fun, it’s wrong. We ought not to torture children for fun. There’s nothing wrongheaded about that moral judgment. So, we need to give an account of deontic concepts if we want a theory that captures what we do when we engage in moral discourse and deliberation.

Here is what I take to be the best way to deal with this problem. If we try to give a consequentialist analysis of deontic concepts, we get the extensions of those concepts wrong. If we try to avoid giving an analysis, then we exclude a large portion of our moral discourse from our theory. So, we should analyze deontic concepts as conventions based on contingent social arrangements. We should still employ deontic concepts in moral judgment, and they play an indispensable role in our moral lives. But they do not reflect some fundamental structure of the moral world; rather, they reflect contingent social arrangements.

The role that consequentialism can play in this theory is as a means by which we can critique these contingent social arrangements. So, we could give consequentialist critiques of the ways in which deontic concepts are deployed in specific classes of moral judgments. For instance, if the concept RIGHT once had within its extension returning escaped slaves to their so-called owners, then that deontic concept could be revised according to a consequentialist critique of the institution of slavery. Our deontic moral judgments, judgments of right and wrong, permissibility and impermissibility, are ultimately subject to a consequentialist evaluation if the need arises.

Is this just rule utilitarianism? I don’t think so. Typically, rule utilitarians think we ought to obey a certain idealized set of rules which pass the consequentialist test of goodness-maximization. What I’m proposing is that we work with the rules we already have, and revise as the need arises, rather than reason according to an idealized set of good-maximizing rules. Besides, a rule utilitarian analysis of deontic concepts will probably fall victim to the extension problem I raised above against our simple consequentialist analysis.

Check out Brian McElwee's paper on consequentialism for a similar account of non-deontic consequentialism, on which this post is based.

A Nyāya-Vaiśeṣika Argument for the Existence of God

Historical Context

The different philosophical traditions in classical Indian thought are usually categorized under the labels of orthodox and heterodox. The orthodox traditions accepted the scriptural authority of the Vedas, while the heterodox ones such as Buddhism and Jainism did not. Nyāya and Vaiśeṣika were initially two different orthodox schools. Nyāya was mostly concerned with logic, reasoning and epistemology. Vaiśeṣika focused on metaphysics and identifying the different kinds of substances that ultimately exist. By the eleventh century, these two traditions had merged into a single school, which came to be known simply as Nyāya-Vaiśeṣika (NV henceforth). Apart from a few academic philosophers, the NV tradition is basically extinct today. Historically, however, they were extremely influential and made a number of important philosophical contributions.

Of all the theistic systems in India, NV had the greatest confidence in the scope of natural theology. They came up with a number of arguments for the existence of Īśvara (“the Lord”), and were engaged in a series of polemical debates with other thinkers, their primary adversaries usually being Buddhists. I will go over their most well known argument for theism in this essay.

What the Argument is Not

Before I lay out the argument, I want to make a few preliminary comments on what the argument is not, since this is often a source of confusion.

The argument is not like the popular Kalām cosmological argument, which states that everything that begins to exist must have a cause, and that if you trace the chain of causes you eventually get to an uncaused cause that explains the beginning of the universe. Indeed, the NV system holds that a number of entities are eternal and uncreated. These include the atoms of different elements, time, space, universals, individual souls, and of course, God.

The argument also does not belong to the family of arguments from contingency, which conclude that there is a necessarily existent being that explains why anything at all exists. The NV thinkers were not committed to the view that everything that exists has an explanation for its existence. Finally, the argument is not like familiar teleological arguments that draw on observations of biological complexity to infer that an intelligent designer exists.

That said, the NV argument does bear some resemblance to all of the above arguments. It is therefore best understood as a hybrid cosmological-teleological argument. The argument points out that certain kinds of things require an intelligent creator that has the attributes traditionally assigned to God.

Overview of the Argument

The argument can be stated as follows¹:

(P1) Everything that is an effect has an intelligent maker.

(P2) The first product is an effect.

(C)  Therefore, the first product has an intelligent maker.

The argument as spelled out here is a little different from the way the NV philosophers usually framed it, primarily because they had a more elaborate way of laying out syllogisms. But that need not concern us. The important point is that the argument is valid – if the premises are true, the conclusion does follow. But what are we to make of the premises?
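For readers who like to see the validity claim spelled out, the syllogism can be checked mechanically. Here is a minimal sketch in Lean (the predicate and constant names are my own choices for illustration, not terms from the NV literature):

```lean
-- The NV syllogism as a first-order inference:
-- P1: every effect has an intelligent maker.
-- P2: the first product is an effect.
-- C:  the first product has an intelligent maker.
variable (Thing : Type)
variable (Effect HasIntelligentMaker : Thing → Prop)
variable (firstProduct : Thing)

example
    (P1 : ∀ x, Effect x → HasIntelligentMaker x)
    (P2 : Effect firstProduct) :
    HasIntelligentMaker firstProduct :=
  P1 firstProduct P2  -- instantiate P1 at the first product, apply P2
```

The proof is a single application of universal instantiation and modus ponens, which is just to say the argument’s validity is not where the controversy lies.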

Some terminological clarification is in order before we can assess the premises. By an effect, defenders of the argument refer to a composite object – i.e., an object made of parts. Buildings, rocks, mountains, human bodies are all examples of effects. Recall that NV philosophers were atomists, and since atoms are indivisible and indestructible, they do not count as effects.

The first product refers to the simplest kind of effect that can be further broken down into atoms. In the NV system, dyads – imperceptible aggregates of two atoms – were seen as the first product. But again, that need not concern us. All we need to know is that the first product is the smallest unit that is itself further divisible. We can now move on to scrutinize the premises.

Support for the Premises

P2 is necessarily true, since the first product is defined as the simplest kind of effect. Things get interesting when we consider the first premise. P1 states that every effect has an intelligent maker, where an intelligent maker is defined as an agent who:

(i) Has knowledge of the components that make up the effect;

(ii) Desires to bring about the effect; and

(iii) Wills to do so.

The obvious question then is: why believe that every effect has an intelligent maker?

The support offered for P1 is inductive. NV philosophers defend P1 by pointing out that we have a very large number of examples that confirm it. The classic example is that of a pot. We observe that pots have an intelligent maker: the potter who is aware of the material out of which the pot is made (the clay), desires to make the pot, and wills to do so. Atoms are deliberately excluded from P1 since they aren’t effects, and hence cannot be seen as counterexamples. Given this, defenders of the argument claim that the numerous confirming instances (as in the case of the pot/potter) entitle us to accept P1 as a general principle.

Responding to Objections

Philosophers in the NV tradition were aware that the argument was extremely controversial, and came up with a number of interesting responses to common objections. I will go over three of them here. 

Objection 1: Counterexamples to P1

The most common objection is that there are obvious counterexamples to the first premise. Rocks, mountains, plants – these are all made of parts, and yet, don’t have a maker. Thus, P1 is false.

The NV response is to say that this objection begs the question against the theist. The mere fact that we don’t immediately observe a maker in these cases does not establish that no maker was involved. For the maker could, after all, be spatially or temporally remote² from the effect.

NV philosophers press the point by insisting that if direct observation of the cause was necessary, then even ordinary inferences would be defeated. For instance, we wouldn’t be able to infer the presence of fire from smoke if the fire wasn’t immediately observable. But of course, the fire could be a long distance away. Similarly, if we happen to come across a pot, we wouldn’t suspend judgement about whether it was made by a potter simply because we didn’t directly and immediately observe the pot being made by one. The potter could, after all, be in a different town, or even be dead. In other words, this objection proves too much, since it would render everyday inferences that we all rely on unjustified.

Objection 2: The Possibility of Counterexamples to P1

At this point, we might be willing to concede that we can’t rule out the existence of a maker for things like rocks and mountains. However, since the maker isn’t directly observed, the theist can’t be sure that a potential counterexample doesn’t exist either. It may be true that we have observed several instances of effects that have makers, but the possibility that there exists a counterexample means that P1 is at the very least unjustified, if not shown to be false.

Once again, the NV response is that the objection proves too much. The mere possibility of a counterexample is not reason enough to give up on the first premise. Consider, again, the example of smoke and fire. The mere possibility that there may have, at some time in the distant past, or in a faraway land, been an occurrence of smoke without fire does not give us enough reason to reject general fire-from-smoke type inferences. Unless we are willing to give up on induction entirely, there is no reason to reject P1.

The skeptic is also accused of another inconsistency at this point. Why does the skeptic not doubt that material things have material causes? If someone who is skeptical of P1 came across an object they had never seen before, they probably would not doubt that the object had been made out of pre-existing matter. And yet, the support for the belief that material objects have material causes is also inductive. The skeptic must provide some principled reason for rejecting P1 while still believing in material causes, and that reason must not collapse into the first objection, which has already been refuted. Since the skeptic has not done this, they have failed to show that we must not accept P1.

Objection 3: The Gap

Many arguments for theism face what is sometimes called “the gap” problem. In other words, even if these arguments establish the existence of an intelligent maker, there is no reason to think this creator has any of the attributes traditionally assigned to God. A skeptic may point out that in all the cases of intelligent makers we have observed, the makers were embodied agents. The makers were not omniscient, uncreated or eternal. So there is no reason to suppose that the argument, even if successful, gets us to God. At best, it can establish the existence of some kind of intelligent maker, but any further claims about the omniscience or eternality of the maker would not be justified, since these properties are not observed in any of the cases we discussed.

Predictably, the NV response is that the criterion for inference being proposed as part of the objection is too strong, and would defeat many of our everyday inferences. In most inferences we make, we go beyond the general cases, and can justifiably infer special characteristics depending on the context. To go back to the commonly used fire-and-smoke example, if we observe smoke rising from a mountain, we don’t merely infer that there is fire. Rather, given the specific context, we infer that there is fire that has the property of being on the mountain. In other words, it isn’t fire-in-general that is inferred, it is fire-on-the-mountain. Similarly, based on the specific context, we can conclude that the maker of the first product has certain characteristics.

Since the maker exists prior to the first product, it must be uncreated. It cannot have a body, since bodies are made of parts, and this would simply introduce a regress that would have to be terminated by a creator that is not made of parts. Thus, the maker must be disembodied and simple. Since it is simple, it cannot be destroyed by being broken down into its constituent parts, and hence must be eternal. Since it has knowledge of all the fundamental entities and how to combine them, it must be omniscient. Finally, simplicity favors a single maker over multiple agents. The intelligent maker thus has many of the attributes of the God of traditional theism.

Conclusion

The argument, if successful, does get us to a God-like being. P1 is the controversial premise, and as we have seen, NV philosophers respond to objections by essentially shifting the burden of proof onto the skeptic. This can seem like trickery, and indeed, that’s how the influential 11th-century Buddhist philosopher Ratnakīrti characterized it in his work Refutation of Arguments Establishing Īśvara, which is arguably the most thorough critique of the Nyāya-Vaiśeṣika argument. Either way, it is at least not obvious that the first premise can be easily rejected, so the skeptic must do some work to justify rejecting it. I may go over Ratnakīrti’s criticisms in a future essay.

References

[1] The argument as I’ve presented it here is roughly based on Kisor Kumar Chakrabarti’s formulation in Classical Indian Philosophy of Mind: The Nyāya Dualist Tradition.

[2] The terminology I’m using is based on Parimal Patil’s translation of the original Sanskrit terms in Against a Hindu God: Buddhist Philosophy of Religion in India.