The Spinozan Model of Belief-Fixation

A form of Cartesianism still pervades both philosophy and common sense. The idea that we can understand a proposition without believing it is almost a dogma in contemporary thought about belief-formation. Let’s call the view that we can understand a proposition without forming a belief about it the Cartesian Model of Belief-Fixation. In direct contrast, we have the Spinozan Model of Belief-Fixation, which says that when we understand a proposition, we automatically form a belief about it.

It just seems so obvious that I can understand the proposition that the Earth is flat without believing that the Earth is flat. The Cartesian Model captures at least a decent portion of our common sense conception of the belief-formation process. However, there is experimental evidence that tells against the Cartesian Model and counts in favor of the Spinozan Model.  I will provide some links to papers that explain the anti-Cartesian experimental evidence at length at the end of this post.

One form of experimental evidence against the Cartesian Model comes from the effects of cognitive load on belief-formation. The Spinozan Model takes believing and disbelieving to be outputs of different cognitive processes, so cognitive load should affect them differently, which is exactly what we see in the literature. The basic idea is that, for the Spinozan, believing a proposition is the output of an automatic, subpersonal cognitive system, whereas disbelieving a proposition requires cognitive effort on the part of the believer. So, cognitive load will affect disbelief in ways it cannot affect belief, since belief-formation is a subpersonal, automatic process.

The upshot of the Spinozan Model is that we cannot avoid believing propositions we understand. We cannot understand a proposition, suspend belief while we evaluate the evidence, and only then form a belief about that proposition. That intuitively attractive picture of our doxastic processes is exactly what the Cartesian Model captures: on the Cartesian Model, we can interrupt the belief-formation process after we understand a proposition but before a belief forms. On the Spinozan Model, by contrast, we cannot detach understanding from belief.

What sorts of implications does the Spinozan Model have? Well, consider epistemology. We do not have the ability to evaluate the evidence for, or our reasons to believe, a proposition prior to believing it, so the basing relation seems to be in trouble. We may be able to base our beliefs on our evidence in some cases, such as perception, since perceptual beliefs are the automatic outputs of a cognitive system connected to our perceptual systems in a way that probably constitutes something resembling a basing relation between perceptual experience and belief. When we go higher-order, however, we seem to be able to evaluate our reasons for belief prior to forming beliefs, which is what the basing relation requires in this domain. But we cannot do this if the Spinozan Model is true. We automatically believe what we understand, so we do not necessarily base our beliefs on our available reasons or evidence. Another epistemic worry comes from constitutive norms of belief. If there are constitutive norms of belief that require things like believing for what seem to the believer to be good reasons, then the Spinozan Model runs roughshod over those norms.

Things aren’t completely bleak for the Spinozan epistemologist, though. We can still shed our beliefs through a process of doxastic deliberation. So, our beliefs can be sensitive to our available evidence or reasons, but only once we have already formed them and they come into contact with the rest of our web of beliefs. We can, through cognitive effort, disbelieve things. However, the process of disbelieving will be open to cognitive load effects, among other things. Cognitive load is present in many parts of our day-to-day lives; just think of a time when you were slightly distracted by something while trying to accomplish a task. So the process of disbelieving something is not necessarily easy. But the ability to shed our beliefs opens the door to substantive epistemic theorizing within a Spinozan worldview. So all is not lost.

The Spinozan Model also has moral and political implications. For example, let’s consider a Millean Harm Principle for free speech: the speech of others should be restricted if and only if doing so prevents harm to others. The Harm Principle needs to be understood epistemically, so in terms of what people reasonably believe will prevent harm to others. So, if it is reasonable to believe that a person’s speech will harm somebody, then that person’s speech should be restricted. The question of who gets to restrict that person’s speech is a difficult one, but perhaps we can assume that it is the state, provided the state is a legitimate authority. Now let’s unpack the kind of harm at play here. I won’t pretend to give a complete analysis of the sort of harm at play in this Harm Principle, but I can gesture at it with an example. People in the anti-vaccination movement spread, through their speech, various conspiracy theories and other forms of misinformation that lead people who would otherwise have vaccinated their children not to do so. The children sometimes contract diseases that would have been easily prevented with vaccines. Those diseases at least sometimes cause harm to those children. So, the speech of at least some anti-vaccination advocates leads, at least sometimes, to at least some children being harmed. I take this to be a paradigm case where it is a serious question whether we should restrict the speech of such advocates.

Now let’s bring in the Spinozan Model. If the Spinozan Model is true, then when anti-vaccination advocates post misinformation on Facebook (for example), people who read it will automatically believe it. Since those people understand those posts, they believe them. Now, such beliefs will persist in the mental systems of people who either avoid or are unaware of information that counters the anti-vaccination narrative. Some of those people will probably have children, and some of those people with children will probably not vaccinate them. The fact that it is so easy to cause other people to form beliefs with harmful downstream effects should give us pause. Perhaps, assuming that some form of the Harm Principle is true, there is a good case to be made that we should restrict certain people’s speech about certain topics. The case is only strengthened when we become Spinozans about belief-fixation.

Another thing the Spinozan has something to say about is propaganda. If the Spinozan Model is true, then we are quite susceptible to propaganda. When cognitive load is induced in us, we become especially prone to retaining beliefs about the propositions we understand. For example, news programs can induce cognitive load through things like news tickers at the bottom of the screen, constant news alert sounds, and various graphics and effects moving around the screen while the news is being read out to listeners and watchers. Those paying close attention to their screens become subject to cognitive load effects, which makes disbelieving what they automatically believe especially difficult. So, we end up retaining a lot of the beliefs we form when watching the evening news. Whether this is a problem depends on the quality of the information being spread by the news outlet, but if that outlet is in the habit of putting out propaganda, then things are pretty bad.

There are surely other implications of the Spinozan Model of belief-fixation, but I’ll rest here. For those who find the model attractive, there are clearly tons of research topics ripe for the picking. For those who find the model unattractive, defending the Cartesian Model by trying to explain the experimental evidence within that framework is always an option.

Further reading:

How Mental Systems Believe

Thinking is Believing

You Can’t Not Believe Everything You Read


What I'm Currently Working On

I haven’t uploaded anything to this blog in a while so I figured I would post a brief overview of what I’ve been thinking about and working on. I should start regularly uploading normal blog posts soon.

My current research is almost entirely based on a theory of belief formation and its implications for epistemology, rationality, and Streumer’s argument that we can’t believe a global normative error theory.

The theory of belief formation that I’m working with is called the Spinozan theory. The theory is situated as an alternative to the Cartesian theory of belief formation. The Spinozan theory says that we automatically form a belief that p whenever we consider that p. This means that the process of belief formation is automatic and outside of our conscious control. This theory has serious implications for several areas, such as rationality and epistemology.

In terms of epistemology, lots of philosophers working in that area will talk about belief formation in ways that presuppose a Cartesian theory. The Cartesian theory says that the process of belief formation and the process of belief revision are on par; both are within our conscious control. When we form a belief we base it on considerations like evidence. We consider the evidence for and against the proposition and then we form a belief. However, if the Spinozan theory is true then this is a misrepresentation of how we actually form beliefs. According to the Spinozan, we automatically form a belief whenever we consider a proposition. We may be able to revise our beliefs with conscious effort, but that process requires more mental energy than the process of forming a belief. If the Spinozan is right, we need to investigate whether or not we can do without talk of control over belief formation in epistemology.

The Spinozan theory entails that we believe lots of contradictory things. That runs contrary to our ordinary view of ourselves as relatively rational creatures who do their best not to hold inconsistent beliefs. If any plausible account of rationality requires at least a lot of consistency among our beliefs, then we’re pretty screwed. But we might be able to work with a revisionary account of rationality that sees being rational as a constant process of pruning contradictory beliefs from one’s mind through counterevidence. The problem with that sort of account, though, is that belief revision is an effortful process that is sensitive to cognitive load effects, whereas belief formation is automatic and will occur whenever one considers a proposition. So, we’ll basically be on a rationality treadmill, especially in our current society where we’re bombarded with things that induce cognitive load.

Another project that I’m going to start working on is applying the Spinozan theory to propaganda. I think that somebody interested in designing very effective propaganda should utilize the Spinozan theory. For example, knowing that belief formation is automatic and occurs whenever a person considers a proposition would help one design some pretty effective propaganda, since newly formed beliefs can root themselves in a person’s mental processes and influence her behavior over time. If you throw in some cognitive-load-inducing effects, then you can make it more difficult for people to resist keeping their newly formed beliefs.

The last project I’m currently working on is a paper in which I argue against Bart Streumer’s case against believing the error theory. According to Streumer, one cannot believe a global normative error theory, because believing it would require believing that one has no reason to believe it, which he claims we cannot do. I think that if we work with the Spinozan theory then this is clearly false, since we automatically form beliefs about things that we have no reason to believe. My guess is that proponents of Streumer’s view will push back by arguing that they are talking about something different than I am when they use the word “belief”. But I think that the Spinozan theory tracks the non-negotiable features of our ordinary conception of belief well enough to qualify as an account of belief in the ordinary sense.

For those interested in the Spinozan theory, click this link. I should be regularly uploading posts here soon.


Seemings Zombies

Let’s assume that seemings are sui generis propositional attitudes that have a truthlike feel. On this view, seemings are distinct mental states from beliefs and other propositional attitudes. It at least seems conceivable to me that there could be a being that has many of the same sorts of mental states that we have except for seemings. I’ll call this being a seemings zombie.

The seemings zombie never has mental states where a proposition is presented to it as true in the sense that it has a truthlike feel. Would such a being engage in philosophical theorizing if presented with the opportunity? I’m not entirely sure whether the seemings zombie would have the right sort of motivation to engage in philosophizing. If we need seemings or something similar to them to motivate philosophical theorizing, then seemings zombies won’t be motivated to do it.

But do we need seemings to motivate philosophizing? I think we might need them if philosophizing includes some sort of commitment to a particular view. What could motivate us to adopt a particular view in philosophy besides the fact that that view seems true to us? I guess we could be motivated by the wealth and fame that comes along with being a professional philosopher, but I’m skeptical.

Maybe we don’t need to adopt a particular view to philosophize. In that case we could say that seemings zombies can philosophize without anything seeming true to them. They could be curious about conceptual connections or entailments of theories articulated by the great thinkers, and that could be sufficient to move them to philosophize. I’m not sure whether or not this would qualify as philosophizing in the sense many of us are acquainted with. Even people whose careers consist of the study of a historical figure’s intellectual works seem to commit themselves to a particular view about that figure. Kant interpreters have views about what Kant thought or argued for, and my guess is those views seem true to those interpreters.

The seemings zombies might still be able to philosophize, though. Maybe they would end up as skeptics, looking down on all of us doing philosophy motivated by seemings. We seemings havers end up being motivated by mental states whose connection to the subject matter they are motivating us to take stances on are tenuous at best. The seemings zombies would then adopt skeptical attitudes towards our philosophical views. But I’m still worried, because skeptics like to give us arguments for their views about knowledge, and my guess is a lot of sincere skeptics are motivated by the fact that skepticism seems true to them. I could just be naive, though; there may be skeptics who remain uncommitted to any philosophical view, including their skepticism. I’m just not sure how that’s supposed to work.

One reaction you might have to all of this is to think that seemings zombies are incoherent or not even prima facie conceivable. That may be true, but it doesn’t seem that way to me.



Mental Incorrigibility and Higher Order Seemings

Suppose that the phenomenal view of seemings is true. So, for it to seem to S that P, S must have a propositional attitude towards P that comes with a truthlike feel. Now suppose that we are not infallible when it comes to our own mental states. We cannot be absolutely certain that we are in a certain mental state. So, we can make mistakes when we judge whether or not it seems to us that P.

Now put it all together. In cases where S judges that it seems to her that P, but she is mistaken, what is going on? Did it actually seem to her that P or did she mistakenly judge that it did? If it’s the former, then it is unclear to me how S could mistakenly judge that it seems to her that P. Seeming states on the phenomenal view seem to be the sorts of mental states we should be aware of when we experience them. If it's the latter, then it is unclear whether higher order seemings can solve our problem.

If a subject is experiencing a seeming state and judges that it seems to her that P, then there has to be some sort of luck going on that disconnects the seeming state from her judgment such that she does not know that it seems to her that P. Maybe she’s very distracted when she focuses her awareness onto her seeming state to form her judgment and that generates the discrepancy. I’m not really sure how plausible such a proposal would ultimately be. Instead, if the subject is not actually in a seeming state, then we need to explain what is going on when she mistakenly judges that she is in one. One possibility is that there are higher order seemings. Such seemings take first order seemings as their contents. On this view, it could seem to us that it seems that P is the case.

The idea of higher order seemings repulses me, but it could be true. Or, in a more reductionist spirit, we could say that higher order seemings are just a form of introspective awareness of our first order seemings. But I am worried that such a proposal would reintroduce the original problem linked to fallibility. If I can mistakenly judge that it seems to me that it seems to me that P, then what is going on with that higher order (introspective) seeming? The issue seems to come back to bite us in the ass. But it might do that on any proposal about higher order seemings, assuming we have accepted that we are not infallible mental state detectors. Maybe we just need to accept a regress of seemings, or maybe we should stop talking about them. Like always, I’ll just throw my hands up in the air and get distracted by a different issue rather than come up with a concrete solution.

Costs, Benefits, and the Value of Philosophical Goods

Philosophy is a very diverse field in which practitioners employ different dialectical and rhetorical techniques to advance their views and critique those of their opponents. But despite the heterogeneity, there seems to me to be a prevailing attitude towards the overarching method by which we choose philosophical theories among contemporary philosophers. We are all supposed to acknowledge that there are no knockdown arguments in philosophy. Philosophical theories are not the sorts of beasts that can be vanquished with a quick deductive argument, mostly because there is no set of rules that we can use to determine if a theory has been refuted. Any proposed set of rules will be open to challenge by those who doubt them, and there will be no obvious way to determine who wins that fight.

So, the process by which we compare and choose among philosophical theories cannot be guided by which theories can be refuted by knockdown arguments, but rather we seem to be engaged in a form of cost-benefit analysis when we do philosophy. We look at a theory and consider whether its overall value outweighs that of its competitors, and then we adopt that theory rather than the others. One way of spelling this process out is in terms of reflective equilibrium; we consider the parts of the theories we are comparing and the intuitions we have about the subject matter that the theories are about, and then we weigh those parts and intuitions against each other. Once we reach some sort of state of equilibrium among our intuitions and the parts that compose the theory we’re considering adopting, we can be justified in believing that theory.

Reflective equilibrium seems to be the metaphilosopher’s dream, since it avoids the problems that plague the knockdown argument approach to theory selection, and it makes room for some level of reasonable disagreement among practitioners, since not everybody has the same intuitions, and the intuitions shared among colleagues may vary in strength (in both intrapersonal and interpersonal senses). Unfortunately for me, I worry a lot about how reliable our methods are at getting us to the truth, and the process I crudely spelled out above does not strike me as satisfactory.

To be brief, my concern is that we have no clear way of determining the values of the things we are trading when we do a philosophical cost-benefit analysis. In other cases of cost-benefit analyses, it seems obvious to me that we can make satisfactory judgments in light of the values of the goods we’re trading. If I buy a candy bar at the store on my way to work, I can (at least on reflection) determine that certain considerations clearly count in favor of purchasing the candy bar and others clearly count against it. But when I weigh intuitions against parts of theories and parts of theories against each other, I begin to lose my grasp on what the exchange rate is. How do I know when to trade off an intuition that really impresses itself upon me for a theory with simpler parts? Exactly how intense must an intuition be before it becomes practically non-negotiable when doing a philosophical cost-benefit analysis? Questions like these throw me for a loop, especially when I’m in a metaphysical realist mood. Perhaps anti-realists will have an easier time coping with this, but those sorts of views never satisfied me, because there are parts of philosophy (like some areas in metaphysics) that never really struck me as open to a complete anti-realist analysis, so at least for me global anti-realism is off the table. At the moment, I’m completely puzzled.

An Introduction to Abhidharma Metaphysics

The Abhidharma school of Indian Buddhism represents one of the earliest attempts to form a complete, coherent philosophical system based on the teachings of the Buddha. Abhidharma metaphysics rests on mereological reductionism: the claim that wholes are reducible to their parts. On the Abhidharma view, a composite object like a table is nothing more than its parts arranged table-wise. The “table” is a convenient designator based on our shared interests and social conventions. Crucially, for Abhidharma Buddhists, this also extended to the self. The self, rather than being an enduring substance, is reducible to a bundle of momentary mental states (Carpenter, 2).

Based on this principle of reductionism, Abhidharma went on to develop the Doctrine of Two Truths. A statement is “conventionally true” if it is based on our commonsense view of the world and leads to successful practice in daily life. Thus, it is conventionally true that macro objects such as tables and chairs exist. A statement is “ultimately true” if it corresponds to the facts as they are, independent of any human conventions. According to the Abhidharma view, the only statements that can be considered ultimately true are statements about ontological simples: entities that cannot be further broken down into parts. The tendency to think that statements involving composite objects like tables are ultimately true arises when we project our interests and conventions on to the world.

The primary opponents of the Abhidharma Buddhists were philosophers of the Nyāya orthodox tradition, about which I have written before. Nyāya philosophers were unflinching commonsense realists. They held that wholes existed over and above their parts. The word “table” is not merely a convenient designator or a projection of our interests on to the world, it is a real object that cannot be reduced to its parts. Nyāya held that there are simple substances and composite substances. Simple substances are self-existent and eternal. Composite substances depend on simple substances for their existence, but cannot be reduced to them. They possess qualities that are numerically distinct from the qualities of their component parts.

There are some obvious difficulties with the view that wholes exist in addition to their parts, and Abhidharma philosophers were quick to point this out. If the table exists in addition to its parts, it would follow that whenever we look at a table, we are looking at two different entities – the component parts and the (whole) table. How can two different objects share the same location in space? Nyāya philosophers responded by stating that wholes are connected to parts by the relation of inherence. In Nyāya metaphysics, inherence is an ontological primitive, a category that cannot be further analyzed in terms of something else. To put it very crudely, inherence functioned as a kind of metaphysical glue in the Nyāya system. The inherence relation is what connects qualities to substances. The quality redness inheres in a red rose. Similarly, the inherence relation also connects wholes with their parts. In this case, the whole – the table – inheres in its component parts.

At this point in the debate, the standard Abhidharma move was to ask how exactly wholes are related to their parts. Do wholes inhere wholly or partially in their parts? If wholes are real and not reducible to their parts, but nonetheless inhere only partially in their parts, it would mean that there is a further ontological division at play. We now have three different kinds of entities. The parts of the table, the parts of the whole that inhere in the parts of the table, and the whole. Now, what is the relation between the whole and the parts of the whole that inhere in the parts of the table? Does the whole inhere wholly or partially in the second set of parts? If it is the former, then the second set of parts becomes redundant, for the whole could simply inhere wholly in the first set of parts (that is, the parts of the table). If the whole inheres partially in the second set of parts, then we will have to introduce yet another whole-part distinction, and there is an obvious infinite regress looming.

The Nyāya school held that wholes inhere wholly in their component parts. They drew an analogy with universals to make the illustration clear. Just as the universal cowness inheres in every individual cow, the table inheres wholly in every one of its individual parts. 

Abhidharma philosophers raised a second set of difficulties for Nyāya. Consider a piece of cloth woven from different threads. According to the Nyāya view, the cloth is a substance that is not merely reducible to the threads. But now let us suppose I cannot see the whole cloth. Let us suppose most of the cloth is obscured from my view, and I only see a single thread. In this case, we would not say that I have seen the cloth. I am not even aware that there is a cloth – I think there is just a single thread. But if the Nyāya view is correct, then the cloth (the whole) inheres in every single thread, so when I see the thread, I should see the cloth as well. Since I don’t, it follows that the Nyāya view is incorrect.

Now consider a piece of cloth woven from both red and black threads. Since the cloth is a separate substance, and since composite substances possess qualities numerically distinct from those of their component parts, the cloth must have its own color. But is the color of the cloth red or black? Nyāya responded that the color of the cloth is neither red nor black, but a distinct “variegated” color (Siderits, 111). But this only multiplies difficulties. If the cloth is wholly present in its parts, and it possesses its own variegated color, why do I not see the variegated color when I look at its component parts? When I look at the red threads, all I see is red, and when I look at the black threads, all I see is black. I do not see the variegated color in the component parts and yet, if the whole inheres wholly in its parts, I should.

Finally, if the whole is a distinct substance over and above its parts, the weight of the whole must be greater than the combined weight of its parts. But we do not observe this when we weigh composite substances. This is highly mysterious on the Nyāya view. But these problems are all avoided if we simply accept that wholes are reducible to their parts.

Abhidharma is a broad tradition that encompasses numerous sub-schools. Two of the most prominent ones are Vaibhāṣika and Sautrāntika. While both sub-schools agree that everything is reducible to ontological simples, they disagree on the number and nature of these simples. The Vaibhāṣika school is fairly liberal in its postulation of simples, while Sautrāntika is conservative. Moreover, Vaibhāṣika treats simples as bearers of an intrinsic nature. According to Vaibhāṣika atomists, an earth atom, for instance, is a simple substance that possesses the intrinsic nature of solidity. The Sautrāntika school rejected the concept of “substance” entirely. There are numerous reasons for this (most of them epistemological, which I will cover in a subsequent essay), but roughly, it came down to this: we have no evidence of substances/bearers, only qualities. Further, there is no need to posit substances, because everything that needs to be explained can be explained without them. For Sautrāntika philosophers, an earth atom is not a substance that is the bearer of an intrinsic nature “solidity” – rather, it is simply a particular instance of solidity. Thus, in Sautrāntika metaphysics, there are no substances or inherence relations, there are simply quality-particulars. This position is similar to what contemporary metaphysicians call trope theory.

The term “reductionism” is often cause for confusion when used in relation to Abhidharma Buddhism. It must be emphasized that the kind of reductionism relevant here is mereological reductionism. Abhidharma Buddhists were not reductionists in the sense of believing that consciousness could be reduced to material states of the brain. All Abhidharma schools held that among the different kinds of ontological simples, some were irreducibly mental, as opposed to physical.   

Apart from mereological reductionism, the other key aspects of Abhidharma metaphysics are nominalism and atheism. I have covered the Buddhist approach to nominalism in a previous essay, so I will not go over it here. When it comes to atheism, it is important to recognize that Abhidharma Buddhists (like all Buddhists) were only atheistic in a narrow sense. They rejected the existence of an eternal, omnipotent creator of the universe. This did not mean that they were naturalists or that they rejected deities altogether. They believed in many gods, but these gods were not very different from human beings apart from being extraordinarily powerful. Venerating the gods was a means of obtaining temporary benefits in this life or a good rebirth, but the gods could offer no help with the ultimate goal of Buddhist practice: liberation from the cycle of birth and death. The gods themselves, being unenlightened beings, were stuck in the cycle of birth and death. To attain liberation one must seek refuge in the Buddha, the teacher of gods and men.

Works Cited:

Carpenter, Amber. Indian Buddhist Philosophy. Routledge, 2014. Print.  

Siderits, Mark. Buddhism as Philosophy: An Introduction. Ashgate, 2007. Print.

Why Veganism isn't Obligatory

I’ve written a bit about animal ethics on this blog, and most of it has been about animal rights. The sorts of rights that seem most plausible to ascribe to animals are negative rights, such as the right not to be unjustly harmed. If animals have rights, they probably have positive rights as well. For example, if you’re cruising around on your new boat with your dog and you see that your dog has fallen overboard, it seems like your dog has the right to be rescued by you, assuming that you’re capable of rescuing him without endangering yourself or others. You’re obligated to rescue your dog, assuming that he has rights that can generate obligations for you. So, animals can have both positive and negative rights.

An interesting question that arises when we consider animal rights is whether they generate obligations for us to become vegans. I take veganism to be a set of dietary habits that excludes almost all animal products. On my view, vegans can consume animal products in very specific situations. For example, if a vegan comes across a deer that has just died by being hit by a car, it is permissible for her to consume the deer and use its parts for whatever purposes she sees fit. However, circumstances like the dead deer are very rare, and it’s doubtful that most vegans could survive off of those sorts of animal products, so most vegans will not consume any animal products. Vegans who seek out opportunities to consume animal products like roadkill are called “freegans”. Other instances of vegan-friendly animal products are things found in the trash and things that have been stolen.

Most vegans would agree that purchasing chickens for your backyard and consuming the eggs they produce is impermissible. If they think animals have rights, then having backyard chickens might seem akin to owning slaves. In both instances, beings with rights are considered the property of people. So, owning chickens is a form of slavery according to this view. I want to challenge this view by using some arguments developed in a recent paper, “In Defense of Backyard Chickens,” by Bob Fischer and Josh Milburn.

Imagine that a person, call her Alice, studied chicken cognition and psychology such that she understood the best way to house chickens according to their needs. She builds the right sort of housing for chickens, she purchases high quality, nutritious feed for her chickens, and she makes sure they are safe from predators and the elements. Alice really cares about animal welfare, so her project is done in the interests of the chickens she plans to buy. She sees herself as giving the chickens a life they deserve in an environment best suited for their welfare. She then goes and buys some chickens and lets them loose in their new home. She tends to their needs and makes sure they’re comfortable. She then collects the eggs they lay and consumes them in various ways. I don’t think Alice has done anything wrong, but some vegans may disagree.

To some vegans, it may seem like Alice has built slave quarters for her new egg-producing slaves. However, it seems to me that Alice has liberated the chickens in a way that’s analogous to an abolitionist buying the freedom of an enslaved human. If it’s permissible to buy the freedom of a slave by paying into an unjust institution like the slave-trade, then it seems like the same holds for buying the freedom of chickens. But, you may object, the chickens aren’t free! They’re still enclosed in Alice’s backyard, unable to leave. If you bought the freedom of a human and then put them in a backyard enclosure, we could hardly praise you as a liberator! Well, in the case of humans it’s wrong to force them into backyard enclosures. But that’s because the interests of humans are such that we make humans worse off by forcing them into enclosures in backyards. Humans aren’t the sorts of beings that need restrictions on their movement to guarantee their well-being. If anything, humans need free movement to have a high level of well-being. One of the reasons human slavery is so bad is because of the restriction on the freedom of movement of humans. Humans enjoy being able to go where they want; preventing that is to harm them.

When it comes to chickens, restricting their movement is actually in their interests. If we bought chickens and then just let them loose, they would probably die pretty quickly. Depending on where you release them and what time of the year it is, they could die of exposure or from predation. They could also walk into traffic and die, or they might end up starving because they won’t be able to find adequate nutrition. So, it seems like chicken interests don’t include complete freedom of movement, but rather some level of confinement for protection. Obviously not the level of confinement found in factory farms or even smaller commercial farms, but something that keeps predators and the elements out. So, the analogy between confining chickens and confining humans doesn’t hold, because it is in the interests of chickens and not humans to be confined to some extent.

One objection that might arise is that by buying chickens, Alice feeds into an unjust system that will only be perpetuated by her actions. Fair enough I guess, but it seems like the act of purchasing a few chickens is causally impotent with respect to furthering the unjust system of selling chickens for profit. If Alice didn’t buy those chickens, I doubt the store would have felt it, and the industry at large definitely wouldn’t feel it. The chickens probably would’ve been bought by somebody else, anyway, and they probably wouldn’t have been treated nearly as well as if Alice had bought them. But leaving that aside, this seems like a consequentialist objection. However, we’re in the land of the deontic with all of this rights talk, and it seems like chickens have a right to be rescued from their circumstances. So even if Alice somehow feeds into an unjust system by buying her chickens, that badness is outweighed or overridden by the right to rescue that those chickens have. If anything, Alice has an obligation to buy those chickens, given her ability to provide them with the lives to which they are entitled.

Another objection is that by purchasing chickens, Alice is treating them as property. Even if that’s true, it still seems better for the chickens that they are treated like property by Alice than by somebody less interested in their welfare. The chickens may have a right not to be owned, and perhaps Alice’s relationship to them is one of an owner, but it may still be in their interests to be owned by Alice. Their right not to be owned is outweighed by the potential harm they will experience if they’re bought by anybody else. Alice is their best bet. However, it is unclear that Alice is treating them as property. Another way of looking at this is that Alice is buying the freedom of the chickens. They will no longer be the property of others. Instead, they get to live out their lives in the best conditions chickens can have. Now, you might respond by saying that living in Alice’s backyard isn’t true freedom because the chickens’ movement is restricted, but I already dealt with that objection above.

One last objection is that, by obtaining and consuming eggs, Alice illegitimately benefits from an arrangement that is itself permissible. This objection concedes that Alice can keep backyard chickens as long as she tends to their well-being sufficiently. But, the objection goes, Alice is illegitimately benefiting from her chickens. Perhaps the chickens also have a right to raise families, and by consuming their eggs Alice is depriving them of families. However, Alice could allow the chickens to procreate within limits. Obviously they cannot overpopulate the land they inhabit, because that would cause an overall decrease in well-being. In light of these considerations, Alice cannot allow every egg to result in a new chicken, so it seems like she can remove excess eggs from the chickens’ homes.

Maybe the chickens have property rights over their eggs. By taking the eggs, Alice is effectively stealing from her chickens. It isn’t clear to me that animals have property rights, but maybe they do. Even if the chickens own their eggs, it seems like Alice can collect some of them as a form of rent. There is, then, mutual benefit between Alice and the chickens. Alice gives the chickens a place to live and food, and in return Alice gets some of their eggs. The relationship between Alice and her chickens is closer to people renting a place to live and their landlord than it is to a thief and her victims, or squatters and a landowner.

Could the eggs be used for something more noble than as Alice’s food? Maybe, but it still seems permissible for Alice to eat the eggs. Sure, she could donate them or use them to feed other animals, but it seems like a stretch to say that Alice has an obligation not to consume the eggs and instead give them away. Even if it’s better that she gives them away, she’s still allowed to consume them. There are actions that are permissible even if they aren’t optimal, and Alice consuming the eggs seems to qualify.

If I’m right, and Alice is allowed to consume the eggs she collects, then Alice is not obligated to be a vegan. Eggs are animal products and pretty much every vegan would say that you shouldn’t eat them. So, it seems like veganism is not obligatory. Consuming animal products can sometimes be permissible if they’re obtained in the right way.

This post has been heavily influenced by a recent paper by Bob Fischer and Josh Milburn. Their paper articulated a lot of the thoughts I’ve had about veganism and moral obligations better than I could. Pretty much all of the arguments, objections, and responses draw from their paper. I wrote this post to summarize some of their arguments, and to draw attention to their paper. Bob Fischer is my favorite philosopher working on animal ethics. I recommend all of his stuff.

Check out their paper here.
Check out Bob Fischer’s work here.

Śrīharṣa’s Master Argument Against Difference

The Advaita Vedānta tradition is one of the most popular and influential Indian philosophical systems. The best translation of the Sanskrit word advaita is “non-dual.” The thesis of Advaita is that reality is at bottom non-dual, that is, devoid of multiplicity. Advaita recognizes that our everyday experience presents us with a plurality of objects, but maintains that the belief that plurality and difference are fundamental features of the world is mistaken. The ultimate nature of reality is undifferentiated Being. Not being something, but Being itself – Pure Being. The phenomenal world, in which we experience Being as separate beings, is not ultimately real. It is constructed by avidya – ignorance of the true nature of reality. We are beings alienated from Being, and true liberation lies in ending this alienation.

One of the reasons offered by Advaitins for accepting these claims is that they form the most plausible and coherent interpretation of the Upaniṣads – scriptures accepted as being a reliable source of knowledge. But this will hardly convince someone who does not already acknowledge the authority of the Upaniṣads. Here, the strategy of Advaita philosophers has typically been to go on the offensive and argue that the very notion of “difference” or “separateness” is in some sense conceptually incoherent. The arguments for this claim were first formally compiled by the 5th century philosopher Maṇḍana Miśra. Subsequent philosophers in the Advaita tradition further developed, defended, and extended these arguments. In this essay, I will briefly go over the master argument against difference presented by the 12th century philosopher Śrīharṣa in his magnum opus, Khaṇḍanakhaṇḍakhādya (“The Sweets of Refutation”).

Śrīharṣa begins his inquiry by asking what “difference” really is. He identifies four possible answers to this question:

  1. Difference is the intrinsic nature of objects.
  2. Difference consists in the presence of distinct properties in objects.
  3. Difference consists in the mutual non-existence of properties in objects.
  4. Difference is a special property of objects.

Śrīharṣa considers each option in turn, and finds them all untenable.

The claim that difference is the intrinsic nature of objects is rejected because difference is necessarily relational. To state that bare difference is the nature of X is to utter something meaningless. At best, we can say that difference-from-Y is the intrinsic nature of X. However, this raises another problem. To describe the intrinsic nature of X is to describe what X is in and of itself, independent of anything else.  In contrast, the very notion “difference-from-Y” indicates a dependence on Y. We have arrived at a contradiction: if X has an intrinsic nature that is parasitic on the nature of Y, then it follows that X doesn’t really have an intrinsic nature.

Śrīharṣa offers a subsidiary argument to drive home the implausibility of the view that difference is the intrinsic nature of an object. Consider a blue object and a yellow object. An object that is blue by its very nature does not depend on the yellowness of the other object. Even if all the yellow objects in the world were to disappear, the blue object would still be blue. But this could not be the case if difference-from-yellow-objects was the intrinsic nature of the blue object.

According to the second definition of difference, X is different from Y if distinct properties are present in X and Y. X and Y can be any two objects, but we may use Śrīharṣa’s example: A pot is different from a cloth because the property potness is present in the pot, while the property clothness is present in the cloth. But this raises an obvious question: what makes potness different from clothness? The answer cannot be (1) – that is, that difference is the very nature of potness and clothness – because that view has already been refuted. If we answered the question with (2), then we would end up saying that what makes potness different from clothness is that potness itself possesses a property that clothness does not. We would have to maintain that potness-ness is present in potness, and clothness-ness is present in clothness. Even if we ignore the oddness of properties being present in other properties, we can raise another question: what makes potness-ness different from clothness-ness? This series of questions could go on indefinitely, generating an infinite regress. Hence, this option is unsatisfactory.

Śrīharṣa considers the possibility that difference consists in the mutual non-existence of properties in objects. According to this view, what makes a pot different from a cloth is the absence of potness in the cloth, and the absence of clothness in the pot. But much like before, this raises the question of what makes potness different from clothness. It cannot be (1) or (2), because they have already been refuted. If we bring up (3) here, we would have to say that what makes potness different from clothness is the absence of potness-ness in clothness, and the absence of clothness-ness in potness. At this point, much like before, we could ask what makes potness-ness different from clothness-ness. Once again, we are left with an infinite regress.

This brings us to the final option: that difference is a special property of an object. According to this view, difference-from-Y is itself an attribute of X. But if difference-from-Y is an attribute of X, then difference-from-Y is not X itself, but something different from X. This entitles us to ask what makes the attribute difference-from-Y different from X. It cannot be (1), (2) or (3), so it must be (4). This would mean that it must be another attribute that makes difference-from-Y different from X. But then this attribute itself would be different from both X and difference-from-Y, which simply raises the same question. Once more, we see an infinite regress looming.
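The regress that sinks options (2) through (4) has a common shape: every attempt to ground the difference between two items appeals to a further pair of items whose difference needs grounding in turn. The following toy sketch is purely illustrative (the mechanical “-ness” naming stands in for Śrīharṣa’s potness, potness-ness, and so on):

```python
def ground_of_difference(x, y, depth=0, max_depth=5):
    """Try to ground the difference between x and y in distinct properties
    (Śrīharṣa's option 2). Each step only produces a new pair of items
    whose difference needs grounding in turn -- an infinite regress."""
    if depth == max_depth:
        # No finite ground is ever reached; we cut the regress off by hand.
        return f"unresolved after {depth} steps: what differentiates {x} from {y}?"
    # "The pot differs from the cloth because potness is in one and clothness
    # in the other" -- but now potness and clothness must themselves differ.
    return ground_of_difference(x + "-ness", y + "-ness", depth + 1, max_depth)

print(ground_of_difference("pot", "cloth"))
```

However deep we allow the recursion to go, the question is merely postponed, never answered, which is exactly the structure of Śrīharṣa’s complaint.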

Having rejected all four possibilities, Śrīharṣa concludes that the very notion of difference is incoherent, and so it cannot be a true feature of the world. A typical reaction to Śrīharṣa’s arguments is that there must be something wrong with them – indeed, something obviously wrong with them. But it isn’t necessarily straightforward to identify what exactly it is. One could question whether Śrīharṣa really has considered all the possible options, whether some of these options really lead to an infinite regress, and finally, whether an infinite regress is something to be worried about. Philosophers from rival traditions adopted all these approaches. Śrīharṣa and his successors anticipated and responded to a number of these objections. They also modified and extended the arguments against difference to more specific cases, to show that differentiating cause and effect, moments in time, and subject and object, were all impossible. For a thorough examination of Śrīharṣa’s critique of difference, Phyllis Granoff’s Philosophy and Argument in Late Vedānta is a good place to start.  

Why Verificationism isn't Self-Refuting

In the early-to-mid twentieth century, there was a philosophical movement stemming from Austria that aimed to do away with metaphysics. The movement has come to be called Logical Positivism or Logical Empiricism, and it is widely seen as a discredited research program in philosophy (among other fields). One of the most often repeated reasons that Logical Empiricism is untenable is that the criterion the positivists employed to demarcate the meaningful from the meaningless is, when applied to itself, meaningless, and therefore self-refuting. In this post, I aim to show that the positivists’ criterion does not result in self-refutation.

Doing away with metaphysics is a rather ambiguous aim. One can take it to mean that we ought to rid universities of metaphysicians, encourage people to cease writing and publishing books and papers on the topic, and adjust our natural language such that it does not commit us to metaphysical claims. Another method of doing away with metaphysics is by discrediting it as an area of study. Logical Positivists saw the former interpretation of their aim as an eventual outgrowth of the latter interpretation. The positivists generally took their immediate goal to be discrediting metaphysics as a field of study, and probably hoped that the former goal of removing metaphysics from the academy would follow.

Discrediting metaphysics can be a difficult task. The positivists’ strategy was to target the language used in expressing metaphysical theses. If the language that metaphysicians employed was only apparently meaningful, but underneath the surface it was cognitively meaningless, then the language of metaphysics would consist of meaningless utterances. Cognitive meaning consists of a statement being truth-apt, or having truth conditions. If a statement isn’t truth-apt, then it is cognitively meaningless, but it can serve other linguistic functions besides assertion (e.g. ordering somebody to do something isn’t truth-apt, but it has a linguistic function).

If metaphysics is a discourse that purports to be in the business of assertion, yet it consists entirely of cognitively meaningless statements, then it is a failure as a field of study. But how did the positivists aim to demonstrate that metaphysics is a cognitively meaningless enterprise? The answer is by providing a criterion to demarcate cognitively meaningful statements from cognitively meaningless statements.

The positivists were enamored with Hume’s fork, which is the distinction between relations of ideas and matters of fact, or, in Kant’s terminology, the analytic and the synthetic. The distinction was applied to all cognitively meaningful statements. So, for any cognitively meaningful statement, it is necessarily the case that it is either analytic or synthetic (but not both). The positivists took the criterion of analyticity to be a statement’s negation entailing a contradiction. Anything whose negation does not entail a contradiction would be synthetic. Analytic statements, for the positivists, were not about extra-linguistic reality, but instead were about concepts and definitions (and maybe rules). Any claim about extra-linguistic reality was synthetic, and any synthetic claim was about extra-linguistic reality.

Synthetic statements were taken to be cognitively meaningful if and only if they could be empirically confirmed. The only other cognitively meaningful statements for the positivists were analytic statements and contradictions. This is an informal statement of the verificationist criterion of meaningfulness. Verificationism was the way the positivists discredited metaphysics as a cognitively meaningless discipline. If metaphysics consisted of synthetic statements that could not be empirically confirmed (e.g. claims about the nature of possible worlds), then metaphysics consisted of cognitively meaningless statements. In short, the positivists took a non-cognitivist interpretation of the language used in metaphysics.

Conventional wisdom says that verificationism, when applied to itself, results in self-refutation, which means that the positivists’ project is an utter failure. But why does it result in self-refutation? One reason is that it is either analytic or synthetic, but it doesn’t appear to be analytic, so it must be synthetic. But if the verificationist criterion is synthetic, then it must be empirically confirmable. Unfortunately, verificationism is not empirically confirmable, so it is cognitively meaningless. Verificationism, then, is in the same boat with metaphysics.

Fortunately for the positivists, the argument above fails. First off, there are ways to interpret verificationism such that it is subject to empirical confirmation. Verificationism could express a thesis that aims to capture or explicate the ordinary concept of meaning (Surovell 2013). If it aims to capture the ordinary concept of meaning, then it could be confirmed by studying how users of the concept MEANING employ it in discourse. If such concept users employ the concept in the way the verificationist criterion says they do, then it is confirmed. So, given that understanding of verificationism, it is cognitively meaningful. If verificationism instead aims to explicate the ordinary concept of meaning, then it would be allowed more leeway when it deviates from standard usage of the ordinary concept, in light of its advantages within a comprehensive theory (Surovell 2013). Verificationism construed as an explication of the ordinary concept of meaning, then, would be subject to empirical confirmation if the overall theory to which it contributes is confirmed.

Secondly, if one takes the position traditionally attributed to Carnap, then one can say that the verificationist criterion is not internal to a language, but external. It is a recommendation to use language in a particular way that admits of only empirically confirmable, analytic, and contradictory statements. Recommendations are not truth-apt, yet they serve important linguistic functions. So, verificationism may be construed non-cognitively, as a recommendation motivated by pragmatic reasons. There’s nothing self-refuting about that.  

Lastly, one could take verificationism to be internal to a language, in Carnap’s sense, and analytic. However, the criterion would not aim to capture the ordinary notion of meaning, but instead it would be a replacement of that notion. Carnap appears to endorse this way of construing verificationism in the following passage,

“It would be advisable to avoid the terms ‘meaningful’ and ‘meaningless’ in this and in similar discussions . . . and to replace them with an expression of the form “a . . . sentence of L”; expressions of this form will then refer to a specified language and will contain at the place ‘. . .’ an adjective which indicates the methodological character of the sentence, e.g. whether or not that sentence (and its negation) is verifiable or completely or incompletely confirmable or completely or incompletely testable and the like, according to what is intended by ‘meaningful’” (Carnap 1936).

Rather than documenting the way ordinary users of language deploy the concept MEANING, Carnap appears to be proposing a replacement for the ordinary concept of meaning. The statement of verificationism is internal to the language in which expressions of meaning are replaced with “a . . . sentence of L” where ‘. . .’ is an adjective that indicates whether or not the sentence is verifiable, and thus is analytic in that language. The motivation for adopting verificationism thus construed would then be dependent on the theoretical and pragmatic advantages of using that language.

So, verificationism can be construed as synthetic, analytic, or cognitively meaningless. It could be considered a recommendation to use language in a certain way, motivated by pragmatic (or other) reasons; this makes it cognitively meaningless but linguistically useful, and hence not self-refuting. Or, it could be considered a conventional definition that aims to capture or explicate the ordinary concept of meaning. It would then be verifiable, because it could be confirmed by an empirical investigation into the way people use the ordinary notion of meaning, or by its overall theoretical merits. Lastly, it could be internal to a language, and thus analytic, but not an attempt at capturing the ordinary notion of meaning. Instead, it would be a replacement that serves a particular function within a particular language that is itself chosen for pragmatic (non-cognitive) reasons. On any of these construals, verificationism is not self-refuting.

Works Cited:

Carnap, Rudolf. "Testability and Meaning - Continued." Philosophy of Science. 1936. Web.

Surovell, Jonathan. "Carnap’s Response to the Charge that Verificationism is Self-Undermining." 2013. Web.

A Problem for the New Consequentialism

In a previous post, I outlined a non-deontic form of consequentialism that was supposed to avoid what I called the extension problem. The extension problem plagues deontic consequentialism, which is the view that the rightness, wrongness, permissibility, and impermissibility of actions are determined by their consequences. So, a simple hedonistic act utilitarian will say that there is one categorically binding duty, and that is to maximize pleasure when we act. But such a view suffers from intuitively compelling counterexamples. So it seems like hedonistic act utilitarianism gets the extension of our deontic concepts wrong.

Non-deontic consequentialism is designed to avoid the extension problem, because it defers to how those concepts are applied by a society at a given time. By doing so, the theory allows the extensions of our deontic concepts to pick out what our society takes them to be, which seems to preserve our intuitions about particular cases, like the drifter being killed by a surgeon for his organs. Hedonistic act utilitarianism requires that, if the surgeon is in an epistemic situation where he can rule out negative consequences, and he knows that he can use these organs to save five patients, then he is duty-bound to kill the drifter and harvest the organs. Non-deontic consequentialism avoids this because your typical person who is not a thoroughly committed act utilitarian would not agree that the extension of DUTY covers the surgeon’s organ harvesting endeavor.

An alternative that avoids the extension problem is scalar utilitarianism, which does without deontic concepts like RIGHT and WRONG. Instead, we judge actions as better or worse than available alternatives. The problem with this view is that it just seems obvious that it is wrong to torture puppies for fun. But a scalar utilitarian cannot give an adequate account of what makes that act wrong, so she must explain why it seems so obvious to say that it is wrong to torture puppies, even though it’s false.

Setting aside both of these forms of consequentialism, I want to discuss the non-deontic consequentialism I outlined in my other post. On the view I described, the rightness and wrongness, along with other deontic properties, of actions are a function of the social conventions that obtain at a given time in a given society. The consequentialism comes in at the level of critiquing and improving those social conventions.

Moral progress occurs when we adopt social conventions that are better by consequentialist standards. So, for instance, it used to be a social convention in the United States that we could have property rights over other human beings, and transfer those rights for currency. Those conventions are no longer in place in the United States, and at the time they were, they could have been critiqued by consequentialist standards. Those conventions were not better than available alternatives at the time, so it would have been better not to have the institution of chattel slavery. But these facts about betterness do not determine what is right or wrong. Rather, they should guide efforts to improve social conventions, and thereby change the extensions of our deontic concepts.

This seems all well and good, but I am a bit worried. This view entails that social conventions have normative force, no matter what. So, just because something is a social convention, we thereby have at least some moral reason to abide by it. Take slavery again; such an institution was once enshrined in many social conventions. Does it follow that at the time, everybody had at least some moral reason to abide by the conventions that said we ought to return escaped slaves to their so-called owners? It seems to me that slavery is and always was wrong. There was never a time at which it was right to own another human being. I think that the basis of my concern is that deontic judgments, especially when applied to important things like slavery, are not indexed to times and places. The fact that a human being was sold in a marketplace in 1790s Virginia does not change the deontic status of the situation. What exactly is the morally relevant difference between that time period and today? Why is it wrong now to sell another human being but it was not in 1790s Virginia?

One potential response to my worries is to point out that I’m making these judgments from a particular time period when the extension of our deontic concepts rules out slavery being permissible. So, perhaps I find the entailment of this theory appalling because my intuitions are shaped by the extension of the deontic concepts I use. Since 1790s Virginia, we have undergone moral progress, and now it is wrong to own slaves because of the shift in social conventions. It could even be that according to our deontic concepts’ extensions now, it was wrong in the 1790s to buy and sell slaves.

I think these considerations certainly make my concerns less worrisome. But I’m experiencing a residual anxiety. It still seems counterintuitive to say that, if we had grown up in 1790s Virginia, our claims about the rightness and wrongness of slavery would be flipped. We would have an inverted moral spectrum when it comes to deontic judgments about slavery. That is what I find counterintuitive. The theory was developed explicitly to address the extension problem, which was that deontic consequentialists seem to get the extensions of our deontic concepts wrong. The reason I think that they get those extensions wrong is that their theories entail counterintuitive results. They end up having to bite a lot of bullets, such as the organ harvesting surgeon. But if non-deontic consequentialism also generates counterintuitive entailments, like slavery being permissible in 1790s Virginia for people at that time, then is it any better than its deontic consequentialist competitors?

Buddhist Apoha Nominalism

The Problem of Universals is one of the oldest subjects of debate in Indian philosophy. Realists about universals believe that universals exist in addition to concrete particulars, while nominalists deny the existence of universals. The Nyāya and Mīmāṃsā schools were vocal defenders of realism. Nyāya philosophers believed in universals for a number of reasons:

  • Universals explain how different objects share common characteristics. Cow A and Cow B differ from each other in various ways, and yet we recognize that they’re both cows. The Nyāya explanation for this is that what Cow A and Cow B have in common is the universal “cowness” that inheres in both.
  • Universals fix the meanings of words. The word “cow” doesn’t just refer to a particular cow, but cows in general. How can a word refer to many different objects at once? The Nyāya solution is that the word “cow” refers to a particular qualified by the universal cowness, which is present in all individual cows.
  • Universals are a solution to the Problem of Induction, first raised by the Cārvāka empiricists. Nyāya philosophers viewed the laws of nature as relations between universals. Our knowledge of these universals and the relations between them justifies inductive generalizations, and consequently, inferences such as the presence of fire from the presence of smoke.

Buddhists were the best-known nominalists in the Indian philosophical tradition. The Buddhist hostility towards universals is perhaps best expressed by Paṇḍita Aśoka (9th century): “One can clearly see five fingers in one’s own hand. One who commits himself to a sixth general entity fingerhood, side by side with the five fingers, might as well postulate horns on top of his head.”¹

In this post, I will briefly go over how Buddhists responded to the first two reasons for believing in universals provided by the Nyāya school. The Buddhist defense of induction will have to be the subject of a separate essay.

The form of nominalism Buddhists advocated is called apoha, the Sanskrit word for “exclusion.” The first precise statement of apoha nominalism can be found in the works of Dignāga (6th century). Dignāga claimed that the word “cow” simply means “not non-cow.” Since there is obviously no universal “not cow-ness” present in every object that is not a cow, this semantic view doesn’t commit us to the existence of universals. Every cow is a unique particular distinct from all other objects. We simply overlook the mutual differences between cows and group them together based on how they’re different from non-cows.  Thus, it’s not because cows share something in common that we call them by the same name. Rather, we think all cows share something in common because we have learned to call them by the same name.

There are some objections that immediately spring to mind, and Nyāya and Mīmāṃsā philosophers brought them up repeatedly in their criticisms of apoha nominalism. First, how does saying that “cow” means “not non-cow” provide a solution to the problem of universals? “Not non-cow” involves a double negation, so to say “cow” means “not non-cow” is just to say “cow” means “cow.” This leads us right back to where we started, and just as before, it seems that we need to posit a universal cowness. Second, how can we focus on cows’ common differences from non-cows unless we already know how to tell what a cow is in the first place? Once again, we seem to have gone in a circle, and apoha seems to presuppose precisely what it was supposed to explain.

Dignāga’s successors responded to the first objection by drawing distinctions between different kinds of negation. Consider the statement: “This is not impolite.” Now, at first glance it might seem like this just translates to “This is polite,” because of the double negation involved in “not impolite.” But this is not necessarily the case. The statement could be about something to which the very category of politeness does not apply, in which case “not impolite” is distinct from “polite.” Thus, “not non-cow” can mean something genuinely different from “cow.”
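The distinction can be made vivid with a toy model. The following Python sketch (my own illustration; nothing like it appears in the Buddhist texts) adds a third value for things to which the category of politeness does not apply, and shows that “not impolite” picks out a strictly larger set than “polite”:

```python
# Toy three-valued model of the politeness example (illustrative only).
# "n/a" marks things to which the category of politeness does not apply.
values = {"polite", "impolite", "n/a"}

def is_polite(v):
    return v == "polite"

def is_not_impolite(v):
    # "not impolite" merely excludes "impolite"; it does not assert politeness
    return v != "impolite"

polite_things = {v for v in values if is_polite(v)}
not_impolite_things = {v for v in values if is_not_impolite(v)}

assert polite_things == {"polite"}
assert not_impolite_things == {"polite", "n/a"}
assert polite_things != not_impolite_things  # the double negation does not collapse
```

On this reading, “not non-cow” can likewise exclude without affirming, which is why the apoha theorist denies that the double negation simply cancels out.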

To understand how Buddhists responded to the second charge of circularity, it helps to look at another Buddhist view. Buddhists were mereological reductionists: they did not believe that wholes were anything over and above their parts. So, a table, for instance, is nothing more than its parts arranged table-wise. The “table” is just a conceptual fiction: a convenient designator we use because of our shared interests and social conventions. It is conceivable, for instance, that someone who has never seen or heard of tables before will not see a table, just pieces of wood put together in seemingly random fashion. The idea that the table is ultimately real arises when we project our interests on to the world. How is any of this relevant to the question of universals? Buddhist philosophers argued that something similar goes on when we fall under the impression that all cows share a common cowness. We overlook the differences between individual cows because they satisfy some of our desires – for example, the desire for milk – that non-cows don’t. We then project our interests on to the world, mistakenly concluding that cowness is a real thing.

This may not seem like a very satisfactory response. It just pushes the problem back a step. How do all these particulars satisfy the same desire if they don’t share something in common? In this case, it seems like the cows really do share something: the ability to satisfy our desire for milk. Dharmakīrti (7th century) responded to this by using the example of fever-reducing herbs. He pointed out that there are many different herbs that reduce fevers. But it would be foolish to conclude from this that there exists a universal “fever-reducing-ness.” Each of these herbs is different, and they don’t reduce fevers in the same way, or use the same mechanisms to do so. We group them together under a single category only because of our subjective interest in reducing fevers. Dharmakīrti’s claim is that the same is true of everything. Each particular serves our interests in a manner that’s utterly distinct from everything else in the world. And so once again, there is no need to posit universals.

But there are still some lingering worries here. While we may accept that in the case of the herbs there is no universal fever-reducing-ness, does the same response work for simple substances such as elementary particles? Assuming for the sake of argument that an electron is an elementary particle, surely all electrons share something in common. Doesn’t the ability to bring about similar effects require a shared capacity – in this case, the same set of causal powers? One possible response to this line of argument, formulated by the philosopher Kamalaśīla (8th century), is to adopt what we would recognize as a Humean view of causation. Kamalaśīla rejected the notion of causal powers entirely, and like Hume, stated that there is nothing more to causation than constant conjunctions of events. Once again, talk of “causal powers” is just a convenient way of speaking about certain correlations that we never fail to observe.

This is obviously a very brief sketch of apoha nominalism. There is much more to say, particularly on the subtle differences between different versions of apoha defended by different Buddhist philosophers. This is a good place to start for further reading.

References

[1] From the translation in Apoha: Buddhist Nominalism and Human Cognition, edited by Mark Siderits, Tom Tillemans and Arindam Chakrabarti (2011).

Nyāya Substance Dualism

In an earlier post, I went over an argument for the existence of God that was formulated by philosophers in the Nyāya tradition. Here my aim is to provide a brief summary of some Nyāya arguments for substance dualism, the view that mental and physical substances are radically distinct.

The categories of substance and quality were fundamental to Nyāya metaphysics. A substance is the concrete substratum in which qualities inhere. An apple, for instance, is a substance, and redness is a quality that inheres in it. Substances can be complex and made up of parts (like an apple) or simple and indivisible (like an atom).

Nyāya held that in addition to physical substances, there are non-physical ones. Our individual soul – that is, our Self – is a non-physical substance. Like atoms, individual souls are simple and indivisible, and hence eternal (since destroying an object is the same as breaking it up into its constituent parts, and simple substances do not have any constituent parts). Consciousness, and different conscious states like desires and memories, are qualities that inhere in the substantial Self.

The primary philosophical adversaries of Nyāya belonged to two different camps. The first was Cārvāka, which claimed that only physical substances exist, that the mind does not exist apart from the body, and that the self is reducible to the totality of the body and all its functions. The other was Buddhism, which rejects physicalism but denies the existence of the substantial Self. Buddhism replaces the idea of the Self with a stream of momentary causally connected mental states. Nyāya was engaged in a protracted series of debates with both Cārvāka and Buddhism. Versions of the arguments I summarize in this essay were developed and defended by Nyāya thinkers such as Vātsyāyana (5th century), Uddyotakara (7th century) and Udayana (10th century), among others.

Against Physicalism

Nyāya came up with a number of arguments against physicalism. The one I focus on here has interesting similarities to arguments found in contemporary debates within the philosophy of mind. It can be stated like this¹:

(P1) All bodily qualities are either externally perceptible or imperceptible.

(P2) No phenomenal qualities are externally perceptible or imperceptible.

(C) Therefore, no phenomenal qualities are bodily qualities.
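For readers who want to see the validity checked mechanically, here is a minimal rendering in Lean (the predicate names are my own shorthand, not from the Nyāya texts):

```lean
-- Q ranges over qualities; the predicates encode the argument's terms.
example (Q : Type) (Bodily Phen ExtPerc Imperc : Q → Prop)
    (p1 : ∀ q, Bodily q → ExtPerc q ∨ Imperc q)   -- (P1)
    (p2 : ∀ q, Phen q → ¬(ExtPerc q ∨ Imperc q))  -- (P2)
    : ∀ q, Phen q → ¬Bodily q := by               -- (C)
  intro q hPhen hBodily
  exact p2 q hPhen (p1 q hBodily)
```

The proof is just two applications of the premises, which is what deductive validity amounts to here: any counterexample must target P1 or P2, not the inference itself.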

The argument is deductively valid, so let us examine the premises. As the term suggests, externally perceptible bodily qualities are features of the body that can be directly perceived by external agents. Color is an example of an externally perceptible quality. Everyone who can see me can see that the color of my body is brown. An imperceptible quality is a feature of the body that cannot be directly perceived, but can be inferred through observation and analysis. Weight was a common example used in Nyāya texts. You cannot directly perceive my weight, but if I stand on a weighing scale, you can know my weight by looking at the number displayed by the scale. P1 states that all bodily qualities are exhausted by these two categories.

Let us move on to P2. Phenomenal qualities are the features of conscious experience: the subjective, first-person what-it-is-likeness of having an experience. Experiences of color, pleasure, pain, desire, and memory are all examples of phenomenal qualities. P2 draws on the intuition that phenomenal qualities are essentially private.

To say that phenomenal qualities are not externally perceptible is to say that I cannot immediately know what it is like for you to have an experience. I have direct access to externally perceptible qualities of your body like color, but I don’t have direct access to your phenomenal qualities. I may be able to infer from your behavior that you are in pain, but I don’t experience your pain in the immediate, first-person manner that you do. The contemporary American philosopher Thomas Nagel made a similar point in his classic paper What Is It Like to Be a Bat? We may be able to observe how bats behave, and how their organs, brains and nervous systems work, but we can’t know what it feels like, from the inside, to be a bat. Only a bat knows what it is like to be a bat.

If phenomenal qualities aren’t externally perceptible, perhaps they are imperceptible qualities like weight. But this is extremely implausible. Phenomenal qualities are not externally perceptible, but they’re clearly internally perceptible. The whole point is that I have direct perceptive access to phenomenal qualities – my conscious experiences are given to me in a basic and immediate fashion. Even if I don’t know that my experiences are veridical, I always know what the features of my own experience are. Thus, phenomenal qualities are not imperceptible.

Since phenomenal qualities are neither externally perceptible nor imperceptible, they are not bodily qualities. If physicalism is the thesis that only physical substances and their qualities exist, and the above argument is sound, we must conclude that physicalism is false.

Against No-Self Theory

The above argument by itself does not get us to the kind of substance dualism that Nyāya favored. Buddhists, after all, are anti-physicalists, but they do not believe that the Self is an enduring substance that persists through time. Instead, Buddhists view a person as nothing more than a series of sequential causally connected momentary mental states. The 18th century Scottish philosopher David Hume, and more recently, the British philosopher Derek Parfit, came to roughly the same conclusion.

Again, the Nyāya canon has several arguments against the Buddhist no-Self theory, but I will touch on just two of them here. The first of these is that the Self is necessary to explain the first person experience of recollection or recognition. The intuition here is something like this: If I notice a tree and recognize that it is the same tree I saw a few days ago, there has to be a subject that was present both during the first experience and the second one for recollection to occur. Similarly, if the desire to eat a banana arises in my mind at t2 because I remember that I previously enjoyed eating a banana at t1, there has to be a subject that existed during the initial experience that occurred at t1, and persisted through time until the recollection at t2. Without the Self – a substance that endures through these different points in time – the experience of memory is a mystery.

The Buddhist response was that causal connections between momentary mental states could explain the phenomenon of memory. If the mental state at t1 is causally connected to the mental state at t2, that’s all that’s needed for the mental state at t2 to recall the experience at t1. The Nyāya rejoinder was that causal connections are not sufficient to account for how a mental event can be experienced as a memory. When I recognize a tree I saw a few days ago, it isn’t just that an image of the previously perceived tree pops into my mind. Rather, my experience is of the form: “This tree that I see now is the same tree I saw a few days ago.” In other words, my present experience of seeing the tree involves my recognition of the previous experience as belonging to myself. Similarly, my current desire to eat a banana is based on my recognition of the previous enjoyable experience of eating a banana as belonging to myself. One person does not experience the memory of another, and in much the same way, one mental state cannot remember the content of another. So a single entity that persists through time must exist.

The second argument for the Self takes for granted what we might call the unity of perception. Our perceptions aren’t a chaotic, disjointed bundle, despite the fact that they arise through different sense organs. There’s a certain unity and coherence to them. In particular, Nyāya philosophers drew attention to mental events that are characterized by cross-modal recognition. An example would be: “The table that I see now is the same table I am touching.” We have experiences that arise through different channels (in this case, my eye and my hand), but there must be something that ties these experiences together and synthesizes them to give rise to a unified cognitive event. In other words, the Buddhist no-Self theory might be able to explain the independent experiences of sight and touch, but for the object of both experiences to be recognized as one and the same, there must be something else to which both experiences belong, and which integrates the experiences to give rise to the unified perception of the object. Again, it seems we must admit the existence of the Self.

Needless to say, all these arguments were (and remain) controversial. The debates between Buddhist and Nyāya philosophers got extremely complex over time. They involved increasingly fine-grained analyses of the phenomenology of recollection/recognition, and increasingly technical discussions on the metaphysics of causation. Similar debates took place between other orthodox Indian schools of thought that believed in the Self (Mīmāṃsā, Vedānta, etc.) and their Buddhist no-Self rivals. A good place to start for further reading on this subject would be the collection of essays in Hindu and Buddhist Ideas in Dialogue: Self and No-Self.

Notes

[1] The argument I’ve presented here is based on Kisor Kumar Chakrabarti’s formulation in Classical Indian Philosophy of Mind: The Nyāya Dualist Tradition.

A New Consequentialism

Consequentialism is a family of theories that locates the right-making or good-making features of actions in their consequences. For the sake of simplicity, let’s work with a very basic consequentialist view: we ought to maximize the good. The good is identified with happiness. So, we ought to maximize happiness with our actions.

The problem with this view is that it says the right thing to do, what we ought to do, is maximize happiness. However, intuitively, there are situations where maximizing happiness is not what we ought to do. For instance, nobody but the most committed act utilitarian would say that it’s ok to kill a homeless person to supply his organs to five needy recipients, even if nobody would ever find out.

So, this simple consequentialism fails to give a satisfying analysis of deontic concepts, like RIGHT and WRONG. In other words, it gives the wrong application conditions for RIGHT and WRONG, because it entails that certain actions which fall within the extension of WRONG actually fall within the extension of RIGHT.

What could we do to revise our simple consequentialism? Well, we could try not giving an analysis of deontic concepts at all. We could become scalar utilitarians: people who think actions are simply ranked on a scale from best to worst. Maybe moral judgments that involve deontic concepts are just wrongheaded. We could do without concepts like RIGHT and WRONG, and instead talk only about better or worse actions: actions we have more or less reason to perform.

This just isn’t satisfying, though. Clearly torturing children for fun isn’t just worse than not torturing them for fun, it’s wrong. We ought not to torture children for fun. There’s nothing wrongheaded about that moral judgment. So, we need to give an account of deontic concepts if we want a theory that captures what we do when we engage in moral discourse and deliberation.

Here is what I take to be the best way to deal with this problem. If we try to give a consequentialist analysis of deontic concepts, we get the extensions of those concepts wrong. If we try to avoid giving an analysis, we exclude a large portion of our moral discourse from our theory. So, we should analyze deontic concepts as conventions based on contingent social arrangements. We should still employ deontic concepts in moral judgment, and they play an indispensable role in our moral lives. But they do not reflect some fundamental structure of the moral world; rather, they reflect contingent social arrangements.

The role that consequentialism can play in this theory is as a means by which we can critique these contingent social arrangements. So, we could give consequentialist critiques of the ways in which deontic concepts are deployed in specific classes of moral judgments. For instance, if the concept RIGHT once had within its extension returning escaped slaves to their so-called owners, then that deontic concept could be revised according to a consequentialist critique of the institution of slavery. Our deontic moral judgments, judgments of right and wrong, permissibility and impermissibility, are ultimately subject to a consequentialist evaluation if the need arises.

Is this just rule utilitarianism? I don’t think so. Typically, rule utilitarians think we ought to obey a certain idealized set of rules which pass the consequentialist test of goodness-maximization. What I’m proposing is that we work with the rules we already have, and revise as the need arises, rather than reason according to an idealized set of good-maximizing rules. Besides, a rule utilitarian analysis of deontic concepts will probably fall victim to the extension problem I raised above against our simple consequentialist analysis.

Check out Brian McElwee’s paper on consequentialism for a similar account of non-deontic consequentialism, on which I based this post.

A Nyāya-Vaiśeṣika Argument for the Existence of God

Historical Context

The different philosophical traditions in classical Indian thought are usually categorized under the labels of orthodox and heterodox. The orthodox traditions accepted the scriptural authority of the Vedas, while the heterodox ones such as Buddhism and Jainism did not. Nyāya and Vaiśeṣika were initially two different orthodox schools. Nyāya was mostly concerned with logic, reasoning and epistemology. Vaiśeṣika focused on metaphysics and identifying the different kinds of substances that ultimately exist. By the eleventh century, these two traditions had merged into a single school, which came to be known simply as Nyāya-Vaiśeṣika (NV henceforth). Apart from a few academic philosophers, the NV tradition is basically extinct today. Historically, however, they were extremely influential and made a number of important philosophical contributions.

Of all the theistic systems in India, NV had the greatest confidence in the scope of natural theology. They came up with a number of arguments for the existence of Īśvara (“the Lord”), and were engaged in a series of polemical debates with other thinkers, their primary adversaries usually being Buddhists. I will go over their most well known argument for theism in this essay.

What the Argument is Not

Before I lay out the argument, I want to make a few preliminary comments on what the argument is not, since this is often a source of confusion.

The argument is not like the popular Kalām cosmological argument, which states that everything that begins to exist must have a cause, and that if you trace the chain of causes you eventually get to an uncaused cause that explains the beginning of the universe. Indeed, the NV system holds that a number of entities are eternal and uncreated. These include the atoms of different elements, time, space, universals, individual souls, and of course, God.

The argument also does not belong to the family of arguments from contingency, which conclude that there is a necessarily existent being that explains why anything at all exists. The NV thinkers were not committed to the view that everything that exists has an explanation for its existence. Finally, the argument is not like familiar teleological arguments that draw on observations of biological complexity to infer that an intelligent designer exists.

That said, the NV argument does bear some resemblance to all of the above arguments. It is therefore best understood as a hybrid cosmological-teleological argument. The argument points out that certain kinds of things require an intelligent creator that has the attributes traditionally assigned to God.

Overview of the Argument

The argument can be stated as follows¹:

(P1) Everything that is an effect has an intelligent maker.

(P2) The first product is an effect.

(C)  Therefore, the first product has an intelligent maker.
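As a quick sanity check (my own formalization, not the NV phrasing), the inference is a single step of universal instantiation plus modus ponens, as a Lean sketch makes plain:

```lean
-- Obj ranges over objects; firstProduct names the dyad.
example (Obj : Type) (Effect HasIntelligentMaker : Obj → Prop)
    (firstProduct : Obj)
    (p1 : ∀ x, Effect x → HasIntelligentMaker x)  -- (P1)
    (p2 : Effect firstProduct)                    -- (P2)
    : HasIntelligentMaker firstProduct :=         -- (C)
  p1 firstProduct p2
```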

The argument as spelled out here is a little different from the way the NV philosophers usually framed it, primarily because they had a more elaborate way of laying out syllogisms. But that need not concern us. The important point is that the argument is valid – if the premises are true, the conclusion does follow. But what are we to make of the premises?

Some terminological clarification is in order before we can assess the premises. By an effect, defenders of the argument refer to a composite object – i.e., an object made of parts. Buildings, rocks, mountains, human bodies are all examples of effects. Recall that NV philosophers were atomists, and since atoms are indivisible and indestructible, they do not count as effects.

The first product refers to the simplest kind of effect that can be further broken down into atoms. In the NV system, dyads – imperceptible aggregates of two atoms – were seen as the first product. But again, that need not concern us. All we need to know is that the first product is the smallest unit that is itself further divisible. We can now move on to scrutinize the premises.

Support for the Premises

P2 is necessarily true, since the first product is defined as the simplest kind of effect. Things get interesting when we consider the first premise. P1 states that every effect has an intelligent maker, where an intelligent maker is defined as an agent who:

(i) Has knowledge of the components that make up the effect;

(ii) Desires to bring about the effect; and

(iii) Wills to do so.

The obvious question then is: why believe that every effect has an intelligent maker?

The support offered for P1 is inductive. NV philosophers defend P1 by pointing out that we have a very large number of examples that confirm it. The classic example is that of a pot. We observe that pots have an intelligent maker: the potter, who is aware of the material out of which the pot is made (the clay), desires to make the pot, and wills to do so. Atoms are deliberately excluded from P1 since they aren’t effects, and hence cannot be counted as counterexamples. Given this, defenders of the argument claim that the numerous confirming instances (as in the case of the pot/potter) entitle us to accept P1 as a general principle.

Responding to Objections

Philosophers in the NV tradition were aware that the argument was extremely controversial, and came up with a number of interesting responses to common objections. I will go over three of them here. 

Objection 1: Counterexamples to P1

The most common objection is that there are obvious counterexamples to the first premise. Rocks, mountains, plants – these are all made of parts, and yet, don’t have a maker. Thus, P1 is false.

The NV response is to say that this objection begs the question against the theist. The mere fact that we don’t immediately observe a maker in these cases does not establish that no maker was involved. For the maker could, after all, be spatially or temporally remote² from the effect.

NV philosophers press the point by insisting that if direct observation of the cause were necessary, then even ordinary inferences would be defeated. For instance, we wouldn’t be able to infer the presence of fire from smoke if the fire wasn’t immediately observable. But of course, the fire could simply be a long distance away. Similarly, if we happen to come across a pot, we wouldn’t suspend judgement about whether it was made by a potter simply because we didn’t directly observe the pot being made. The potter could, after all, be in a different town, or even dead. In other words, this objection proves too much, since it would render unjustified the everyday inferences we all rely on.

Objection 2: The Possibility of Counterexamples to P1

At this point, we might be willing to concede that we can’t rule out the existence of a maker for things like rocks and mountains. However, since the maker isn’t directly observed, the theist can’t be sure that a potential counterexample doesn’t exist either. It may be true that we have observed several instances of effects that have makers, but the possibility that there exists a counterexample means that P1 is at the very least unjustified, if not shown to be false.

Once again, the NV response is that the objection proves too much. The mere possibility of a counterexample is not reason enough to give up on the first premise. Consider, again, the example of smoke and fire. The mere possibility that there may have, at some time in the distant past, or in a faraway land, been an occurrence of smoke without fire does not give us enough reason to reject general fire-from-smoke type inferences. Unless we are willing to give up on induction entirely, there is no reason to reject P1.

The skeptic is also accused of another inconsistency at this point. Why does the skeptic not doubt that material things have material causes? If someone who is skeptical of P1 came across an object they had never seen before, they probably would not doubt that the object had been made out of pre-existing matter. And yet, the support for the belief that material objects have material causes is also inductive. The skeptic must provide some principled reason for rejecting P1 while continuing to believe in material causes, and that reason must not collapse into the first objection, which has already been refuted. Since the skeptic has not done this, they have failed to show that we must not accept P1.

Objection 3: The Gap

Many arguments for theism face what is sometimes called “the gap” problem. In other words, even if these arguments establish the existence of an intelligent maker, there is no reason to think this creator has any of the attributes traditionally assigned to God. A skeptic may point out that in all the cases of intelligent makers we have observed, the makers were embodied agents. The makers were not omniscient, uncreated or eternal. So there is no reason to suppose that the argument, even if successful, gets us to God. At best, it can establish the existence of some kind of intelligent maker, but any further claims about the omniscience or eternality of the maker would not be justified, since these properties are not observed in any of the cases we discussed.

Predictably, the NV response is that the criterion for inference being proposed as part of the objection is too strong, and would defeat many of our everyday inferences. In most inferences we make, we go beyond the general cases, and can justifiably infer special characteristics depending on the context. To go back to the commonly used fire-and-smoke example, if we observe smoke rising from a mountain, we don’t merely infer that there is fire. Rather, given the specific context, we infer that there is fire that has the property of being on the mountain. In other words, it isn’t fire-in-general that is inferred, it is fire-on-the-mountain. Similarly, based on the specific context, we can conclude that the maker of the first product has certain characteristics.

Since the maker exists prior to the first product, it must be uncreated. It cannot have a body, since bodies are made of parts, and this would simply introduce a regress that would have to be terminated by a creator that is not made of parts. Thus, the maker must be disembodied and simple. Since it is simple, it cannot be destroyed by being broken down into its constituent parts, and hence must be eternal. Since it has knowledge of all the fundamental entities and how to combine them, it must be omniscient. Finally, simplicity favors a single maker over multiple agents. The intelligent maker thus has many of the attributes of the God of traditional theism.

Conclusion

The argument, if successful, does get us to a God-like being. P1 is the controversial premise, and as we have seen, NV philosophers respond to objections by essentially shifting the burden of proof on to the skeptic. This can seem like trickery, and indeed, that’s how the influential 11th century Buddhist philosopher Ratnakīrti characterized it in his work Refutation of Arguments Establishing Īśvara, which is arguably the most thorough critique of the Nyāya-Vaiśeṣika argument. Either way, it is at least not obvious that the first premise can be easily rejected, so the skeptic must do some work to justify rejecting it. I may go over Ratnakīrti’s criticisms in a future essay.

References

[1] The argument as I’ve presented it here is roughly based on Kisor Kumar Chakrabarti’s formulation in Classical Indian Philosophy of Mind: The Nyāya Dualist Tradition.

[2] The terminology I’m using is based on Parimal Patil’s translation of the original Sanskrit terms in Against a Hindu God: Buddhist Philosophy of Religion in India.