The Spinozan Model of Belief-Fixation

A form of Cartesianism still pervades both philosophy and common sense. The idea that we can understand a proposition without believing it is almost a dogma in contemporary thought about belief-formation. Let’s call the view that we can understand a proposition without believing it the Cartesian Model of Belief-Fixation. In direct contrast, the Spinozan Model of Belief-Fixation says that when we understand a proposition, we automatically believe it.

It just seems so obvious that I can understand the proposition that the Earth is flat without believing that the Earth is flat. The Cartesian Model captures at least a decent portion of our common sense conception of the belief-formation process. However, there is experimental evidence that tells against the Cartesian Model and counts in favor of the Spinozan Model.  I will provide some links to papers that explain the anti-Cartesian experimental evidence at length at the end of this post.

One form of experimental evidence against the Cartesian Model comes from the effects of cognitive load on belief-formation. The Spinozan Model takes believing and disbelieving to be outputs of different cognitive processes, so cognitive load should affect them differently, which is exactly what we see in the literature. The basic idea is that, for the Spinozan, believing a proposition is the output of an automatic, subpersonal cognitive system, whereas disbelieving a proposition requires cognitive effort on the part of the believer. So, cognitive load will affect disbelief in ways it cannot affect belief, since belief-formation is a subpersonal, automatic process.

The upshot of the Spinozan Model is that we cannot avoid believing propositions we understand. We cannot understand a proposition, suspend belief while we evaluate the evidence, and only then form a belief about that proposition. The Cartesian Model captures this intuitively attractive picture of our doxastic processes very well: on the Cartesian Model, the belief-formation process can be stopped after we understand a proposition but before a belief forms. On the Spinozan Model, by contrast, we cannot detach understanding from belief.

What sorts of implications does the Spinozan Model have? Well, consider epistemology. We do not have the ability to evaluate the evidence for, or reasons to believe, a proposition prior to believing it, so the basing relation seems to be in trouble. We may still be able to base our beliefs on our evidence in some cases, such as perception, since perceptual beliefs are the automatic outputs of a cognitive system connected to our perceptual systems in a way that probably constitutes something resembling a basing relation between our perceptual experiences and the beliefs they produce. When we go higher-order, however, we seem to be able to evaluate our reasons for belief before forming beliefs, and that is what the basing relation requires in this domain. But we cannot do this if the Spinozan Model is true. We automatically believe what we understand, so we do not necessarily base our beliefs on our available reasons or evidence. Another epistemic worry comes from constitutive norms of belief. If there are constitutive norms of belief that require things like believing for what seem to the believer to be good reasons, then the Spinozan Model runs roughshod over those norms.

Things aren’t completely bleak for the Spinozan epistemologist, though. We can still shed our beliefs through a process of doxastic deliberation. So, our beliefs can be sensitive to our available evidence or reasons, but only once we have already formed them and they come into contact with the rest of our web of beliefs. We can, through cognitive effort, come to disbelieve things. However, the process of disbelieving will be open to cognitive load effects, among other things, and cognitive load is present in many parts of our day-to-day lives; just think of a time when you were slightly distracted by something while trying to accomplish a task. So the process of disbelieving something is not necessarily easy. But the ability to shed our beliefs opens the door to substantive epistemic theorizing within a Spinozan worldview. So all is not lost.

The Spinozan Model also has moral and political implications. For example, let’s consider a Millean Harm Principle for free speech: the speech of others should be restricted if and only if doing so prevents harm to others. The Harm Principle needs to be understood epistemically, so in terms of what people reasonably believe will prevent harm to others. So, if it is reasonable to believe that a person’s speech will harm somebody, then that person’s speech should be restricted. The question of who gets to restrict that person’s speech is a difficult one, but perhaps we can assume that it is the state, provided the state is a legitimate authority. Now let’s unpack the kind of harm at play here. I won’t pretend to give a complete analysis of the sort of harm at issue in this Harm Principle, but I can gesture at it with an example. People in the anti-vaccination movement spread, through their speech, various conspiracy theories and other forms of misinformation that lead people who would otherwise have vaccinated their children not to do so. Those children sometimes contract diseases that would have been easily prevented with vaccines, and those diseases at least sometimes cause them harm. So, the speech of at least some anti-vaccination advocates leads, at least sometimes, to at least some children being harmed. I take this to be a paradigm case in which it is a serious question whether we should restrict the speech of such advocates.

Now let’s bring in the Spinozan Model. If the Spinozan Model is true, then when anti-vaccination advocates post misinformation on Facebook (for example), people who read it will automatically believe it. Since those people understand those posts, they believe them. Now, such beliefs will persist in the mental systems of people who either avoid or are unaware of information that counters the anti-vaccination narrative. Some of those people will probably have children, and some of those people with children will probably not vaccinate them. The fact that it is so easy to cause other people to form beliefs with harmful downstream effects should give us pause. Perhaps, assuming that some form of the Harm Principle is true, there is a good case to be made that we should restrict certain people’s speech about certain topics. The case is only strengthened when we become Spinozans about belief-fixation.

The Spinozan Model also gives us something to say about propaganda. If the model is true, then we are quite susceptible to it: when cognitive load is induced in us, we become especially prone to retaining the beliefs we automatically form from the propositions we understand. For example, news programs can induce cognitive load through news tickers at the bottom of the screen, constant news alert sounds, graphics and effects moving around the screen, and other such things that occur while the news is being read out to listeners and watchers. Those paying close attention to their screens are thus under cognitive load, which makes disbelieving what they automatically believe especially difficult. So, we end up retaining a lot of the beliefs we form while watching the evening news. Whether this is a problem depends on the quality of the information being spread by the news outlet, but if that outlet is in the habit of putting out propaganda, then things are pretty bad.

There are surely other implications of the Spinozan Model of belief-fixation, but I’ll rest here. For those who find the model attractive, there are clearly tons of research topics ripe for the picking. For those who find the model unattractive, defending the Cartesian Model by trying to explain the experimental evidence within that framework is always an option.

Further reading:

How Mental Systems Believe

Thinking is Believing

You Can’t Not Believe Everything You Read


Seemings Zombies

Let’s assume that seemings are sui generis propositional attitudes that have a truthlike feel. On this view, seemings are mental states distinct from beliefs and other propositional attitudes. It at least seems conceivable to me that there could be a being that has many of the same sorts of mental states that we have, except for seemings. I’ll call this being a seemings zombie.

The seemings zombie never has mental states where a proposition is presented to it as true in the sense that it has a truthlike feel. Would such a being engage in philosophical theorizing if presented with the opportunity? I’m not entirely sure whether the seemings zombie would have the right sort of motivation to engage in philosophizing. If we need seemings or something similar to them to motivate philosophical theorizing, then seemings zombies won’t be motivated to do it.

But do we need seemings to motivate philosophizing? I think we might need them if philosophizing includes some sort of commitment to a particular view. What could motivate us to adopt a particular view in philosophy besides the fact that that view seems true to us? I guess we could be motivated by the wealth and fame that comes along with being a professional philosopher, but I’m skeptical.

Maybe we don’t need to adopt a particular view to philosophize. In that case we could say that seemings zombies can philosophize without anything seeming true to them. They could be curious about conceptual connections or entailments of theories articulated by the great thinkers, and that could be sufficient to move them to philosophize. I’m not sure whether or not this would qualify as philosophizing in the sense many of us are acquainted with. Even people whose careers consist of the study of a historical figure’s intellectual works seem to commit themselves to a particular view about that figure. Kant interpreters have views about what Kant thought or argued for, and my guess is those views seem true to those interpreters.

The seemings zombies might still be able to philosophize, though. Maybe they would end up as skeptics, looking down on all of us doing philosophy motivated by seemings. We seemings-havers end up being motivated by mental states whose connection to the subject matter they move us to take stances on is tenuous at best. The seemings zombies would then adopt skeptical attitudes towards our philosophical views. But I’m still worried, because skeptics like to give us arguments for their views about knowledge, and my guess is that a lot of sincere skeptics are motivated by the fact that skepticism seems true to them. I could just be naive, though; there may be skeptics who remain uncommitted to any philosophical view, including their skepticism. I’m just not sure how that’s supposed to work.

One reaction you might have to all of this is to think that seemings zombies are incoherent or not even prima facie conceivable. That may be true, but it doesn’t seem that way to me.


 

Costs, Benefits, and the Value of Philosophical Goods

Philosophy is a very diverse field in which practitioners employ different dialectical and rhetorical techniques to advance their views and critique those of their opponents. But despite the heterogeneity, there seems to me to be a prevailing attitude among contemporary philosophers towards the overarching method by which we choose philosophical theories. We are all supposed to acknowledge that there are no knockdown arguments in philosophy. Philosophical theories are not the sorts of beasts that can be vanquished with a quick deductive argument, mostly because there is no set of rules we can use to determine whether a theory has been refuted. Any proposed set of rules will be open to challenge by those who doubt it, and there will be no obvious way to determine who wins that fight.

So, the process by which we compare and choose among philosophical theories cannot be guided by which theories fall to knockdown arguments; instead, we seem to be engaged in a form of cost-benefit analysis when we do philosophy. We look at a theory, consider whether its overall value outweighs that of its competitors, and then adopt it rather than the others. One way of spelling this process out is in terms of reflective equilibrium: we consider the parts of the theories we are comparing and the intuitions we have about the subject matter the theories concern, and then we weigh those parts and intuitions against each other. Once we reach some sort of equilibrium among our intuitions and the parts that compose the theory we’re considering adopting, we can be justified in believing that theory.

Reflective equilibrium seems to be the metaphilosopher’s dream, since it avoids the problems that plague the knockdown argument approach to theory selection, and it makes room for some level of reasonable disagreement among practitioners, since not everybody has the same intuitions, and the intuitions shared among colleagues may vary in strength (in both intrapersonal and interpersonal senses). Unfortunately for me, I worry a lot about how reliable our methods are at getting us to the truth, and the process I crudely spelled out above does not strike me as satisfactory.

To be brief, my concern is that we have no clear way of determining the values of the things we are trading when we do a philosophical cost-benefit analysis. In other cost-benefit analyses, it seems obvious to me that we can make satisfactory judgments in light of the values of the goods we’re trading. If I buy a candy bar at the store on my way to work, I can (at least on reflection) determine that certain considerations clearly count in favor of purchasing the candy bar and others clearly count against it. But when I weigh intuitions against parts of theories, and parts of theories against each other, I begin to lose my grasp on what the exchange rate is. How do I know when to trade off an intuition that really impresses itself upon me for a theory with simpler parts? Exactly how intense must an intuition be before it becomes practically non-negotiable in a philosophical cost-benefit analysis? Questions like these throw me for a loop, especially when I’m in a metaphysical realist mood. Perhaps anti-realists will have an easier time coping with this, but those sorts of views never satisfied me; there are parts of philosophy (like some areas of metaphysics) that never really struck me as open to a complete anti-realist analysis, so, at least for me, global anti-realism is off the table. At the moment, I’m completely puzzled.

Why Verificationism isn't Self-Refuting

In the early to mid twentieth century, there was a philosophical movement, stemming from Austria, that aimed to do away with metaphysics. The movement has come to be called Logical Positivism or Logical Empiricism, and it is widely seen as a discredited research program in philosophy (among other fields). One often-repeated reason that Logical Empiricism is untenable is that the criterion the positivists employed to demarcate the meaningful from the meaningless is, when applied to itself, meaningless, and therefore self-refuting. In this post, I aim to show that the positivists’ criterion does not result in self-refutation.

Doing away with metaphysics is a rather ambiguous aim. One can take it to mean that we ought to rid universities of metaphysicians, encourage people to stop writing and publishing books and papers on the topic, and adjust our natural language so that it does not commit us to metaphysical claims. Another way of doing away with metaphysics is to discredit it as an area of study. The Logical Positivists saw the former interpretation of their aim as an eventual outgrowth of the latter: they generally took their immediate goal to be discrediting metaphysics as a field of study, and probably hoped that the institutional goal of removing metaphysics from the academy would follow.

Discrediting metaphysics can be a difficult task. The positivists’ strategy was to target the language used to express metaphysical theses. If the language that metaphysicians employed was only apparently meaningful, while underneath the surface it was cognitively meaningless, then the language of metaphysics would consist of meaningless utterances. Cognitive meaning consists in a statement’s being truth-apt, or having truth conditions. If a statement isn’t truth-apt, then it is cognitively meaningless, but it can still serve linguistic functions other than assertion (e.g. ordering somebody to do something isn’t truth-apt, but it has a linguistic function).

If metaphysics is a discourse that purports to be in the business of assertion, yet it consists entirely of cognitively meaningless statements, then it is a failure as a field of study. But how did the positivists aim to demonstrate that metaphysics is a cognitively meaningless enterprise? The answer is by providing a criterion to demarcate cognitively meaningful statements from cognitively meaningless statements.

The positivists were enamored with Hume’s fork, which is the distinction between relations of ideas and matters of fact, or, in Kant’s terminology, the analytic and the synthetic. The distinction was applied to all cognitively meaningful statements. So, for any cognitively meaningful statement, it is necessarily the case that it is either analytic or synthetic (but not both). The positivists took the criterion of analyticity to be a statement’s negation entailing a contradiction. Anything whose negation does not entail a contradiction would be synthetic. Analytic statements, for the positivists, were not about extra-linguistic reality, but instead were about concepts and definitions (and maybe rules). Any claim about extra-linguistic reality was synthetic, and any synthetic claim was about extra-linguistic reality.

Synthetic statements were taken to be cognitively meaningful just in case they could be empirically confirmed. The only other cognitively meaningful statements, for the positivists, were analytic statements and contradictions. This is an informal statement of the verificationist criterion of meaningfulness. Verificationism was the way the positivists discredited metaphysics as a cognitively meaningless discipline: if metaphysics consisted of synthetic statements that could not be empirically confirmed (e.g. statements about the nature of possible worlds), then metaphysics consisted of cognitively meaningless statements. In short, the positivists adopted a non-cognitivist interpretation of the language used in metaphysics.
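
To make the structure of the criterion explicit, here is one rough formalization (my own gloss; the predicate names are just shorthand for the notions above, not the positivists’ official vocabulary):

$$
\mathrm{Meaningful}(S) \;\leftrightarrow\; \mathrm{Analytic}(S) \,\lor\, \mathrm{Contradictory}(S) \,\lor\, \big(\mathrm{Synthetic}(S) \,\land\, \mathrm{Confirmable}(S)\big)
$$

Read left to right: a statement S is cognitively meaningful just in case it is analytic, a contradiction, or a synthetic statement that can be empirically confirmed.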

Conventional wisdom says that verificationism, when applied to itself, results in self-refutation, which would make the positivists’ project an utter failure. But why does it result in self-refutation? The thought is that the criterion itself is either analytic or synthetic; it doesn’t appear to be analytic, so it must be synthetic. But if the verificationist criterion is synthetic, then it must be empirically confirmable, and, unfortunately, it is not, so it is cognitively meaningless. Verificationism, then, is in the same boat as metaphysics.
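
Spelled out as an explicit argument (my own reconstruction, with V standing for the verificationist criterion itself and the contradiction case set aside), the charge runs roughly:

$$
\begin{aligned}
&1.\; \mathrm{Meaningful}(V) \rightarrow \mathrm{Analytic}(V) \lor \big(\mathrm{Synthetic}(V) \land \mathrm{Confirmable}(V)\big) && \text{(the criterion applied to itself)}\\
&2.\; \lnot\,\mathrm{Analytic}(V) && \text{(premise)}\\
&3.\; \lnot\,\mathrm{Confirmable}(V) && \text{(premise)}\\
&4.\; \therefore\; \lnot\,\mathrm{Meaningful}(V) && \text{(from 1–3)}
\end{aligned}
$$

The responses below each target a piece of this argument: the first denies premise 3 (verificationism is empirically confirmable after all), the second grants that the criterion is not truth-apt but denies that this is self-refuting (it is a recommendation, not an assertion), and the third denies premise 2 (the criterion is analytic, as a replacement for the ordinary concept).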

Fortunately for the positivists, the argument above fails. First off, there are ways to interpret verificationism on which it is subject to empirical confirmation. Verificationism could express a thesis that aims to capture or explicate the ordinary concept of meaning (Surovell 2013). If it aims to capture the ordinary concept of meaning, then it could be confirmed by studying how users of the concept MEANING actually employ it in discourse. If those users employ the concept in the way the verificationist criterion describes, then the criterion is confirmed. So, on that understanding, verificationism is cognitively meaningful. If verificationism instead aims to explicate the ordinary concept of meaning, then it is allowed more leeway to deviate from standard usage of the ordinary concept in light of its advantages within a comprehensive theory (Surovell 2013). Verificationism construed as an explication of the ordinary concept of meaning would then be subject to empirical confirmation insofar as the overall theory it contributes to is confirmed.

Secondly, if one takes the position traditionally attributed to Carnap, then one can say that the verificationist criterion is not internal to a language, but external. It is a recommendation to use language in a particular way that admits of only empirically confirmable, analytic, and contradictory statements. Recommendations are not truth-apt, yet they serve important linguistic functions. So, verificationism may be construed non-cognitively, as a recommendation motivated by pragmatic reasons. There’s nothing self-refuting about that.  

Lastly, one could take verificationism to be internal to a language, in Carnap’s sense, and analytic. However, the criterion would not aim to capture the ordinary notion of meaning; instead, it would be a replacement for that notion. Carnap appears to endorse this way of construing verificationism in the following passage:

“It would be advisable to avoid the terms ‘meaningful’ and ‘meaningless’ in this and in similar discussions . . . and to replace them with an expression of the form “a . . . sentence of L”; expressions of this form will then refer to a specified language and will contain at the place ‘. . .’ an adjective which indicates the methodological character of the sentence, e.g. whether or not that sentence (and its negation) is verifiable or completely or incompletely confirmable or completely or incompletely testable and the like, according to what is intended by ‘meaningful’” (Carnap 1936).

Rather than documenting the way ordinary users of language deploy the concept MEANING, Carnap appears to be proposing a replacement for the ordinary concept of meaning. The statement of verificationism is internal to the language in which expressions of meaning are replaced with “a . . . sentence of L”, where ‘. . .’ is an adjective indicating whether or not the sentence is verifiable; so construed, verificationism is analytic in that language. The motivation for adopting verificationism, on this reading, would then depend on the theoretical and pragmatic advantages of using that language.

So, verificationism can be construed as synthetic, analytic, or cognitively meaningless. It could be considered a recommendation to use language in a certain way, motivated by pragmatic (or other) reasons, which makes it cognitively meaningless but linguistically useful, and there is nothing self-refuting about that. Or it could be considered a conventional definition that aims to capture or explicate the ordinary concept of meaning; it would then be verifiable, because it could be confirmed by an empirical investigation into the way people use the ordinary notion of meaning, or by its overall theoretical merits. Lastly, it could be internal to a language, and thus analytic, but not an attempt at capturing the ordinary notion of meaning; instead, it would be a replacement that serves a particular function within a particular language, itself chosen for pragmatic (non-cognitive) reasons. On any of these construals, verificationism is not self-refuting.

Works Cited:

Carnap, Rudolf. "Testability and Meaning - Continued." Philosophy of Science. 1936. Web.

Surovell, Jonathan. "Carnap’s Response to the Charge that Verificationism is Self-Undermining." 2013. Web.

 

A Problem for the New Consequentialism

In a previous post, I outlined a non-deontic form of consequentialism that was supposed to avoid what I called the extension problem. The extension problem plagues deontic consequentialism, which is the view that the rightness, wrongness, permissibility, and impermissibility of actions are determined by their consequences. So, a simple hedonistic act utilitarian will say that there is one categorically binding duty, and that is to maximize pleasure when we act. But such a view suffers from intuitively compelling counterexamples. So it seems like hedonistic act utilitarianism gets the extension of our deontic concepts wrong.

Non-deontic consequentialism is designed to avoid the extension problem, because it defers to how those concepts are actually applied by a society at a given time. By doing so, the theory allows the extensions of our deontic concepts to be whatever our society takes them to be, which seems to preserve our intuitions about particular cases, like the drifter being killed by a surgeon for his organs. Hedonistic act utilitarianism requires that, if the surgeon is in an epistemic situation where he can rule out negative consequences, and he knows that he can use the drifter’s organs to save five patients, then he is duty-bound to kill the drifter and harvest the organs. Non-deontic consequentialism avoids this, because your typical person who is not a thoroughly committed act utilitarian would not agree that the extension of DUTY covers the surgeon’s organ-harvesting endeavor.

An alternative that avoids the extension problem is scalar utilitarianism, which does without deontic concepts like RIGHT and WRONG. Instead, we judge actions as better or worse than the available alternatives. The problem with this view is that it just seems obvious that it is wrong to torture puppies for fun. A scalar utilitarian cannot give an adequate account of what makes that act wrong, so she must explain why it seems so obvious to say that torturing puppies is wrong even though, on her view, that claim is false.

Setting aside both of these forms of consequentialism, I want to discuss the non-deontic consequentialism I outlined in my other post. On the view I described, the rightness and wrongness of actions, along with their other deontic properties, are a function of the social conventions that obtain at a given time in a given society. The consequentialism comes in at the level of critiquing and improving those social conventions.

Moral progress occurs when we adopt social conventions that are better by consequentialist standards. So, for instance, it used to be a social convention in the United States that we could have property rights over other human beings, and transfer those rights for currency. Those conventions are no longer in place in the United States, and at the time they were, they could have been critiqued by consequentialist standards. Those conventions were not better than available alternatives at the time, so it would have been better not to have the institution of chattel slavery. But these facts about betterness do not determine what is right or wrong. Rather, they should guide efforts to improve social conventions, and thereby change the extensions of our deontic concepts.

This all seems well and good, but I am a bit worried. The view entails that social conventions have normative force, no matter what: just because something is a social convention, we thereby have at least some moral reason to abide by it. Take slavery again; such an institution was once enshrined in many social conventions. Does it follow that, at the time, everybody had at least some moral reason to abide by the conventions that said we ought to return escaped slaves to their so-called owners? It seems to me that slavery is and always was wrong. There was never a time at which it was right to own another human being. I think the basis of my concern is that deontic judgments, especially about important things like slavery, are not indexed to times and places. The fact that a human being is sold in a marketplace in 1790s Virginia rather than today does not change the deontic status of the situation. What exactly is the morally relevant difference between that time period and ours? Why is it wrong now to sell another human being, but it was not in 1790s Virginia?

One potential response to my worries is to point out that I’m making these judgments from a particular time period when the extension of our deontic concepts rules out slavery being permissible. So, perhaps I find the entailment of this theory appalling because my intuitions are shaped by the extension of the deontic concepts I use. Since 1790s Virginia, we have undergone moral progress, and now it is wrong to own slaves because of the shift in social conventions. It could even be that according to our deontic concepts’ extensions now, it was wrong in the 1790s to buy and sell slaves.

I think these considerations certainly make my concerns less worrisome. But I’m left with a residual anxiety. It still seems counterintuitive to say that, had we grown up in 1790s Virginia, our claims about the rightness and wrongness of slavery would be flipped. We would have an inverted moral spectrum when it comes to deontic judgments about slavery, and that is what I find counterintuitive. The theory was developed explicitly to address the extension problem, which was that deontic consequentialists seem to get the extensions of our deontic concepts wrong. The reason I think they get those extensions wrong is that their theories entail counterintuitive results; they end up having to bite a lot of bullets, such as the organ-harvesting surgeon. But if non-deontic consequentialism also generates counterintuitive entailments, like slavery being permissible in 1790s Virginia for the people at that time, then is it any better than its deontic consequentialist competitors?