The Spinozan Model of Belief-Fixation

A form of Cartesianism still pervades both philosophy and common sense. The idea that we can understand a proposition without believing it is almost a dogma in contemporary thought about belief-formation. Let’s call the view that we can understand a proposition without believing it the Cartesian Model of Belief-Fixation. In direct contrast, we have the Spinozan Model of Belief-Fixation, which says that when we understand a proposition, we automatically believe it.

It just seems so obvious that I can understand the proposition that the Earth is flat without believing that the Earth is flat. The Cartesian Model captures at least a decent portion of our common-sense conception of the belief-formation process. However, there is experimental evidence that tells against the Cartesian Model and counts in favor of the Spinozan Model. At the end of this post, I will link to papers that lay out this anti-Cartesian experimental evidence at length.

One form of experimental evidence against the Cartesian Model comes from the effects of cognitive load on belief-formation. The Spinozan Model takes believing and disbelieving to be outputs of different cognitive processes, so cognitive load should affect them differently, and that is exactly what we see in the literature. The basic idea is that, for the Spinozan, believing a proposition is the output of an automatic, subpersonal cognitive system, whereas disbelieving a proposition requires cognitive effort on the part of the believer. So cognitive load will impair disbelief in ways it cannot impair belief.

The upshot of the Spinozan Model is that we cannot avoid believing propositions we understand. We cannot understand a proposition, suspend judgment while we evaluate the evidence, and then form a belief about it. That intuitively attractive picture of our doxastic processes is what the Cartesian Model captures so well: on the Cartesian Model, the belief-formation process can be halted after we understand a proposition but before a belief forms. On the Spinozan Model, by contrast, understanding and belief cannot be pulled apart.

What sorts of implications does the Spinozan Model have? Well, consider epistemology. We cannot evaluate the evidence for a proposition, or our reasons to believe it, prior to believing it, so the basing relation seems to be in trouble. We may still be able to base our beliefs on our evidence in some cases, such as perception, since perceptual beliefs are the automatic outputs of a cognitive system connected to our perceptual systems in a way that probably constitutes something resembling a basing relation between our perceptual experiences and our beliefs about them. For higher-order, reflective beliefs, however, the basing relation requires that we evaluate our reasons for belief prior to forming the belief, and we cannot do this if the Spinozan Model is true. We automatically believe what we understand, so we do not necessarily base our beliefs on our available reasons or evidence. Another epistemic worry comes from constitutive norms of belief. If there are constitutive norms of belief that require things like believing for what seem to the believer to be good reasons, then the Spinozan Model runs roughshod over those norms.

Things aren’t completely bleak for the Spinozan epistemologist, though. We can still shed our beliefs through a process of doxastic deliberation. So our beliefs can be sensitive to our available evidence or reasons, but only once we have already formed them and they come into contact with the rest of our web of beliefs. We can, through cognitive effort, disbelieve things. However, the process of disbelieving will be open to cognitive load effects, among other things. Cognitive load is present in many parts of our day-to-day lives; just think of a time when you were slightly distracted by something while trying to accomplish a task. So the process of disbelieving something is not necessarily easy. But the ability to shed our beliefs opens the door to substantive epistemic theorizing within a Spinozan worldview. All is not lost.

The Spinozan Model also has moral and political implications. For example, let’s consider a Millian Harm Principle for free speech: the speech of others should be restricted if and only if restricting it prevents harm to others. The Harm Principle needs to be understood epistemically, that is, in terms of what people reasonably believe will prevent harm to others. So, if it is reasonable to believe that a person’s speech will harm somebody, then that person’s speech should be restricted. The question of who gets to restrict that person’s speech is a difficult one, but perhaps we can assume that it is the state, provided the state is a legitimate authority. Now let’s unpack the kind of harm at play here. I won’t pretend to give a complete analysis of the sort of harm the Harm Principle involves, but I can gesture at it with an example. People in the anti-vaccination movement spread, through their speech, various conspiracy theories and other forms of misinformation that lead people who would otherwise have vaccinated their children not to do so. Those children sometimes contract diseases that vaccines would have easily prevented, and those diseases at least sometimes harm them. So the speech of at least some anti-vaccination advocates leads, at least sometimes, to at least some children being harmed. I take this to be a paradigm case where it is a serious question whether we should restrict the speech of such advocates.

Now let’s bring in the Spinozan Model. If the Spinozan Model is true, then when anti-vaccination advocates post misinformation on Facebook (for example), people who read and understand those posts will automatically believe them. Such beliefs will persist in the mental systems of people who either avoid or are unaware of information that counters the anti-vaccination narrative. Some of those people will probably have children, and some of those people with children will probably not vaccinate them. The fact that it is so easy to cause other people to form beliefs with harmful downstream effects should give us pause. Perhaps, assuming that some form of the Harm Principle is true, there is a good case to be made for restricting certain people’s speech about certain topics. The case is only strengthened when we become Spinozans about belief-fixation.

Another thing the Spinozan has something to say about is propaganda. If the Spinozan Model is true, then we are quite susceptible to it: when something induces cognitive load in us, we become especially prone to retaining beliefs in whatever propositions we understand. News programs, for example, can induce cognitive load with news tickers at the bottom of the screen, constant news-alert sounds, and various graphics and effects moving around the screen while the news is read out. Viewers paying close attention to their screens are subject to those cognitive load effects, which make disbelieving what we automatically believe especially difficult. So we end up retaining many of the beliefs we form while watching the evening news. Whether this is a problem depends on the quality of the information the outlet spreads, but if the outlet is in the habit of putting out propaganda, then things are pretty bad.

There are surely other implications of the Spinozan Model of belief-fixation, but I’ll rest here. For those who find the model attractive, there are clearly tons of research topics ripe for the picking. For those who find the model unattractive, defending the Cartesian Model by trying to explain the experimental evidence within that framework is always an option.

Further reading:

Daniel Gilbert, “How Mental Systems Believe” (American Psychologist, 1991)

Eric Mandelbaum, “Thinking is Believing” (Inquiry, 2014)

Daniel Gilbert, Romin Tafarodi, and Patrick Malone, “You Can’t Not Believe Everything You Read” (Journal of Personality and Social Psychology, 1993)


What I'm Currently Working On

I haven’t uploaded anything to this blog in a while, so I figured I would post a brief overview of what I’ve been thinking about and working on. I should start regularly uploading normal blog posts soon.

My current research is almost entirely based on a theory of belief formation and its implications for epistemology, rationality, and Streumer’s argument that we can’t believe a global normative error theory.

The theory of belief formation that I’m working with is called the Spinozan theory, and it is positioned as an alternative to the Cartesian theory of belief formation. The Spinozan theory says that we automatically form a belief that p whenever we consider that p. This means that the process of belief formation is automatic and outside of our conscious control. The theory has serious implications for several areas, such as rationality and epistemology.

In epistemology, lots of philosophers talk about belief formation in ways that presuppose the Cartesian theory. The Cartesian theory says that the process of belief formation and the process of belief revision are on a par; both are within our conscious control. On this picture, when we form a belief, we base it on considerations like evidence: we weigh the evidence for and against a proposition and then form a belief. However, if the Spinozan theory is true, then this misrepresents how we actually form beliefs. According to the Spinozan, we automatically form a belief whenever we consider a proposition. We may be able to revise our beliefs with conscious effort, but that process requires more mental energy than forming a belief does. If the Spinozan is right, we need to investigate whether epistemology can do without talk of control over belief formation.

The Spinozan theory entails that we believe lots of contradictory things. That runs contrary to our ordinary view of ourselves as relatively rational creatures who do their best not to hold inconsistent beliefs. If any plausible account of rationality requires a great deal of consistency among our beliefs, then we’re pretty screwed. But we might be able to work with a revisionary account of rationality that sees being rational as a constant process of pruning contradictory beliefs from one’s mind in response to counterevidence. The problem with that sort of account, though, is that belief revision is an effortful process that is sensitive to cognitive load effects, whereas belief formation is automatic and will occur whenever one considers a proposition. So we’ll basically be on a rationality treadmill, especially in our current society, where we’re bombarded with things that induce cognitive load.

Another project that I’m going to start working on is applying the Spinozan theory to propaganda. I think that somebody interested in designing very effective propaganda should exploit the Spinozan theory. Knowing that belief formation is automatic, and that it occurs whenever a person considers a proposition, would help one design some pretty effective propaganda, since the beliefs so formed can root themselves in a person’s mental processes and influence behavior over time. Throw in some cognitive-load-inducing effects, and it becomes even more difficult for people to shed their newly formed beliefs.

The last project I’m currently working on is a paper in which I argue against Bart Streumer’s case against believing the error theory. According to Streumer, one cannot believe a global normative error theory, because one would then believe that one has no reason to believe it, which he claims we cannot do. I think that if we work with the Spinozan theory, then this is clearly false, since we automatically form beliefs in propositions we have no reason to believe. My guess is that proponents of Streumer’s view will push back by arguing that they are talking about something different than I am when they use the word “belief”. But I think that the Spinozan theory tracks the non-negotiable features of our ordinary conception of belief closely enough to qualify as an account of belief in the ordinary sense.

For those interested in the Spinozan theory, click this link. I should be regularly uploading posts here soon.


Mental Incorrigibility and Higher Order Seemings

Suppose that the phenomenal view of seemings is true: for it to seem to S that P, S must have a propositional attitude towards P that comes with a truthlike feel. Now suppose that we are not infallible when it comes to our own mental states; we can be wrong about whether we are in a given mental state. So we can make mistakes when we judge whether or not it seems to us that P.

Now put it all together. In cases where S judges that it seems to her that P, but she is mistaken, what is going on? Did it actually seem to her that P, or did she mistakenly judge that it did? If it’s the former, then it is unclear to me how S’s judgment could have gone wrong, since seeming states on the phenomenal view look like the sorts of mental states we should be aware of whenever we experience them. If it’s the latter, then it is unclear whether higher-order seemings can solve our problem.

Take the first horn: if the subject really is experiencing a seeming state and judges that it seems to her that P, then there must be some sort of luck disconnecting the seeming state from her judgment such that she does not know that it seems to her that P. Maybe she’s very distracted when she focuses her awareness on her seeming state to form her judgment, and that generates the discrepancy. I’m not really sure how plausible such a proposal would ultimately be. On the second horn, if the subject is not actually in a seeming state, then we need to explain what is going on when she mistakenly judges that she is in one. One possibility is that there are higher-order seemings, which take first-order seemings as their contents. On this view, it could seem to us that it seems to us that P is the case.

The idea of higher-order seemings repulses me, but it could be true. Or, in a more reductionist spirit, we could say that higher-order seemings are just a form of introspective awareness of our first-order seemings. But I worry that such a proposal would reintroduce the original problem linked to fallibility. If I can mistakenly judge that it seems to me that it seems to me that P, then what is going on with that higher-order (introspective) seeming? The issue comes back to bite us in the ass. But it might do that on any proposal about higher-order seemings, assuming we have accepted that we are not infallible detectors of our own mental states. Maybe we just need to accept a regress of seemings, or maybe we should stop talking about them. As always, I’ll just throw my hands up in the air and get distracted by a different issue rather than come up with a concrete solution.