A form of Cartesianism still pervades both philosophy and common sense. The idea that we can understand a proposition without believing it is almost a dogma in contemporary thought about belief-formation. Let’s call the view that we can understand a proposition without forming a belief about it the Cartesian Model of Belief-Fixation. In direct contrast, the Spinozan Model of Belief-Fixation says that when we understand a proposition, we automatically believe it.
It just seems so obvious that I can understand the proposition that the Earth is flat without believing that the Earth is flat. The Cartesian Model captures at least a decent portion of our common sense conception of the belief-formation process. However, there is experimental evidence that tells against the Cartesian Model and counts in favor of the Spinozan Model. I will provide some links to papers that explain the anti-Cartesian experimental evidence at length at the end of this post.
One form of experimental evidence against the Cartesian Model comes from the effects of cognitive load on belief-formation. On the Spinozan Model, believing a proposition is the output of an automatic, subpersonal cognitive system, whereas disbelieving a proposition requires cognitive effort on the believer’s part. Since believing and disbelieving are outputs of different cognitive processes, cognitive load should affect them differently: it should impair effortful disbelief while leaving automatic belief-formation untouched. That asymmetry is exactly what we find in the literature.
The upshot of the Spinozan Model is that we cannot avoid believing propositions we understand. We cannot understand a proposition, suspend judgment while we evaluate the evidence, and only then form a belief. The Cartesian Model captures this intuitively attractive picture of our doxastic processes very well: on the Cartesian Model, we can interrupt the belief-formation process after we understand a proposition but before a belief forms. On the Spinozan Model, by contrast, understanding and belief cannot be pulled apart.
What sorts of implications does the Spinozan Model have? Well, consider epistemology. If we cannot evaluate the evidence for, or reasons to believe, a proposition prior to believing it, then the basing relation seems to be in trouble. We may still be able to base our beliefs on our evidence in some cases, such as perception, since perceptual beliefs are the automatic outputs of a cognitive system connected to our perceptual systems in a way that plausibly constitutes something like a basing relation between perceptual experience and belief. When we go higher-order, though, the basing relation requires that we evaluate our reasons for belief prior to forming beliefs, and we cannot do this if the Spinozan Model is true. We automatically believe what we understand, so our beliefs are not necessarily based on our available reasons or evidence. Another epistemic worry comes from constitutive norms of belief. If there are constitutive norms of belief that require things like believing only for what seem to the believer to be good reasons, then the Spinozan Model runs roughshod over those norms.
Things aren’t completely bleak for the Spinozan epistemologist, though. We can still shed our beliefs through a process of doxastic deliberation. So our beliefs can be sensitive to our available evidence or reasons, but only once we have already formed them and they come into contact with the rest of our web of beliefs. We can, through cognitive effort, come to disbelieve things. However, that process will be open to cognitive load effects, among other things, and cognitive load is present in many parts of our day-to-day lives; just think of a time when you were slightly distracted by something while trying to accomplish a task. So the process of disbelieving something is not necessarily easy. But the ability to shed our beliefs opens the door to substantive epistemic theorizing within a Spinozan worldview. So all is not lost.
The Spinozan Model also has moral and political implications. For example, let’s consider a Millean Harm Principle for free speech: the speech of others should be restricted if and only if restricting it prevents harm to others. The Harm Principle needs to be understood epistemically, that is, in terms of what people reasonably believe will prevent harm to others. So, if it is reasonable to believe that a person’s speech will harm somebody, then that person’s speech should be restricted. The question of who gets to restrict that person’s speech is a difficult one, but perhaps we can assume that it is the state, provided it is a legitimate authority. Now let’s unpack the kind of harm at play here. I won’t pretend to give a complete analysis of the sort of harm at play in this Harm Principle, but I can gesture at it with an example. People in the anti-vaccination movement spread, through their speech, various conspiracy theories and other forms of misinformation that lead people who would otherwise have vaccinated their children not to do so. The children sometimes contract diseases that would have been easily prevented with vaccines. Those diseases at least sometimes cause harm to those children. So, the speech of at least some anti-vaccination advocates leads, at least sometimes, to at least some children being harmed. I take this to be a paradigm case where it is a serious question whether we should restrict the speech of such advocates.
Now let’s bring in the Spinozan Model. If the Spinozan Model is true, then when anti-vaccination advocates post misinformation on Facebook (for example), people who read it will automatically believe it. Since those people understand those posts, they believe them. Now, such beliefs will persist in the mental systems of people who either avoid or are unaware of information that counters the anti-vaccination narrative. Some of those people will probably have children, and some of those people with children will probably not vaccinate them. The fact that it is so easy to cause other people to form beliefs with harmful downstream effects should give us pause. Perhaps, assuming that some form of the Harm Principle is true, there is a good case to be made that we should restrict certain people’s speech about certain topics. The case is only strengthened when we become Spinozans about belief-fixation.
Another thing the Spinozan has something to say about is propaganda. If the Spinozan Model is true, then we are quite susceptible to it: when cognitive load is induced in us, we become especially likely to retain the beliefs we automatically form about the propositions we understand. News programs, for example, can induce cognitive load through news tickers at the bottom of the screen, constant news alert sounds, and various graphics and effects moving around the screen, all while the news is being read out to viewers. Those paying close attention to their screens are thereby placed under cognitive load, which makes disbelieving what they automatically believe especially difficult. So we end up retaining a lot of the beliefs we form while watching the evening news. Whether this is a problem depends on the quality of the information being spread through the news outlet, but if that outlet is in the habit of putting out propaganda, then things are pretty bad.
There are surely other implications of the Spinozan Model of belief-fixation, but I’ll rest here. For those who find the model attractive, there are clearly tons of research topics ripe for the picking. For those who find the model unattractive, defending the Cartesian Model by trying to explain the experimental evidence within that framework is always an option.