Seemings Zombies

Let’s assume that seemings are sui generis propositional attitudes that have a truthlike feel. On this view, seemings are mental states distinct from beliefs and other propositional attitudes. It at least seems conceivable to me that there could be a being that has many of the same sorts of mental states that we have except for seemings. I’ll call this being a seemings zombie.

The seemings zombie never has mental states where a proposition is presented to it as true in the sense that it has a truthlike feel. Would such a being engage in philosophical theorizing if presented with the opportunity? I’m not entirely sure whether the seemings zombie would have the right sort of motivation to engage in philosophizing. If we need seemings or something similar to them to motivate philosophical theorizing, then seemings zombies won’t be motivated to do it.

But do we need seemings to motivate philosophizing? I think we might need them if philosophizing includes some sort of commitment to a particular view. What could motivate us to adopt a particular view in philosophy besides the fact that that view seems true to us? I guess we could be motivated by the wealth and fame that comes along with being a professional philosopher, but I’m skeptical.

Maybe we don’t need to adopt a particular view to philosophize. In that case we could say that seemings zombies can philosophize without anything seeming true to them. They could be curious about conceptual connections or entailments of theories articulated by the great thinkers, and that could be sufficient to move them to philosophize. I’m not sure whether or not this would qualify as philosophizing in the sense many of us are acquainted with. Even people whose careers consist of the study of a historical figure’s intellectual works seem to commit themselves to a particular view about that figure. Kant interpreters have views about what Kant thought or argued for, and my guess is those views seem true to those interpreters.

The seemings zombies might still be able to philosophize, though. Maybe they would end up as skeptics, looking down on all of us doing philosophy motivated by seemings. We seemings havers end up being motivated by mental states whose connection to the subject matter they move us to take stances on is tenuous at best. The seemings zombies would then adopt skeptical attitudes towards our philosophical views. But I’m still worried, because skeptics like to give us arguments for their views about knowledge, and my guess is that a lot of sincere skeptics are motivated by the fact that skepticism seems true to them. I could just be naive, though; there may be skeptics who remain uncommitted to any philosophical view, including their skepticism. I’m just not sure how that’s supposed to work.

One reaction you might have to all of this is to think that seemings zombies are incoherent or not even prima facie conceivable. That may be true, but it doesn’t seem that way to me.

Mental Incorrigibility and Higher Order Seemings

Suppose that the phenomenal view of seemings is true. So, for it to seem to S that P, S must have a propositional attitude towards P that comes with a truthlike feel. Now suppose that we are not infallible when it comes to our own mental states. We cannot be absolutely certain that we are in a given mental state. So, we can make mistakes when we judge whether or not it seems to us that P.

Now put it all together. In cases where S judges that it seems to her that P but her judgment goes wrong, what is going on? Did it actually seem to her that P, or did she mistakenly judge that it did? If it’s the former, then it is unclear to me how S’s judgment that it seems to her that P could go wrong at all, since seeming states on the phenomenal view seem to be the sorts of mental states we should be aware of when we experience them. If it’s the latter, then it is unclear whether higher order seemings can solve our problem.

If the subject really is experiencing a seeming state when she judges that it seems to her that P, then there has to be some sort of luck going on that disconnects the seeming state from her judgment such that she does not know that it seems to her that P. Maybe she’s very distracted when she focuses her awareness onto her seeming state to form her judgment, and that generates the discrepancy. I’m not really sure how plausible such a proposal would ultimately be. If, instead, the subject is not actually in a seeming state, then we need to explain what is going on when she mistakenly judges that she is in one. One possibility is that there are higher order seemings. Such seemings take first order seemings as their contents. On this view, it could seem to us that it seems that P is the case.

The idea of higher order seemings repulses me, but it could be true. Or, in a more reductionist spirit, we could say that higher order seemings are just a form of introspective awareness of our first order seemings. But I am worried that such a proposal would reintroduce the original problem linked to fallibility. If I can mistakenly judge that it seems to me that it seems to me that P, then what is going on with that higher order (introspective) seeming? The issue seems to come back to bite us in the ass. But it might do that on any proposal about higher order seemings, assuming we have accepted that we are not infallible mental state detectors. Maybe we just need to accept a regress of seemings, or maybe we should stop talking about them. Like always, I’ll just throw my hands up in the air and get distracted by a different issue rather than come up with a concrete solution.

Costs, Benefits, and the Value of Philosophical Goods

Philosophy is a very diverse field in which practitioners employ different dialectical and rhetorical techniques to advance their views and critique those of their opponents. But despite the heterogeneity, there seems to me to be a prevailing attitude among contemporary philosophers towards the overarching method by which we choose philosophical theories. We are all supposed to acknowledge that there are no knockdown arguments in philosophy. Philosophical theories are not the sorts of beasts that can be vanquished with a quick deductive argument, mostly because there is no set of rules that we can use to determine whether a theory has been refuted. Any proposed set of rules will be open to challenge by those who doubt them, and there will be no obvious way to determine who wins that fight.

So, the process by which we compare and choose among philosophical theories cannot be guided by which theories can be refuted by knockdown arguments. Instead, we seem to be engaged in a form of cost-benefit analysis when we do philosophy. We look at a theory and consider whether its overall value outweighs that of its competitors, and then we adopt that theory rather than the others. One way of spelling this process out is in terms of reflective equilibrium: we consider the parts of the theories we are comparing and the intuitions we have about the subject matter that the theories are about, and then we weigh those parts and intuitions against each other. Once we reach some sort of state of equilibrium among our intuitions and the parts that compose the theory we’re considering adopting, we can be justified in believing that theory.

Reflective equilibrium seems to be the metaphilosopher’s dream. It avoids the problems that plague the knockdown argument approach to theory selection, and it makes room for some level of reasonable disagreement among practitioners, since not everybody has the same intuitions, and the intuitions shared among colleagues may vary in strength (in both intrapersonal and interpersonal senses). Unfortunately for me, I worry a lot about how reliable our methods are at getting us to the truth, and the process I crudely spelled out above does not strike me as satisfactory.

To be brief, my concern is that we have no clear way of determining the values of the things we are trading when we do a philosophical cost-benefit analysis. In other cases of cost-benefit analysis, it seems obvious to me that we can make satisfactory judgments in light of the values of the goods we’re trading. If I buy a candy bar at the store on my way to work, I can (at least on reflection) determine that certain considerations clearly count in favor of purchasing the candy bar and others clearly count against it. But when I weigh intuitions against parts of theories, and parts of theories against each other, I begin to lose my grasp on what the exchange rate is. How do I know when to trade off an intuition that really impresses itself upon me for a theory with simpler parts? Exactly how intense must an intuition be before it becomes practically non-negotiable in a philosophical cost-benefit analysis? Questions like these throw me for a loop, especially when I’m in a metaphysical realist mood. Perhaps anti-realists will have an easier time coping with this, but those sorts of views never satisfied me: there are parts of philosophy (like some areas in metaphysics) that never really struck me as open to a complete anti-realist analysis, so at least for me global anti-realism is off the table. At the moment, I’m completely puzzled.