December 29, 2010

The Life Akratic With Gideon Rosen

I'm posting this discussion here because my blog seems to be a forum conveniens.

DG
Do you think people are inculpable for self-serving, mistaken moral judgments?

Me
I think it depends on why one made the mistake. Do you mean an honest (i.e., trying to do the right thing) mistaken moral judgment that happens to be self-serving? If so, the question seems to be whether the honest mistake was also acceptable and not negligent or the like.

That said, as you know, I don’t think there’s a satisfactory “model” for moral blame. I’m down with the Gideon Rosen stuff (which has a lot in common with the “basic argument” that Strawson outlined in The Stone but does not rely on determinism), which concludes that we should be skeptics about moral responsibility because all allegedly culpable acts presumably stem ultimately from nonculpable ignorance of some sort. Then there are alternatives such as your person-based position, which I haven’t gotten around to really thinking about (I still need to reread that long email you sent a while back and read a handful of articles before I feel comfortable making any claims about it).

Speaking of reading things, I did very well on this paper I wrote for a philosophy seminar. I remember feeling confused upon rereading it, thinking I got mixed up somewhere, maybe got tautological. But it has at least some merit and is relevant to your question. I’d appreciate your thoughts.

DG
I suppose I think the existence of a self-serving tendency in mistaken judgments undermines the idea of an “honest” mistake; don’t you?

I think the tendency of someone’s mistakes to be convenient is a counterexample to the putative principle that mistaken judgments are non-culpable (which I take to be a premise of Rosen’s view, though I am not that well-versed in his views).

Me
On the traditional notion of moral responsibility, if someone reasonably tried to be honest (non-self-serving) in making a moral judgment but ended up getting it wrong in a way that was self-serving, he’s not culpable. Rosen would argue that one isn’t culpable for the self-serving tendency because it ultimately stems from something non-culpable. It’s a straightforward, neat argument.

DG
That’s really more like the “basic argument” from the NYT. Rosen’s position depends on the claim that, to be morally blameworthy, you must correctly judge an action to be wrong and nonetheless undertake it. That loses its force if you can be culpable for adopting a “convenient” wrong moral judgment. Now Rosen would retreat to his weaker notion of skepticism -- he is not saying that moral responsibility is impossible, just that it’s very hard to confidently identify. But I think the “convenience” of a moral judgment can be so apparent that this retreat is not availing.

But yeah, you’re conflating Rosen’s position with the blunter form of skepticism sketched in the “basic argument,” probably because you find that argument very persuasive.

Me
I do find it persuasive, and I’m not sure about the subtlety you’re trying to get at. I take it you’re imagining someone who convinces himself of a certain moral position by either (i) somehow intentionally convincing himself of it because it’s self-serving (culpable on the traditional account), (ii) negligently coming to believe it (culpable, albeit less so, on the traditional account), or (iii) reasonably coming to believe it in good faith, though perhaps unconsciously disposed to believe it due to its self-serving nature (not culpable). Are you simply saying that (i) and (ii) are often easy to identify? If so, so what? This just pushes the problem back a step -- is the person really culpable for intentionally or negligently coming to believe the position?

I guess an underlying issue is how much metacognition we should expect of people when making moral judgments. This is, of course, a question that must be resolved by intuition to avoid infinite regress. But we can come up with an answer (e.g., a “reasonable” amount) and practically judge people.

I imagine most people whose moral judgments are both really off base and really self-serving don’t actually believe the judgments or are very good at something like self-deception (for which they may or may not be culpable on the traditional account).

All this makes me curious about the neuroscience-based “models” of cognition that people are working on, which really mess with concepts such as belief and intention. A lot of this seems to come down to seemingly irresolvable (at least with our current tools!) age-old debates, such as whether akrasia is possible. I’m not sure we can get anywhere worthwhile if we start by assuming one side of these questions.

JB
This is a rather thorny problem; I remember discussing it in college. I think Rosen is persuasive and sophisticated on this issue, and basically lays it out in the right way, although I think I come down on the other side of the result.

None of the forwarded messages seems to include an explanation of what level of subtlety DG thought you were missing. But what seems missing to me is the virtue-ethical (character trait) component of the problem, which you ignore in this response but mention later when you discuss the tendency to self-serving self-deception.

So, assume culpable akrasia. I think it’s much easier to then apply blame to a pervasive, self-serving character trait developed and maintained over time.

For any individual bad act or bad belief or whatever, you can apply the honest-mistake argument. But if these are read as blameworthy merely because they supervene on bad character, well, I don’t think there are very many reasons why you could have a self-serving, self-deceptive bad character over the long term other than akrasia. Which is, by hypothesis, culpable.

I think about this in essentially statistical terms. For an individual bad act there’s only one trial: the probability of doing the bad act as an honest mistake, even without akrasia, is high compared to the probability of doing it as a result of akrasia. For the self-serving character trait there are many trials, and the probability of maintaining the character trait without culpable akrasia falls to zero as the number of trials increases. Thus the self-serving character trait is culpable, even if you are very willing to give people the benefit of the doubt as to honest mistakes (except maybe in children, who’ve had less chance to take responsibility for their character traits over time -- fewer trials, thus less justification for attributing their bad acts to culpable akrasia).
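A minimal formalization of this point (assuming, as a simplification, that each act is an independent trial with the same probability p of being an honest, non-akratic mistake):

```latex
% One trial: an honest mistake is entirely plausible (p close to 1).
% Many trials: the chance that a long-run self-serving pattern
% involves no akrasia at all decays geometrically.
\[
  \Pr(\text{honest throughout } n \text{ acts}) \;=\; p^{\,n}
  \;\longrightarrow\; 0
  \quad \text{as } n \to \infty \qquad (0 \le p < 1).
\]
```

With p = 0.9, say, a single act is an honest mistake nine times out of ten, but a hundred-act pattern is honest throughout with probability on the order of 10^-5; on this picture, children, with fewer trials behind them, sit closer to the single-act end of the curve.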

I thought this point of view was right in college; since then I’ve changed my mind about some pretty foundational issues without necessarily following through all of the implications, but it still seems reasonable to me.

Me
I don’t think DG really spelled it out, but it seems like the position you’re advocating.

You say: “I don’t think there are very many reasons why you could have self-serving self-deceptive bad character over the long term other than akrasia.” Hmm. I was assuming that once the initial self-serving self-deception “kicks in,” the agent doesn’t reflectively maintain or reinforce it with every judgment; rather, it changes the basis of his specific moral deliberations such that in adhering to it he thinks he’s doing the right thing. In other words, I was assuming at most one “instance” of akrasia -- the self-deceived agent is already deceived and therefore no longer akratic. Is this psychologically realistic? I’m not sure, but I think it’s close. I imagine successful self-deception doesn’t involve much metacognition about the fact! The upshot is that it would be misguided to blame someone for “maintaining” a self-serving moral system that he’s simply come to believe (i.e., I’m not sure how realistic your statistical “model” is).

There remains, of course, the question of how likely it is that akrasia is the basis for an agent coming to believe a moral system that happens to be self-serving. To what extent are people really capable of intentionally sculpting their honest moral beliefs so that they will end up serving themselves whenever they subsequently make good faith moral judgments? I guess this is where Rosen raises an eyebrow. I forget what he says about the psychology of akrasia, but I guess he doubts that people have so much intentional control. This is where I start to wonder about the whole model of straightforward intentional decision-making. How do we really form “foundational” moral beliefs/systems? To what extent do we reflect on them when “applying” them? Can neuroscience help us?

Anyway, I think you raise some provocative points, but I’m inclined to side with Rosen, perhaps simply because of the basic difficulties with conceptualizing akrasia (even though I know a discussion of the traditional account of moral blame must assume its possibility!).

JB
Look, I agree with you, DG, and Rosen that it all comes down to your concept of akrasia. I tend to think, for Kantian reasons, that however we construct our concept of free will, it must be constructed in a way that validates moral responsibility -- i.e., validating moral responsibility is a basic criterion that an account of free will has to satisfy. I understand that it’s hard to reconcile culpable akrasia with determinism or other core concepts (this is what the Critique of Pure Reason is about, right? Inasmuch as I understood it). So we could talk about what free will would look like such that it validates akrasia; I think it’s hard not to end up with something that looks rather like religion, but that’s another story.

The other side of the issue is the model of character. Basically the question is your model of “bad character”: is a “bad character” strongly persistent, such that once it kicks in it requires no further culpable actions, or is it not? I think this is actually two issues. First, can you get by without ever considering the issue of whether you are a bad guy and need to think things through? Second, given that you’ve considered the issue, is it realistic to expect you to try to resculpt your character so that you’re not such a bad guy anymore?

I think issue one is pretty easy, and you sort of brush over it in the opposite direction in a really implausible way. It’s not like you can just, you know, develop a trait of consistent self-deception and then never consider it again, unless you live in total isolation. Culture prompts you quite a lot to question your values and think about integrity. I think this is actually basically what a lot of popular culture is for, and why people are interested in it. Hey, sometimes individuals prompt you too, although I’ve come to learn that this is considered rude.

Issue two is harder, but look: I don’t think you really believe that it’s impossible for people to resculpt their character. I think you at most really believe that it’s possible but sort of too expensive for us to reasonably expect more than a small number of people to do it. Note that this isn’t really a logical or metaethical objection: there’s nothing about the concepts of right and wrong or blaming which says that it has to be easy to do the right thing. But many people have the intuition that if a moral system is so harsh that it cannot be satisfied without extraordinary pain or difficulty or effort, then it is not a “moral” system at all (I don’t have this intuition). I don’t know what this issue is called; I think of it as the problem of moral gravity (in the sense of weight; I think I’m thinking of Giles Corey).

Look, neuroscience could tell us that it’s not really possible for people to resculpt their character, but I don’t really expect it to do so.

Another dimension, which I haven’t addressed, arises from moral uncertainty: “right action” and “right character” aren’t known and might not really be knowable for an individual, which might make us suspicious of the idea of restructuring our character, or suspicious of people who restructure their character.

PS, I think I don’t believe that people have foundational moral beliefs or systems. Or rather, I don’t think that people really rely on them very much when making private moral decisions -- I think the function that moral systems serve is more a way of organizing public moral decision-making in society.

Me
Interesting ideas in your first paragraph; seem promising to me. I also more or less agree with your last four paragraphs. But I want to distinguish two things that you seem to mention together in paragraph 4: the requirements of morality and the requirements of blameworthiness. I share your intuition that a correct moral obligation need not be “sufficiently easy” to follow, only possible. But the fact that it’s likely possible (i.e., with the right education, training, effort, etc.) for most sane people with bad characters to restructure their characters doesn’t mean that we are necessarily entitled to blame them for having bad characters; justified blame requires a judgment that someone reasonably should have acted differently. Now that I think about it, this is simply another way of saying that blame depends on how much control and understanding we could reasonably expect of someone (e.g., we'd blame the choleric teenager less than, I don’t know, Lord Russell for the same offense), but the right thing to do doesn’t operate on the same sliding scale.

As for issue one (“can you [realistically] get by without considering the issue of whether you are a bad guy”), I think it comes down to the level at which one’s self-reflection about his moral judgments occurs. For a simplified example, imagine someone who thinks at the levels of (i) what moral “system” (basically, consequentialist or deontological algorithms and heuristics) should I adopt, and (ii) am I reasonably and honestly applying it to a given situation? (I know this isn’t super realistic, but I think anyone who tries to be morally “consistent” necessarily bifurcates his reasoning like this, even if his moral system is incomplete, semi-conscious, and somewhat shifty.) As I touched on in my previous emails, I doubt that much straightforward intentional reasoning is involved in (i); I think it’s largely shaped by temperament and intuition (which, in turn, determine which moral arguments -- essentially intuition pumps -- appeal to a person and thereby shape his moral system). Moreover, people don’t seem to reflect on (i) all that much. So it’s hard for me to see where akrasia is likely to enter the process at the level of (i). Sure, I think it’s inevitable that some self-interest seeps in, and likely with it self-deception, but I doubt whether akrasia is involved in the very act of self-deceiving. Effective self-deception seems to rely on internal mental opacity. And, of course, once someone has truly deceived himself such that he thinks he’s honestly arrived at an acceptable moral system, he is no longer akratic in unreflectively maintaining it.

I see more room for akrasia at the level of (ii). I mean, I can’t help but consider myself akratic in some cases (occasionally eating factory-farm-produced meat, downloading pirated music): I believe I do certain morally bad things because I derive enjoyment from them and am not sufficiently troubled by their badness. But I wonder about other people. And I wonder what neuroscience could tell us about this. I want to reread what Rosen says, because I think this is where the rubber hits the road. I vaguely remember him saying that he doesn’t really perceive himself as akratic, which surprises me. Then again, I’m under the impression -- and I confess to deriving a feeling of superiority from this! -- that most people are much less honestly self-reflective, and more self-deceptive, than I. (Or maybe they’re just better people, though they probably have worse moral systems!)

6 comments:

Alan said...

Haven't checked out this Rosen paper yet.

Alan said...

SG comments, in another forum:

1. Agree w/ JB in thinking that "the right thing to do" need not be easy; cf. my old blog post on probabilistic ought-implies-can. 2. While JB's minimal requirements for free will sound sensible, it is unlikely that they are satisfied by any non-absurd account thereof. 3. I think Strawson's "basic argument" is extremely damaging to any attempt to do what Alan wants to do, which is have an impersonal moral system on which people with "good intentions" (whatever that means) are objectively not blameworthy etc. In general I think one ought to be pessimistic about this project. This leaves a few ways out -- virtue ethics a la JB or subjectivized blame. I've always found virtue ethics both uncongenial and useless: esp. if, as JB says, formalized ethics is primarily useful for public discourse, I would rather have it be rule-based than character-based, as the public are terrible and biased judges of character. This leaves one, as far as I can tell, with an entirely subjective conception of blameworthiness -- or with none at all. I'm not sure these last two positions are different in practice, beyond the semantics of "judging" people vs. "disliking" them.

Alan said...

I respond:

The thing is, society needs something like an akrasia-based conception of moral blame. How else could we effectively have criminal law, etc.? (I mean, your "rule-based," as opposed to "character-based," formalized ethics would still involve judgments of people's character, intentions, etc., right?) I understand that the "basic argument" undermines all of this, but I avoid it because it ultimately relies on determinism/randomness, which we must ignore if we're going to play the blame game at all (or have criminal law). What's interesting about Rosen to me, IIRC (I attended his live lecture twice, but that was in another country and my memory is dying), is that he advocates this "soft" (but still pretty hard in practice) skepticism about moral responsibility that doesn't rely on the "basic argument" (even though it of course has a lot in common with it). What I want to consider is whether Rosen's apparently psychology-based skepticism of akrasia is well-grounded (once again, this all assumes traditional, intuitive, "absurd" free will). Hence JB's and my discussion of character, self-deception, etc.

Alan said...

SG responds:

I'm not convinced that your first statement is true. It seems that one could arrive at something very like the current criminal law (except smarter and more humane!) based purely on the notion of state force as a solution to collective-action problems. The law must be sensitive to intention because otherwise punishment wouldn't be predictable, whereas deterrents should be predictable. In principle the law ought to judge a minimal, "factual" notion of character to the extent that punishment has a rehabilitative or containing element: e.g., those who probably wouldn't repeat-offend should get short terms; this is, as I understand it, the basis for the cold-blood/hot-blood distinction. Nevertheless what matters here is not character per se, but the persistence of certain socially harmful dispositions. All of this is blissfully unaffected by the basic argument.

Alan said...

I respond:

I haven't really thought about this (despite my nominal academic background), but I suppose there's a lot of overlap between the considerations that suggest how akratic a perp was and the considerations that suggest how much punishment is appropriate to deter him and others who are similarly situated (ignoring the practical barriers to sending such specific signals via sentencing). I guess the primary issue with your approach is that akrasia-based blaming is ingrained in most of our psyches, and removing it from the punishment equation would remove the proportionality element that people intuitively care about. For example, it seems that your system would lock up (i) a hot-tempered assaulter who has a hard time controlling himself and didn't plan to come to blows for longer than (ii) his calculating counterpart who premeditated a particular beatdown but doesn't seem inclined to go after anyone again. Presumably your system would base its punishments on the facts that perp (i) is more likely to recidivate and also requires a stronger incentive to keep himself in check. But I think most people would want perp (ii) to at least get a punishment "premium" (if not a greater punishment) for being more of an asshole (i.e., more akratic), regardless of the deterrence considerations. But perhaps this desire, like many that are widely held, is something that would ideally be purged from the criminal law (am I basically talking about retribution here, or is this only part of that issue?). So yeah, I think I more or less agree with you, but once again, I'm trying to work within the framework of our intuitive, seemingly inevitable, akrasia-based conception of moral responsibility. I think this conception is intertwined with the conception of free will that we invoke in our everyday lives, and we similarly couldn't do without it in practice. That said, I find your point that "one could arrive at something very like the current criminal law" without it pretty neat; I'm surprised I hadn't come across it before (or maybe I've just forgotten).

Alan said...

SG responds:

I admit that there are such cases but (a) I believe they are cases where we (i.e. you) should revise our (i.e. your) intuitions and (b) I just don't think that there are many cases "arising in nature" where my approach would give radically counterintuitive results. (I imagine I need (b) for (a).) I would object to your (i)-(ii) hypothetical on the grounds that (ii) is artificial: one-time premeditated assaulters are a strange concept. Besides, the thing about (i) is that he's likely to repeat his offense but impossible to deter; these things go in opposite directions and might cancel out. But all that said, I should admit I'm not fond of proportionality, and my position on certain issues like retarded murderers is directly at odds with the "akratic" way and current law. Again, I would claim that these are a small fraction of cases.

I agree that one needs some vague notion of free will in order to function, but it doesn't follow that you have to base your moral system on it. It seems perfectly consistent to ascribe "blame" subjectively -- cf. Dave's torturer example etc. -- and conduct the public-choice end of things along non-akratic lines; you'd have to be less confident of your judgments and strive less for objectivity but that doesn't require you to rewire your brain or anything.