February 22, 2011
From the Outbox -- What Am I, a Fucking Park Ranger?
You know, it's the little things that make The Big Lebowski a true masterpiece, a contender for my favorite overall film, as opposed to just an awesome one. One thing that just occurred to me, dudes, is that Walter was probably under the impression that an actual marmot had invaded the Dude's home. This would explain why Walter, though no park ranger, brings up amphibious rodents and wild animal law.
February 17, 2011
From the Outbox -- If Marijuana Were Legal and Taxed, Would A Large Percentage of People Grow Their Own?
You ask: "how many people will really want to pay commercial prices plus taxes when they can grow their own (thus knowing exactly what they are getting) for next to nothing?"
I answer: lots, even assuming a tax proportional to that on cigarettes. Growing one's own would almost certainly not be the preferred option for most tokers. It's a non-trivial process in terms of space, time (e.g., care and feeding), upkeep (e.g., plant food, preventing pollination), and encroachment (e.g., smell). Of course, it's an easier undertaking if one lives in an appropriate climate and can grow outdoors. But it's still nowhere near as easy and efficient as it would be for the Philip Morris of the green economy. Moreover, it seems likely that the product of Big Pot, at least at the high end, would be way better than the homegrown alternatives for a number of reasons. And it would sure be marketed as such.
I mean, maybe it's largely a cultural thing, but most smokers don't roll their own cigarettes even though I believe they could save a considerable amount of money doing so. Don't underestimate the combined forces of laziness and branding. And note that branding -- and regulated selling -- includes a number of substantive signals such as consistency, product data, and the opportunity to support a government cool enough to legalize it.
***
I'm no expert, but my impression is that growing kind bud isn't easy. Maybe pot plants thrive in most climates without much attention, but I think producing a good yield of potent buds requires significant attention and know-how. In today's circumstances, this is of course partially due to the need to avoid detection, but it's also due to the finicky procedures necessary to induce optimal THC production. This has two main implications. First, commercial growers likely could grow significantly better stuff. They'd have better machines, better methods, and better strains. I don't see why these things couldn't be kept proprietary. Second, people love convenience. Maybe you think they're often suckers for it, but that's irrelevant to this discussion. What matters is that many people would pay to avoid having to go through all of the hassle and initial investment of growing -- even people who know they'll be partaking regularly. People are lazy and shortsighted; DIYers are idiosyncratic to varying degrees. I mean, what percentage of people get most of their produce from gardens? What percentage of people even make their own coffee? I think the situation would basically be like homebrewing.
***
I agree that the tax revenue might not be that substantial. I suspect legal pot would be quite cheap in general, but companies might be able to charge a fair amount for designer strains, alternative active ingredient delivery mechanisms, and the like. Regardless, I'd like to get back to the issue of how prevalent homegrowing would be.
You raise some valid points, but I think you're overstating your case. Consider:
- Moichandising. It seems to me that there would be sizable market segments that would inhale the commercial offerings. Connoisseurs, tourists, first-timers, and "social smokers" come to mind. Connoisseurs would be curious about the latest strains being promoted by the various brands and headshops. Tourists and first-timers obviously would be unlikely to have homegrower connections. And by social smokers I mean the kinds of people who would primarily or exclusively partake in clubs and the like. I mean, just imagine what things would be like if pot were flat-out legal in California. Yeah, you'd have plenty of homegrowers, communes, and underground farmers' markets (green markets?). But you'd also have loads of commercialization.
- The "soft costs" of homegrowing. I've already mentioned the nontrivial time and effort I believe it would take to grow high-end product. Raising, cultivating, farming, gardening -- whatever you want to call it -- is work; it's something that people who are wealthy enough (and who generally have less free time) would not be keen on doing unless they enjoyed it. But I also think many people would psychologically and/or socially be deterred from becoming homegrowers. They may worry about being stigmatized, and even if that's too harsh a word, in the early stages of legalization, there would at least be a lot of skittishness. Lastly, there's the product itself. I don't see why you think corporations wouldn't be able to produce significantly better stuff. They have more resources, and pot appears to have a lot of potential for resource-intensive development. Companies could isolate which combinations of cannabinoids produce which effects and then engage in hybridization and/or genetic modification. They could also presumably enhance other qualities such as aroma, potency, and appearance. And they might be able to breed strains that would produce very efficient yields compared to what's out there. Now, of course, some of this technology would eventually be reverse-engineered or otherwise discovered. But a lot of it could be kept proprietary, and a lot of it simply requires a corporate amount of resources to take advantage of. Additionally, companies could let users know exactly what they're smoking (strain, THC content, cannabinoid profile, etc.) so they can figure out their preferences and take on less risk.
From the Outbox -- Good People and Bad People
We can't really tell what's going on in other people's heads. And there's a (basically useless) sense in which all voluntary actions are done out of self-interest. Nevertheless, I perceive instances in which I do a better or a worse job of controlling my urges to do things I regard as bad. It's possible that my perception of self-control variation is an illusion and that what's really going on is that the urges win out when they're sufficiently strong and/or when my capacity for self-control is sufficiently low. But I just don't think that's how it is with me. I think I rarely actually "lose it" and instead in some sense choose to go apeshit by giving in to perverse desires (a combination of self-pity and attention-seeking being a particularly gross example). Now who knows if this is what goes on with other people (though maybe we could increase our confidence through neuroscience and psychology). But I'm pretty sure my internal mental experiences aren't fundamentally idiosyncratic. Of course, people do have different capacities, temperaments, and inclinations, so the point at which someone's being a dick -- i.e., being weak-willed about behaving properly -- varies; external behavior alone isn't sufficient to confidently judge someone. But if I know what it's like to be a dick and also what it's like to resist the urge, then presumably we're all fighting the same internal battle, just on different battlefields. And I doubt we all have the same win percentage. So some of us are worse people than others.
Do some of us get away with it more than others? Absolutely. Such is life. But the closer you get to someone, the easier it is to look into her heart, and the harder it is for her to get away with it.
***
Suppose everyone who comes off as a good person is really motivated ultimately by something selfish (such as wanting to be praised or feeling better about himself because he's doing good), just like everyone who comes off as a bad person. What then? At least there's a real difference, and the more you engage with someone, the better you can tell how they're going to behave and whether you like them. Practically, in the words of Bill Munny, "Deserve's got nothing to do with it."
In other words, the universe may have no moral arc, but we have moral compasses. Your concern is analogous to one's decision whether or not to eat meat even though he "knows" it's wrong. Some rationalize, some say "fuck it," and some stop (to some extent). With questions like this, there's no issue of "hiding" one's badness. It's simply a question of caring enough about doing the right thing, for whatever reason. So yeah, maybe the guy who gets his caring/willpower from a selfish wellspring is no better, in some sense, than the guy who says, "fuck it, meat's too tasty and my friends would give me shit if I gave it up." But in a real sense he is a better person.
In sum:
1. Some people seem to care about being good people; others seem to embrace selfishness.
2. Perhaps, either way, we all ultimately care about feeling good about ourselves, about being able to live with ourselves.
3. One who does good because he feels good about being a good person is still a good person.
February 16, 2011
From the Outbox -- Have We Taken Irony Too Far?
I'm cynical, too. I mean, just consider these four facts: (i) it's very hard for a person to turn out decently enough -- a lot of genetic and environmental things have to go right at the right times; (ii) it's very hard to solve countless collective action problems, particularly global ones that require action well in advance of the manifestation of the harms; (iii) we know so little about so many important things, including our ignorance, so our actions are often very risky; and (iv) we as a species have global dominance. So, in a sense, we're "destined" to fuck shit up. We as a species may of course end up thriving if technological progress continues for long enough, but a lot of damage will have been done in the meantime, and a lot of good things will have been irrevocably lost. So, yeah, "everything" turns to shit.
I lack the knowledge to really evaluate David Foster Wallace's claim that, in your words, "we've gone through an irony revolution that has made earnestness and sincerity seem campy and risible," but it does resonate with me. It seems that everyone who's taking something seriously, especially a cause, has to walk a fine line to avoid being accused of self-righteousness, self-importance, overearnestness, heavy-handedness, and/or overzealousness. This may have always been true to some extent -- none of our opposition to these things is purely cultural -- but it does seem to have become more of an issue these days. I suppose there are at least two main reasons: (i) the business of modern real-time mass media, which thrives on controversy, gossip, and the like, and often creates or amplifies these things when they don't really exist (e.g., giving every issue two sides, characters, and a traditional narrative, even if it's not an open question and/or is more complicated); and (ii) changes in our culture. I can't really speak to (ii), which of course is intertwined with (i), but I think I know what Wallace is getting at. People tend to be regarded as uncreative, lame, self-indulgent, too personal, and the like when they don't throw in enough self-reflection, self-awareness, and ironic distance from their concerns. It's like people have to put up disclaimers for their passion and sincerity, especially if it's about an issue that "everyone" already "gets." It's like people have to apologize for speaking from the heart, as if everything that comes from the heart is sentimental bullshit. (Perhaps one way of looking at this is that the logic of the anti-emo movement has been overextended. Perhaps another way is that everyone's so afraid of being "called out" that we wrap our basic points in irony, complexity, and qualifiers, which reduces clarity and dilutes our messages -- at least until the time comes when we feel comfortable being straightforward about these messages.)
That said, I think we do want this "ironic check" to exist in our society to some extent. Some people should be mocked for the lack of self-awareness and critical distance that comes from never taking the ironic pose. But yeah, we're too inclined to say "Get over yourself!" Some selves aren't worth getting over.
Incidentally, I love that things like The Daily Show exist, because they seem to be able to get away with more passion and sincerity by virtue of being funny and possessing an overarching irony of sorts.
From the Outbox -- On College Rankings
In order to simultaneously post more and remain lazy -- I make no representation that standards have not fallen -- I'm going to periodically post material drawn from my deep well of emailed ramblings, starting, arbitrarily, with the following:
The U.S. News school rankings seem sort of like a natural monopoly to me, in that despite the absence of anti-competitive practices (as far as I know), it's virtually impossible for a competing ranking system to obtain any market share. The vast majority of students understandably take the rankings into account when deciding where to go to school, and they have been doing so for years -- the rankings aren't perfect, but it's certainly better to take them into account than to ignore them. In order for a competing ranking system to achieve any influence, enough students would have to rely on it instead. But what incentive do most students have to take the risk of relying on rankings that most of their peers will likely ignore and that aren't necessarily any better?
For example, suppose U.S. News ranks School A #5 and School B #10, and a critical competitor ranks School B higher on its own list. Could the competitor succeed? Almost certainly not. To begin with, as flawed as the U.S. News rankings are, they do heavily take into account the factors that most college-bound students care about (average SAT score, GPA, class size, etc.). They do a good job of indicating where the most qualified students go to school. Therefore, a credible competitor couldn't have substantially different rankings, such as one with no Ivy League schools in the top 10. Unlike in, say, the market for portable electronic devices, there's only so much innovating a competitor can do. Furthermore, even if the competitor's methodology seems more credible, there is a real risk that it's no better, especially given the great lengths to which schools go to massage the data they report. Finally, there's the fact that most students consult the U.S. News rankings due to their ubiquity and don't even consider the more obscure competition. Thus, most students who get into both School A and School B will go to School A, meaning that School A will continue to get the better students. A student who thinks that School B may, in fact, be better may nevertheless choose to go to School A to be with his more accomplished peers. The U.S. News rankings are therefore self-perpetuating.
Because of the power of the rankings, and the money at stake, schools have worked hard at gaming the system. Once one school successfully games the system, other schools naturally feel pressured to follow suit (kind of like with grade inflation). Then we end up with what we have now: a system that's almost completely gamed. Indeed, it would be better if every school gamed the system in the same way, because then no school would be ahead by virtue of its better gamesmanship. For example, if every school equivalently inflated its employment statistics by creating jobs for its unemployed graduates, no school would have an unfair advantage. Obviously, this isn't the case: some schools have moved way up the rankings due to dishonest practices. The other obvious tragedy is that certain data in the rankings simply can't be trusted.
In sum, we have a classic example of a system that's bad but that everyone understandably relies on anyway. The only potential solution seems to be government intervention of some sort, which of course may be more trouble than it's worth. At least, as this article mentions, there are organizations committed to the public service of exposing these serious frauds.
February 9, 2011
Review - Let the Right One In
Let the Right One In is a recent vampire movie, but it’s also one of the better melancholy films I’ve seen. It’s set in a small Swedish town that lies in the shadow of the Iron Curtain and under a foot of snow. Oskar, a withdrawn and bullied 12-year-old, lives in a small apartment with his mother, an unconcerned parent. (In a clichéd, but nonetheless poignant, scene, she asks him how he got a cut on his face and unhesitatingly accepts his answer, admonishing him to be more careful on the playground.) Oskar dreams of exacting violent revenge on his tormentors, going so far as to act out scenes with a pocketknife. He also covertly collects newspaper clippings about murders and other grisly incidents. But the most troubling thing about him is his apparent detachment from his predicament. It’s not that he’s underconcerned or perversely content with being bullied; it’s that, outwardly at least, he responds dispassionately, as if he’s come to terms with hopelessness. He doesn’t ever run, cry, or raise his voice, even though he’s clearly suffering and has been engaged in an internal struggle over whether, and if so when, to strike back. Notably, we have every reason to believe that he regards striking back as the ideal. It is not compassion or an aversion to “lowering” himself that stays his hand, but rather something else, such as the inherent gravity of revolutionary action. Perhaps he is also concerned with the physical act of revenge, for he takes up weight training under the tutelage of his amiably eccentric gym teacher.
One night, in the snowfield outside his apartment building, Oskar meets Eli (rhymes with "deli"), who recently moved in next door. She is also 12, in her words, “more or less.” Though Oskar maintains his emotional flatness in her presence -- even after she confirms her sanguinivorous nature -- he is obviously fascinated. He begins, shyly, to let her in. He tells her about the bullies, and she advises him to hit back. He questions her willingness to kill, and she points out his. Eli and Oskar are both sympathetic characters in that they are constantly victimized by circumstance, but they are not innocent characters. They are both outsiders, but they must confront the conventional world. Naturally, they are drawn together, certain irreconcilable differences be damned. The subsequent plot developments are largely unremarkable, but their execution is wondrous and moving.
Often, the film successfully resonates by economizing on the explicit, in the same way that a lingering shot of a snowscape can say more about doleful desolation than could even a Roy Batty voiceover. For instance, there are two scenes in which I recall Oskar expressing childlike delight in contrast to his usual aloofness: one where he goes sledding while visiting his dad, and another where he and his mom start brushing their teeth in unison and turn to each other with amusement in their eyes, perhaps engaging in a nightly ritual for the thousandth time. These glimpses of realistic domesticity -- a rare treat in film -- flesh out the viewer’s psychological portrait of Oskar; they suggest that he is not entirely consumed by his bullying and not always so removed from the viscerality of emotion -- that he feels his pleasure and therefore his pain. In another telling domestic scene, a man arrives while Oskar is hanging out at the dinner table with his dad, and the relaxed mood immediately shifts: Oskar stiffens, and his father, invoking the obligation of hospitality, gets the door and pours a couple of drinks. Oskar quickly leaves, and we are left to wonder what prompted his anxiety. His dad comes across as a genial, easygoing man, but is he a horrible drunk? Is the other man his lover? Was Oskar simply crushed by the disruption of his fragile tranquility? Such moments bow my heartstrings as much as the bullying, because they convey the tortuous inner life of a tortured child. And like the wordless scenes of delight, they do so in a way that leaves unnecessary details to the richness of imagination.
One night, in the snowfield outside his apartment building, Oskar meets Eli (rhymes with "deli"), who recently moved in next door. She is also 12, in her words, “more or less.” Though Oskar maintains his emotional flatness in her presence -- even after she confirms her sanguinivorous nature -- he is obviously fascinated. He begins, shyly, to let her in. He tells her about the bullies, and she advises him to hit back. He questions her willingness to kill, and she points out his. Eli and Oskar are both sympathetic characters in that they are constantly victimized by circumstance, but they are not innocent characters. They are both outsiders, but they must confront the conventional world. Naturally, they are drawn together, certain irreconcilable differences be damned. The subsequent plot developments are largely unremarkable, but their execution is wondrous and moving.
Often, the film successfully resonates by economizing on the explicit, in the same way that a lingering shot of a snowscape can say more about doleful desolation than could even a Roy Batty voiceover. For instance, there are two scenes in which I recall Oskar expressing childlike delight in contrast to his usual aloofness: one where he goes sledding while visiting his dad, and another where he and his mom start brushing their teeth in unison and turn to each other with amusement in their eyes, perhaps engaging in a nightly ritual for the thousandth time. These glimpses of realistic domesticity -- a rare treat in film -- flesh out the viewer’s psychological portrait of Oskar; they suggest that he is not entirely consumed by his bullying and not always so removed from the viscerality of emotion -- that he feels his pleasure and therefore his pain. In another telling domestic scene, a man arrives while Oskar is hanging out at the dinner table with his dad, and the relaxed mood immediately shifts: Oskar stiffens, and his father, invoking the obligation of hospitality, gets the door and pours a couple of drinks. Oskar quickly leaves, and we are left to wonder what prompted his anxiety. His dad comes across as a genial, easygoing man, but is he a horrible drunk? Is the other man his lover? Was Oskar simply crushed by the disruption of his fragile tranquility? Such moments bow my heartstrings as much as the bullying, because they convey the tortuous inner life of a tortured child. And like the wordless scenes of delight, they do so in a way that leaves unnecessary details to the richness of imagination.
Let the Right One In also succeeds atmospherically. From the opening fade-in of a gentle flurry, snow plays a prominent role in the film. Fittingly, there is something about snow that complements the vampiric. Perhaps it is that snow signifies short, dark days. Perhaps it is that in its presence we cling bitterly to the very warmth that the vampire craves. Perhaps it is the lifelessness of a whited-out landscape. Perhaps it is simply our association of cold and death. At any rate, the omnipresence of snow contributes to the town’s air of remoteness and repose. There is little vivacity in the dead of winter, and the moonlight, reflected and amplified by the snow, bathes the town in a subdued, blue-tinged glow. Practically, it is hard to argue that a vampire wouldn’t be better off in a big city, where victims are anonymous and hideouts are plentiful, but aesthetically Eli has come to the right place. Overall, the film’s atmosphere is in harmony with its pacing -- the exposition of important plot developments is unhurried, and the viewer is given time and space to reflect on characters’ experiences. Thus, while the film has its share of violent and passionate events, its intensity is never at odds with its contemplative tone.
I am compelled to discuss Let the Right One In’s take on vampirism, because a review of a vampire movie isn’t worth its weight in blood without such a discussion, and also because the film presents a particularly compelling account. Most significantly, Let the Right One In does not glamorize vampirism. Many vampires, particularly the more recent creations, are charismatic creatures who -- bloodthirst and daylight curfew aside -- have transcended their physicality in the manner of ageless superheroes. We are inclined to envy them, especially in this age of blood banks and limitless indoor stimulation. Not so with Eli, who, in the vein of Nosferatu, elicits an uneasy sympathy. She is trapped in a child’s body, so she is limited in her ability to function independently and feed ethically. She possesses preternatural physical prowess, but she is unable to seduce or entrance her victims. When she is underfed, she smells of death. She derives no apparent pleasure from the taking of life. Yet take life she does, wantonly at that, so we cannot comfortably sympathize. She is still childlike in many respects, despite her apparently great age, so perhaps her responsibility is diminished. Indeed, with the selfishness of a child, she has rationalized her murderousness as a need, and is apparently unburdened by the arguable moral imperative to end her life. Moreover, we cannot help but perceive her as a child, for fundamental elements of her identity have been frozen in time. But there is no escaping the fact that she is a serial killer, albeit probably not the first one you’ve rooted for.
Oskar certainly seems to have few qualms about this fact. After all, he is 12 and troubled, and he is in love, or something like it. But so is Eli. At first, she prudently attempts to dissuade his efforts at friendship, but it is clear that she, too, is suffering from loneliness. (For how long has she endured a life of permanent juvenility, perpetual night, forced seclusion, and desensitized murder?) Soon it is not just Oskar who is opening his heart.
It is wonderful to watch their love unfold, but it is also tragic. The romantic endeavors of 12-year-olds do not ordinarily inspire such gravity, but in this case it feels appropriate. Together Oskar and Eli have found a happiness it seems they had never known, but it is a happiness that cannot last. Oskar will become a teenager, then a man, and they will have to let each other go. But all is not lost if they never regret having let each other in.
Rating: 8/10
December 29, 2010
The Life Akratic With Gideon Rosen
I'm posting this discussion here because my blog seems to be a forum conveniens.
DG
Do you think people are inculpable for self-serving, mistaken moral judgments?
Me
I think it depends on why one made the mistake. Do you mean an honest (i.e., trying to do the right thing) mistaken moral judgment that happens to be self-serving? If so, the question seems to be whether the honest mistake was also acceptable and not negligent or the like.
That said, as you know, I don’t think there’s a satisfactory “model” for moral blame. I’m down with the Gideon Rosen stuff (which has a lot in common with the “basic argument” that Strawson outlined in The Stone but does not rely on determinism), which concludes that we should be skeptics about moral responsibility because all allegedly culpable acts presumably stem ultimately from nonculpable ignorance of some sort. Then there are alternatives such as your person-based position, which I haven’t gotten around to really thinking about (I still need to reread that long email you sent a while back and read a handful of articles before I feel comfortable making any claims about it).
Speaking of reading things, I did very well on this paper I wrote for a philosophy seminar. I remember feeling confused upon rereading it, thinking I got mixed up somewhere, maybe got tautological. But it has at least some merit and is relevant to your question. I’d appreciate your thoughts.
DG
I suppose I think the existence of a self-serving tendency in mistaken judgments undermines the idea of an “honest” mistake; don’t you?
I think the tendency of someone’s mistakes to be convenient is a counterexample to the putative principle that mistaken judgments are non-culpable (which I take to be a premise of Rosen’s view, though I am not that well-versed in his views).
Me
On the traditional notion of moral responsibility, if someone reasonably tried to be honest (non-self-serving) in making a moral judgment but ended up getting it wrong in a way that was self-serving, he’s not culpable. Rosen would argue that one isn’t culpable for the self-serving tendency because it ultimately stems from something non-culpable. It’s a straightforward, neat argument.
DG
That’s really more like the “basic argument” from the NYT. Rosen’s position depends on the claim that, to be morally blameworthy, you must correctly judge an action to be wrong and nonetheless undertake that action. That claim loses its force if you can be culpable for adopting a “convenient” wrong moral judgment. Now Rosen would retreat to his weaker notion of skepticism -- he is not saying that moral responsibility is impossible, just that it’s very hard to confidently identify. But I think the “convenience” of a moral judgment can be so conspicuous that this retreat is unavailing.
But yeah, you’re conflating Rosen’s position with the blunter form of skepticism sketched in the “basic argument,” probably because you find that argument very persuasive.
Me
I do find it persuasive, and I’m not sure about the subtlety you’re trying to get at. I take it you’re imagining someone who convinces himself of a certain moral position by either (i) somehow intentionally convincing himself of it because it’s self-serving (culpable on the traditional account), (ii) negligently coming to believe it (culpable, albeit less so, on the traditional account), or (iii) reasonably coming to believe it in good faith, though perhaps unconsciously disposed to believe it due to its self-serving nature (not culpable). Are you simply saying that (i) and (ii) are often easy to identify? If so, so what? This just pushes the problem back a step -- is the person really culpable for intentionally or negligently coming to believe the position?
I guess an underlying issue is how much metacognition we should expect of people when making moral judgments. This is, of course, a question that must be resolved by intuition to avoid infinite regress. But we can come up with an answer (e.g., a “reasonable” amount) and practically judge people.
I imagine most people whose moral judgments are both really off base and really self-serving don’t really believe the judgments or are really good at something like self-deception (for which they may or may not be culpable on the traditional account).
All this stuff makes me curious about all these neuroscience-based “models” of cognition that people are working on that really mess with concepts such as belief and intention. A lot of this stuff seems to come down to these seemingly irresolvable (at least with our current tools!) age-old debates, such as whether akrasia is possible, etc. I’m not sure we can get anywhere worthwhile if we start by assuming one side of these questions.
JB
This is a rather thorny problem; I remember discussing it in college. I think Rosen is persuasive and sophisticated on this issue, and basically lays it out in the right way, although I think I come down on the other side of the result.
It doesn’t seem like I have, in any of the forwards, an explanation of what level of subtlety DG thought you were missing. But what seems missing to me is the virtue-ethical (character-trait) component of the problem, which you ignore in this response but mention later when you discuss the tendency to self-serving self-deception.
So, assume culpable akrasia. I think it’s much easier to then apply blame to a pervasive, self-serving character trait developed and maintained over time.
Any individual bad act or bad belief or whatever, you can apply the honest mistake argument. But if these are read as blameworthy merely because they supervene on bad character, well I don’t think there are very many reasons why you could have self-serving self-deceptive bad character over the long term other than akrasia. Which is, by hypothesis, culpable.
I think about this in essentially statistical terms. For an individual bad act there’s only one trial; the probability of doing the bad act as an honest mistake, even without akrasia, is high compared to the probability of doing it as a result of akrasia. For the self-serving character trait there are many trials; the probability of maintaining the character trait without culpable akrasia falls toward zero as the number of trials increases. Thus the self-serving character trait is culpable, even if you are very willing to give people the benefit of the doubt as to honest mistake (except maybe in children, who’ve had less chance to take responsibility for their character traits over time -- fewer trials, thus less justification for attributing their bad acts to culpable akrasia).
I thought this point of view was right in college; since then I’ve changed my mind about some pretty foundational issues without necessarily following through all of the implications, but it still seems reasonable to me.
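JB’s “many trials” point is easy to sketch numerically. The toy model below is my own gloss, not his actual math: assume the trials are independent and pick some per-act probability p that a given convenient bad act is an honest mistake; then the probability that a whole run of such acts involves no culpable akrasia is p to the n, which collapses quickly even when p is generous.

```python
# Toy model of the "many trials" argument (my own illustration).
# Suppose each self-serving bad act, taken alone, is an honest
# mistake with probability p, independently of the others. Then the
# probability that ALL n such acts happened without culpable akrasia
# is p**n, which shrinks toward zero as n grows -- even for a very
# charitable p.
p = 0.9  # an assumed, generous benefit of the doubt per act

for n in (1, 10, 50):
    print(f"n={n:2d}  P(all honest mistakes) = {p**n:.4f}")
```

So a single bad act retains its plausible deniability, but fifty convenient mistakes in a row are, on these assumptions, vanishingly unlikely to be innocent -- which is exactly the shape of JB’s argument.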
Me
Don’t think DG really spelled it out, but it seems like the position you're advocating.
You say: “I don’t think there are very many reasons why you could have self-serving self-deceptive bad character over the long term other than akrasia.” Hmm. I was assuming that once the initial self-serving self-deception “kicks in,” the agent doesn’t reflectively maintain or reinforce it with every judgment; rather, it changes the basis of his specific moral deliberations such that in adhering to it he thinks he’s doing the right thing. In other words, I was assuming at most one “instance” of akrasia -- the self-deceived agent is already deceived and therefore no longer akratic. Is this psychologically realistic? I’m not sure, but I think it’s close. I imagine successful self-deception doesn’t involve much metacognition about the fact! The upshot is that it would be misguided to blame someone for “maintaining” a self-serving moral system that he’s simply come to believe (i.e., I’m not sure how realistic your statistical “model” is).
There remains, of course, the question of how likely it is that akrasia is the basis for an agent coming to believe a moral system that happens to be self-serving. To what extent are people really capable of intentionally sculpting their honest moral beliefs so that they will end up serving themselves whenever they subsequently make good faith moral judgments? I guess this is where Rosen raises an eyebrow. I forget what he says about the psychology of akrasia, but I guess he doubts that people have so much intentional control. This is where I start to wonder about the whole model of straightforward intentional decision-making. How do we really form “foundational” moral beliefs/systems? To what extent do we reflect on them when “applying” them? Can neuroscience help us?
Anyway, I think you raise some provocative points, but I’m inclined to side with Rosen, perhaps simply because of the basic difficulties with conceptualizing akrasia (even though I know a discussion of the traditional account of moral blame must assume its possibility!).
JB
Look, I agree with you, DG, and Rosen that it all comes down to your concept of akrasia. I tend to think, for Kantian reasons, that however we construct our concept of free will, it must be constructed in such a way that it validates moral responsibility -- i.e., that validating moral responsibility is a basic criterion that an account of free will has to satisfy. I understand that it’s hard to reconcile culpable akrasia with determinism or other core concepts (this is what the Critique of Pure Reason is about, right? Inasmuch as I understood it). So we could talk about what free will would look like such that it validates akrasia; I think it’s hard not to end up with something that looks rather like religion, but that’s another story.
The other side of the issue is the model of character. Basically the question is your model of “bad character”: is a “bad character” strongly persistent, such that once it kicks in it requires no further culpable actions, or is it not? I think this is actually two issues. First, can you get by without ever considering the issue of whether you are a bad guy and need to think things through? Second, given that you’ve considered the issue, is it realistic to expect you to try to resculpt your character so that you’re not such a bad guy any more?
I think issue one is pretty easy and you sort of brush it over in the opposite direction in a really implausible way. It’s not like you can just, you know, develop a trait of consistent self-deception and then never consider it again, unless you live in total isolation. Culture prompts you quite a lot to question your values and think about integrity. I think this is actually basically what a lot of popular culture is for, and why people are interested in it. Hey, sometimes individuals prompt you, although I’ve come to learn that this is considered rude.
Issue two is harder, but look: I don’t think you really believe that it’s impossible for people to resculpt their character. I think you at most really believe that it’s possible but sort of too expensive for us to reasonably expect more than a small number of people to do it. Note that this isn’t really a logical or metaethical objection: there’s nothing about the concepts of right and wrong or blaming which says that it has to be easy to do the right thing. But many people have the intuition that if a moral system is so harsh that it cannot be satisfied without extraordinary pain or difficulty or effort, then it is not a “moral” system at all (I don’t have this intuition). I don’t know what this issue is called; I think of it as the problem of moral gravity (in the sense of weight -- I think I’m thinking of Giles Corey).
Look, neuroscience could tell us that it’s not really possible for people to resculpt their character, but I don’t really expect it to do so.
Another dimension which I haven’t addressed arises from moral uncertainty: “right action” and “right character” aren’t known, and might not really be knowable for an individual, which might make us suspicious of the idea of restructuring our character, or suspicious of people who restructure their character.
PS, I think I don’t believe that people have foundational moral beliefs or systems. Or rather I don’t think that people really rely on them very much when making private moral decisions - I think the function that moral systems serve is more a way of organizing public moral decision-making in society.
Me
Interesting ideas in your first paragraph; seem promising to me. I also more or less agree with your last four paragraphs. But I want to distinguish two things that you seem to mention together in paragraph 4: the requirements of morality and the requirements of blameworthiness. I share your intuition that a correct moral obligation need not be “sufficiently easy” to follow, only possible. But the fact that it’s likely possible (i.e., with the right education, training, effort, etc.) for most sane people with bad characters to restructure their characters doesn’t mean that we are necessarily entitled to blame them for having bad characters; justified blame requires a judgment that someone reasonably should have acted differently. Now that I think about it, this is simply another way of saying that blame depends on how much control and understanding we could reasonably expect of someone (e.g., we'd blame the choleric teenager less than, I don’t know, Lord Russell for the same offense), but the right thing to do doesn’t operate on the same sliding scale.
As for issue one (“can you [realistically] get by without considering the issue of whether you are a bad guy”), I think it comes down to the level at which one’s self-reflection about his moral judgments occurs. For a simplified example, imagine someone who thinks at the levels of (i) what moral “system” (basically, consequentialist or deontological algorithms and heuristics) should I adopt, and (ii) am I reasonably and honestly applying it to a given situation? (I know this isn’t super realistic, but I think anyone who tries to be morally “consistent” necessarily bifurcates his reasoning like this, even if his moral system is incomplete, semi-conscious, and somewhat shifty.) As I touched on in my previous emails, I doubt that much straightforward intentional reasoning is involved in (i); I think it’s largely shaped by temperament and intuition (which, in turn, shape which moral arguments -- essentially intuition pumps -- appeal to a person and shape his moral system). Moreover, people don’t seem to reflect on (i) all that much. So it’s hard for me to see where akrasia is likely to enter the process at the level of (i). Sure, I think it’s inevitable that some self-interest seeps in, and likely with it self-deception, but I doubt whether akrasia is involved in the very act of self-deceiving. Effective self-deception seems to rely on internal mental opacity. And, of course, once someone has truly deceived himself such that he thinks he’s honestly arrived at an acceptable moral system, one is no longer akratic in unreflectively maintaining it.
I see more room for akrasia at the level of (ii). I mean, I can’t help but consider myself akratic in some cases (occasionally eating factory-farm-produced meat, downloading pirated music): I believe I do certain morally bad things because I derive enjoyment from them and am not sufficiently troubled by their badness. But I wonder about other people. And I wonder what neuroscience could tell us about this. I want to reread what Rosen says, because I think this is where the rubber hits the road. I vaguely remember him saying that he doesn’t really perceive himself as akratic, which surprises me. Then again, I’m under the impression -- and I confess to deriving a feeling of superiority from this! -- that most people are much less honestly self-reflective, and more self-deceptive, than I. (Or maybe they’re just better people, though they probably have worse moral systems!)
DG
Do you think people are inculpable for self-serving, mistaken moral judgments?
Me
I think it depends on why one made the mistake. Do you mean an honest (i.e., trying to do the right thing) mistaken moral judgment that happens to be self-serving? If so, the question seems to be whether the honest mistake was also acceptable and not negligent or the like.
That said, as you know, I don’t think there’s a satisfactory “model” for moral blame. I’m down with the Gideon Rosen stuff (which has a lot in common with the “basic argument” that Strawson outlined in The Stone but does not rely on determinism), which concludes that we should be skeptics about moral responsibility because all allegedly culpable acts presumably stem ultimately from nonculpable ignorance of some sort. Then there are alternatives such as your person-based position, which I haven’t gotten around to really thinking about (I still need to reread that long email you sent awhile back and read a handful of articles before I feel comfortable making any claims about it).
Speaking of reading things, I did very well on this paper I wrote for a philosophy seminar. I remember feeling confused upon rereading it, thinking I got mixed up somewhere, maybe got tautological. But it has at least some merit and is relevant to your question. I’d appreciate your thoughts.
DG
I suppose I think the existence of a self-serving tendency in mistaken judgments undermines the idea of an “honest” mistake; don’t you?
I think the tendency of someone’s mistakes to be convenient is a counterexample to the putative principle that mistaken judgments are non-culpable (which I take to be a premise of Rosen’s view, though I am not that well-versed in his views).
Me
On the traditional notion of moral responsibility, if someone reasonably tried to be honest (non-self-serving) in making a moral judgment but ended up getting it wrong in a way that was self-serving, he’s not culpable. Rosen would argue that one isn’t culpable for the self-serving tendency because it ultimately stems from something non-culpable. It’s a straightforward, neat argument.
DG
That’s really more like the “basic argument” from the NYT. Rosen’s position is (depends on the claim that), to be morally blameworthy, you must correctly judge an action to be wrong and nonetheless undertake that action. That loses its force if you can be culpable for taking a “convenient” wrong moral judgment. Now Rosen would retreat to his weaker notion of skepticism -- he is not saying that moral responsibility is impossible, just that it’s very hard to confidently identify. But I think the “convenience” of a moral judgment can appear so powerfully that this is not availing.
But yeah you’re eliding Rosen’s position with the blunter form of skepticism sketched in the “basic argument,” probably because you find that argument very persuasive.
Me
I do find it persuasive, and I’m not sure about the subtlety you’re trying to get at. I take it you’re imagining someone who convinces himself of a certain moral position by either (i) somehow intentionally convincing himself of it because it’s self-serving (culpable on the traditional account), (ii) negligently coming to believe it (culpable, albeit less so, on the traditional account), or (iii) reasonably coming to believe it in good faith, though perhaps unconsciously disposed to believe it due to its self-serving nature (not culpable). Are you simply saying that (i) and (ii) are often easy to identify? If so, so what? This just pushes the problem back a step -- is the person really culpable for intentionally or negligently coming to believe the position?
I guess an underlying issue is how much metacognition we should expect of people when making moral judgments. This is, of course, a question that must be resolved by intuition to avoid infinite regress. But we can come up with an answer (e.g., a “reasonable” amount) and practically judge people.
I imagine most people whose moral judgments are both really off base and really self-serving don’t really believe the judgments or are really good at something like self-deception (for which they may or may not be culpable on the traditional account).
All this stuff makes me curious about all these neuroscience-based “models” of cognition that people are working on that really mess with concepts such as belief and intention. A lot of this stuff seems to come down to these seemingly irresolvable (at least with our current tools!) age-old debates, such as whether akrasia is possible, etc. I’m not sure we can get anywhere worthwhile if we start by assuming one side of these questions.
JB
This is a rather thorny problem, I remember discussing it in college. I think Rosen is persuasive and sophisticated on this issue, and basically lays it out in the right way, although I think I come down on the other side of the result.
It doesn’t seem like I have in any of the forwards an explanation of what level of subtlety DG thought you were missing. But what seems missing to me is the virtue-ethical (character trait) component of the problem, which you ignore in this response but mention later when you discuss tendency to self-serving self-deception.
So, assume culpable akrasia. I think it’s much easier to then apply blame to a pervasive, self-serving character trait developed and maintained over time.
Any individual bad act or bad belief or whatever, you can apply the honest mistake argument. But if these are read as blameworthy merely because they supervene on bad character, well I don’t think there are very many reasons why you could have self-serving self-deceptive bad character over the long term other than akrasia. Which is, by hypothesis, culpable.
I think about this in essentially statistical terms. For an individual bad act there’s only one trial, the probability of doing the bad act as an honest mistake even without akrasia is high compared to the probability of doing the bad act as result of akrasia. For the self-serving character trait there are many trials, the probability of maintaining the character trait without culpable akrasia falls to zero as the number of trials increases. Thus the self-serving character trait is culpable, even if you are very willing to give people the benefit of the doubt as to honest mistake (except maybe in children, who’ve had less chance to take responsibility for their character traits over time, fewer trials, thus less justification for attributing their bad acts to culpable A.)
I thought this point of view was right in college, since then I’ve changed my mind about some pretty foundational issues without necessarily following through all of the implications, but it still seems reasonable to me.
Me
Don’t think DG really spelled it out, but it seems like the position you're advocating.
You say: “I don’t think there are very many reasons why you could have self-serving self-deceptive bad character over the long term other than akrasia.” Hmm. I was assuming that once the initial self-serving self-deception “kicks in,” the agent doesn’t reflectively maintain or reinforce it with every judgment; rather, it changes the basis of his specific moral deliberations such that in adhering to it he thinks he’s doing the right thing. In other words, I was assuming at most one “instance” of akrasia -- the self-deceived agent is already deceived and therefore no longer akratic. Is this psychologically realistic? I’m not sure, but I think it’s close. I imagine successful self-deception doesn’t involve much metacognition about the fact! The upshot is that it would be misguided to blame someone for “maintaining” a self-serving moral system that he’s simply come to believe (i.e., I’m not sure how realistic your statistical “model” is).
There remains, of course, the question of how likely it is that akrasia is the basis for an agent coming to believe a moral system that happens to be self-serving. To what extent are people really capable of intentionally sculpting their honest moral beliefs so that they will end up serving themselves whenever they subsequently make good faith moral judgments? I guess this is where Rosen raises an eyebrow. I forget what he says about the psychology of akrasia, but I guess he doubts that people have so much intentional control. This is where I start to wonder about the whole model of straightforward intentional decision-making. How do we really form “foundational” moral beliefs/systems? To what extent to we reflect on them when “applying” them? Can neuroscience help us?
Anyway, I think you raise some provocative points, but I’m inclined to side with Rosen, perhaps simply because of the basic difficulties with conceptualizing akrasia (even though I know a discussion of the traditional account of moral blame must assume its possibility!).
JB
Look, I agree with you, DG and Rosen that it all comes down to your concept of akrasia. I tend to think, for Kantian reasons, that however we construct our concept of free will, it must be in such a way that validates moral responsibility. I.e. that validating moral responsibility is a basic criterion that an account of free will has to satisfy. I understand that it’s hard to reconcile culpable akrasia with determinism or other core concepts (this is what Critique of Pure Reason is about, right? Inasmuch as I understood it). So we could talk about what free will would look like such that it validates akrasia, I think it’s hard not to end up with something that looks rather like religion, but that’s another story.
The other side of the issue is the model of character. Basically the question is your model of “bad character”: is a “bad character” strongly persistent, such that once it kicks in it requires no further culpable actions, or is it not? I think this is actually two issues. First, can you get by without ever considering the issue of whether you are a bad guy and need to think things through? Second, given that you’ve considered the issue, is it realistic to expect you to try to resculpt your character so that you’re not such a bad guy anymore?
I think issue one is pretty easy, and you sort of brush over it in the opposite direction in a really implausible way. It’s not like you can just, you know, develop a trait of consistent self-deception and then never consider it again, unless you live in total isolation. Culture prompts you quite a lot to question your values and think about integrity. I think this is actually basically what a lot of popular culture is for, and why people are interested in it. Hey, sometimes individuals prompt you, although I’ve come to learn that this is considered rude.
Issue two is harder, but look: I don’t think you really believe that it’s impossible for people to resculpt their character. I think you at most really believe that it’s possible but sort of too expensive for us to reasonably expect more than a small number of people to do it. Note that this isn’t really a logical or metaethical objection: there’s nothing about the concepts of right and wrong or blaming which says that it has to be easy to do the right thing. But many people have the intuition that if a moral system is so harsh that it cannot be satisfied without extraordinary pain or difficulty or effort, then it is not a “moral” system at all (I don’t have this intuition). I don’t know what this issue is called; I think of it as the problem of moral gravity (in the sense of weight; I think I’m thinking of Giles Corey).
Look, neuroscience could tell us that it’s not really possible for people to resculpt their character, but I don’t really expect it to do so.
Another dimension, which I haven’t addressed, arises from moral uncertainty: “right action” and “right character” aren’t known and might not really be knowable for an individual, which might make us suspicious of the idea of restructuring our character, or suspicious of people who restructure their character.
PS, I think I don’t believe that people have foundational moral beliefs or systems. Or rather I don’t think that people really rely on them very much when making private moral decisions - I think the function that moral systems serve is more a way of organizing public moral decision-making in society.
Me
Interesting ideas in your first paragraph; seem promising to me. I also more or less agree with your last four paragraphs. But I want to distinguish two things that you seem to mention together in paragraph 4: the requirements of morality and the requirements of blameworthiness. I share your intuition that a correct moral obligation need not be “sufficiently easy” to follow, only possible. But the fact that it’s likely possible (i.e., with the right education, training, effort, etc.) for most sane people with bad characters to restructure their characters doesn’t mean that we are necessarily entitled to blame them for having bad characters; justified blame requires a judgment that someone reasonably should have acted differently. Now that I think about it, this is simply another way of saying that blame depends on how much control and understanding we could reasonably expect of someone (e.g., we'd blame the choleric teenager less than, I don’t know, Lord Russell for the same offense), but the right thing to do doesn’t operate on the same sliding scale.
As for issue one (“can you [realistically] get by without considering the issue of whether you are a bad guy”), I think it comes down to the level at which one’s self-reflection about his moral judgments occurs. For a simplified example, imagine someone who thinks at the levels of (i) what moral “system” (basically, consequentialist or deontological algorithms and heuristics) should I adopt, and (ii) am I reasonably and honestly applying it to a given situation? (I know this isn’t super realistic, but I think anyone who tries to be morally “consistent” necessarily bifurcates his reasoning like this, even if his moral system is incomplete, semi-conscious, and somewhat shifty.) As I touched on in my previous emails, I doubt that much straightforward intentional reasoning is involved in (i); I think it’s largely shaped by temperament and intuition (which, in turn, shape which moral arguments -- essentially intuition pumps -- appeal to a person and shape his moral system). Moreover, people don’t seem to reflect on (i) all that much. So it’s hard for me to see where akrasia is likely to enter the process at the level of (i). Sure, I think it’s inevitable that some self-interest seeps in, and likely with it self-deception, but I doubt whether akrasia is involved in the very act of self-deceiving. Effective self-deception seems to rely on internal mental opacity. And, of course, once someone has truly deceived himself such that he thinks he’s honestly arrived at an acceptable moral system, one is no longer akratic in unreflectively maintaining it.
I see more room for akrasia at the level of (ii). I mean, I can’t help but consider myself akratic in some cases (occasionally eating factory-farm-produced meat, downloading pirated music): I believe I do certain morally bad things because I derive enjoyment from them and am not sufficiently troubled by their badness. But I wonder about other people. And I wonder what neuroscience could tell us about this. I want to reread what Rosen says, because I think this is where the rubber hits the road. I vaguely remember him saying that he doesn’t really perceive himself as akratic, which surprises me. Then again, I’m under the impression -- and I confess to deriving a feeling of superiority from this! -- that most people are much less honestly self-reflective, and more self-deceptive, than I. (Or maybe they’re just better people, though they probably have worse moral systems!)
August 26, 2010
On "Chimp Brain"
A friend of mine believes that the desire for recognition and admiration is generally something to be overcome, not acted on. He maintains that this desire is a detrimental vestige of our simian ancestry, a maladaptive tendency in a world in which generalized status-seeking is only worthwhile for aspiring politicians, celebrities, and the like. In other words, people like me should stop thinking with their "chimp brains" and should instead focus on attaining more substantive returns such as knowledge about an interesting subject, better financial discipline, or the esteem of a few close friends. (Or we should become aspiring politicians, celebrities, and the like.) For instance, I shouldn't care if someone is wrong on the internet, except insofar as it shapes my position on an issue worth taking a position on.
As a blogger (someone is right on the internet!), simiophile, and all-around highly competitive person, I found my feathers ruffled by this view. I wondered whether I'm indeed unduly concerned with what an unduly broad group of people think of me -- a group that surely includes some people who, taking after Howard Roark, don't think of me. After all, I was basically serious when, in my first post (on why I'm blogging), I wrote: "I want to show off. (It’s okay now that I admit it, right?) I want you to think I’m even more insightful, funny, interesting, reasonable, and infallible."
On reflection, I agree with my friend that I would be better off if my chimp brain were less active. Although I believe that most activities and interactions are inevitably competitive and relevant to one's status (think of, say, any conversation in which you were striving to be funny, smart, and/or sociable, even if you weren't consciously trying to outperform your friends), I would like to approach them in a less competitive and status-seeking manner. I would also like to devote more time and energy towards activities that provide me with non-status-based rewards (e.g., reading up on issues instead of blogging about them, assuming blogging even advances my status). But these things are easier said than done, and it's not clear to me what the optimal balance is -- competitiveness and status-seeking are not inherently bad things.
That said, I want to endeavor to act more in accordance with the higher parts of my brain. For one, I want to pick my intellectual battles more wisely. I've always been reluctant to end an argument by "agreeing to disagree," because I believe that the vast majority of disagreements between reasonable people are not the result of differences in values, of which true impasses are made. Rather, I think that given enough effort and patience, reasonable people can pin down and work out the empirical and/or logical differences that underlie their disagreements. But putting in -- and demanding -- such effort and patience is not always worth it; it depends on the importance of the issue in question and the characteristics of the parties, and it risks breeding animosity. Accordingly, I want to keep in mind that agreeing to disagree does not necessarily entail writing off one's interlocutor as unreasonable, irrational, or both (except on an internet forum) -- it can simply be the result of the mature recognition that the truth is not worth pursuing at all costs.
A second practical example of the more elevated thinking to which I aspire is, frankly, having more reasonable expectations about the amount of attention I can get by demanding it. To quote my initial post again, I wrote that "I'm always happy to devote some time to the works of friends; there's something markedly more interesting about the products of minds with which I am familiar." (Naturally, I made this statement in the context of blegging for readers.) Perhaps this is a common sentiment, but I feel it's particularly strong in me. For example, I would be eager to look at a friend's paintings or listen to a friend's music, even if I didn't expect them to be dripping with artistic merit (feel free to call me on this). Indeed, I feel compelled to read my friends' blogs (and, until a recent bout of sensibility, Google Reader feeds) in their entirety, even if not every post is my cup of tea. On the other hand, most people I know are much more selective in their attentions. They're willing to give my creations and recommendations some precedence, but they're more willing to just pursue their interests. Ultimately, I shouldn't expect others to share my interests so closely. People, no matter how compatible, are inescapably separated by myriad differences in genes and environment. And we're all full of foibles. Healthy relationships of all kinds thus involve tolerance, humility, and sacrifices. This, too, I will keep in mind.
In light of the above, this will probably be my last post. Thanks for reading.
August 24, 2010
In the Backseat
You may have come across this Pulitzer Prize-winning article about caring parents who carelessly leave their babies to die in their hot cars. The article rekindled my anger at the moralizing masses (likely the same people who make it impossible for state legislatures and prison wardens to end the counterproductive, torturous, and widespread practice of long-term solitary confinement) and sparked the following rants, culled from a couple of emails I wrote.
Most people's reactions to these cases ("frothing vitriol" in the author's words) -- like most people and their reactions to most bad things -- are unreasonable and disgusting. People need to be taught to reason about emotional issues. Why don't schools teach subjects such as personal finance and practical psychology (which, of course, has implications for personal finance)? I've long believed that understanding one's limitations is a significant step in freeing oneself from them. For example, I've fortunately always been disinclined to make the fundamental attribution error, but learning about it (in high school) -- about how demonstrably flawed most people's judgments are -- really hammered the point home. Unfortunately, my psychology teacher didn't emphasize that the experiments we studied reveal how we are inclined to think and act in the real world. Although this observation is obvious to us -- it's the whole idea of experimental psychology -- that doesn't mean it shouldn't be underscored in the classroom. A little preaching can be a good thing.
Also, we should have trained, vetted, professional jurors.
***
Basically, it's essential to think about one's own thinking -- to metacogitate -- and to not just react like so many people do. I think the main reason we prosecute 60% of the parents who unintentionally leave their babies to die in their cars is that people think they could never do something like that, that it's something only a monster or a reckless person could do. That's not true, and prosecuting the parents is counterproductive: it costs society resources that could be used to prosecute real criminals; it further ruins the lives of these parents and the rest of their families (including any other kids they have to care for); and it encourages a moralistic, as opposed to a practical, justice system. People often talk about being willing to leave certain matters "in God's hands." Well, this is precisely the kind of situation where the human justice system should lay off.
August 12, 2010
The Economic Justifications for Government Support of Technological Advancement
The following is culled from a paper I wrote (footnotes omitted):
The development and deployment of technologies for combating climate change should not be left to the private sector alone, even if governments were to take the essential step of pricing the externality of greenhouse gas emissions. Economics demonstrates that implementing a carbon tax or an emissions permit trading system is the most cost-effective method of achieving the indispensable goal of inducing private actors to factor the social cost of emissions into their decisions. Instituting such a policy is the single most significant step that governments can take to mitigate climate change. But it is a necessary step, not a sufficient one. For economics also demonstrates that the technology sector is plagued by its own set of market failures, which entail that emissions pricing alone will not give firms the optimal incentive to develop and deploy technologies for producing cleaner energy. In turn, the marginal cost of achieving a given unit of emissions reduction will be higher than is ideal. The public sector must intervene in order to ensure the efficient level of technological investment. As Adam B. Jaffe, Richard G. Newell, and Robert N. Stavins aptly recapitulate, we should “view technological change relative to the environment as occurring at the nexus of two distinct and important market failures: pollution represents a negative externality, and new technology generates positive externalities. Hence, in the absence of public policy, new technology for pollution reduction is, from an analytical perspective, doubly underprovided by markets.”
The first subsection below explicates the failures in the R&D market that justify public support. The second subsection deals specifically with the related, but distinct, set of market failures that impede the private deployment and diffusion of clean energy technology.
Failures in the Market for Research and Development
The primary failure in the R&D market is that technological innovation creates positive externalities in the form of “knowledge spillovers,” so the market produces too little innovation. R&D generates knowledge that has the characteristics of a public good: one individual’s consumption of the good does not reduce the amount of the good available for consumption by others, and no one can effectively be excluded from using the good. Because firms cannot capture all of the benefits of R&D, they have socially suboptimal incentives to engage in it. Some studies of commercial innovation have concluded that, on average, originators only appropriate about half of the gains from R&D. Hence the value of government intervention in the market.
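The appropriation problem can be made concrete with a toy model (the functional form and every number here are my own hypothetical choices, not drawn from the paper or the cited studies): when a firm captures only half of the social gains from its R&D, it invests far below the socially optimal level.

```python
import numpy as np

# Toy model -- functional form and all figures are hypothetical, not from the paper.
def social_benefit(x):
    """Total social benefit of R&D spending x, with diminishing returns."""
    return 100 * np.sqrt(x)

appropriation = 0.5  # originators capture only ~half of the gains (per the studies cited)

x = np.linspace(1, 5000, 500_000)
private_opt = x[np.argmax(appropriation * social_benefit(x) - x)]  # ~625
social_opt = x[np.argmax(social_benefit(x) - x)]                   # ~2500: 4x the private level
```

Under these assumed curves the private optimum is a quarter of the social one; the size of the gap depends entirely on the assumed returns, but its direction does not.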
A second basis for intervention is that most of the benefits of climate change mitigation are so long-term as to be outside the planning horizons of private funding instruments. Private firms are obligated to focus on private costs, benefits, and discount rates in order to satisfy their shareholders. This can result in insufficient emphasis on returns, however valuable, that will not materialize until far into the future, when many shareholders have reached the Keynesian long-run. Moreover, these issues are compounded by the considerable uncertainty – and perceived uncertainty – about climate change, which renders long-term returns impossible to precisely quantify. Several studies have found that the “implicit discount rates” that firms use when making decisions about investment in long-term climate change mitigation are frequently much higher than market interest rates due to various market barriers and failures, such as inadequate information. Firms are generally suboptimally aware of energy conservation opportunities and often lack the expertise necessary to implement them – another instance of the underprovision of the public good of knowledge.
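The bite of an inflated implicit discount rate can be sketched in a few lines (the rates and cash flow below are hypothetical): a payoff thirty years out that looks respectable at a market rate is worth almost nothing to a firm discounting at 20%.

```python
# Hypothetical rates and cash flow, for illustration only.
def npv(payoff, years, rate):
    """Present value of a single payoff received `years` from now."""
    return payoff / (1 + rate) ** years

payoff, years = 1_000_000, 30
market_rate = 0.05    # assumed market interest rate
implicit_rate = 0.20  # assumed inflated "implicit discount rate"

pv_market = npv(payoff, years, market_rate)      # ~$231,000: may clear the hurdle
pv_implicit = npv(payoff, years, implicit_rate)  # ~$4,200: effectively worthless
```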
A final impediment to R&D funding is the asymmetry of information between innovators and potential investors about the prospects of new technologies. Innovators tend to be in a better position to assess the potential of their work, so favorable assessments are usually met with skepticism and demands for greater risk premiums. This intensifies the knowledge spillover problem because subsequent producers of a successful technology will be able to obtain financing on better terms. All of the above difficulties are surely magnified in today’s credit-constrained market, preventing even more of the up-front funding that R&D requires.
One means of addressing the mismatch of private and social returns is the enforcement of intellectual property rights. For example, patents grant innovators temporary monopolies on their innovations, which they can use to charge monopoly prices and thereby possibly recoup their share of the full social value of their innovations. In the absence of such protections, creators who market new technologies will likely soon face competition from others who take advantage of the technologies’ public availability and produce their own versions. Although the original developer may have a head start in marketing her development and may be able to command a greater market share due to her status as originator, these market-based incentives are tenuous, ephemeral, and uncertain. For instance, it is equally likely in theory that an imitator will be able to produce the good or service more cheaply or at a higher quality. It is also likely that consumers will anticipate the entrance of such competitors and therefore refrain from consumption during the innovator’s initial marketing. Thus economic analysis supports the use of intellectual property rights to ensure that inventors will be adequately driven by the profit motive. Patents are certainly a key component of any pro-innovation policy scheme.
However, intellectual property rights are neither a sufficient nor always desirable response to the failures of the R&D market. To begin with, much of the value of a given R&D project may consist of knowledge that is, for good reason, outside the scope of the intellectual property rights regime. The Stern Review offers the example of “tacit knowledge,” which is vague and does not satisfy the requirements of patentability. Another concern is that due to the inherent uncertainty of legal regulation, patents and the like do not always preclude all of the competition that prevents innovators from reaping their full rewards. Additionally, the prospect of monopoly pricing may not be a sufficient incentive to encourage risky, large-scale basic research. Analogously, pharmaceutical companies are known not to develop treatments for diseases that affect a sufficiently small segment of the population. As for the undesirability of robust intellectual property rights protection in the R&D context, one downside is that it can hinder or even cripple progress by preventing firms from building on each other’s successes and learning from each other’s failures. The early stages of innovation are often characterized by a high degree of uncertainty due to the lack of a well-defined path to progress. When there are multiple R&D avenues worth exploring, it pays to have multiple firms collaborating. Industry-wide collaboration is also vital for achieving big breakthroughs in basic science, such as those necessary to commercialize hydrogen fuel cell automobiles, which a single company is ill-equipped to deliver. This cooperation is unlikely to materialize if individual firms are given the incentive to keep their efforts under wraps while hoping to free-ride on the work of others.
In light of these concerns, many economists advocate direct government subsidization of technological innovation, especially at the level of R&D. This funding can take several forms, such as government-performed research, government contracts, grants, tax breaks, technology prizes, and incentives for students to study science and engineering. Of course, the government and private organizations can tailor these options to meet their needs in a given situation. Economists also promote government efforts to remedy the excessive myopia and uncertainty with which private decisions about long-term investment in climate change mitigation are fraught. These programs take a variety of forms, including educational workshops and training programs for professionals, advertising, product labeling, and energy audits of manufacturing plants.
Failures in the Market for Deployment and Diffusion
Although economists more strongly and consistently back government support for R&D, many also call for government responses to imperfections in the markets for technology deployment and diffusion. Successful R&D does not guarantee that the resulting innovation will immediately be deployed; market forces also govern firms’ decisions about technological implementation.
Economists have identified that firms in an industry may face a collective action problem when deciding whether to adopt new technologies that exhibit “dynamic increasing returns.” This phenomenon exists when the value of a technology to one user depends on how many other users have adopted the technology; in other words, the more users there are, the better off they will be. There are three types of positive externalities that generate dynamic increasing returns: “learning-by-using,” “learning-by-doing,” and “network externalities.” As in the R&D context, these externalities may give rise to a collective action problem because each firm can partake of the public fruits produced by the first adopters of a new technology; in turn, each firm is disinclined to be at the vanguard of technological adoption, and the deployment of worthwhile technology is unproductively delayed. Learning-by-using refers to the fact that the initial users of an innovation generate valuable public information about it, such as its existence, characteristics, and performance. Learning-by-doing, the “supply-side counterpart,” refers to the fact that production costs fall as firms gain experience, which they cannot fully keep to themselves. For one, a product itself usually provides insight into its production. Lastly, network externalities exist when the value of an innovation increases as others adopt a compatible product. Telephone and computer networks are obvious examples.
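The resulting collective action problem can be rendered as a toy adoption game (all parameters hypothetical): each prior adopter raises the private payoff of adopting, so adoption is unprofitable for the first movers and every firm rationally waits for the others.

```python
# Stylized adoption game -- all parameters hypothetical.
def adoption_payoff(prior_adopters, base=10, spillover=3, cost=20):
    """Private payoff of adopting when `prior_adopters` firms already have,
    each prior adopter contributing a positive externality of `spillover`."""
    return base + spillover * prior_adopters - cost

payoffs = [adoption_payoff(n) for n in range(6)]
# payoffs == [-10, -7, -4, -1, 2, 5]: the first mover loses money, and adoption
# only pays once four firms have gone first -- so every firm prefers to wait.
```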
In addition to these externalities, certain characteristics of the power generation sector further deter and postpone the deployment of technologies that are expensive and commercially unproven. First, the sector is subject to a high degree of regulation and tends to be quite risk averse. Second, technologies that do not easily fit into existing infrastructures such as power grids and gas stations are unlikely to enter the market until demand rises and/or costs fall enough for the industry to act en masse. For example, national grids are usually designed with central, as opposed to distributed, power plants in mind, and CCS will require the construction of new pipelines. Third, there are market distortions such as the aforementioned fossil fuel subsidies, which make it even harder for new technologies to compete. Finally, energy markets tend not to be particularly competitive. The oil market is a well-known oligopoly, dominated by a multinational cartel, and electricity generation is a natural monopoly, in that a single firm can provide power at the lowest social cost due to economies of scale.
These problems can be mitigated or eliminated if there exists a niche market that is willing to pay a high price for early access to an innovation, as was the case with the first mobile phones. Niche markets can enable the initial producer of an advanced technology to profit despite subsequently facing competition from other firms that have the benefit of following in its footsteps and taking advantage of the externalities described above. Otherwise, originators must hope that they can eventually turn a profit – for instance by initially selling at a loss and then maintaining a dominant market share as costs fall and the market expands, perhaps by virtue of customer goodwill. Neither of these scenarios is likely in the energy market due to the homogenous nature of the end product (e.g., electricity), which makes it difficult for innovators to distinguish themselves. Although niche markets for carbon-free electricity exist, they are too small to make the costly implementation of advanced technology worthwhile. Consequently, established technologies can become locked-in, progressing only incrementally. At worst, the Stern Review notes that “energy generation technologies can fall into a ‘valley of death’, where despite a concept being shown to work and have long-term profit potential they fail to find a market.”
There are several desirable public policy responses that can be tailored to different problems in different markets. As in the R&D context, various types of subsidies and information programs can counteract market failures. The government can also use energy efficiency standards, such as emissions quotas, to force firms in an industry to implement environmentally friendly technologies.
The development and deployment of technologies for combating climate change should not be left to the private sector alone, even if governments were to take the essential step of pricing the externality of greenhouse gas emissions. Economics demonstrates that implementing a carbon tax or an emissions permit trading system is the most cost-effective method of achieving the indispensable goal of inducing private actors to factor the social cost of emissions into their decisions. Instituting such a policy is the single most significant step that governments can take to mitigate climate change. But it is a necessary step, not a sufficient one. For economics also demonstrates that the technology sector is plagued by its own set of market failures, which entail that emissions pricing alone will not give firms the optimal incentive to develop and deploy technologies for producing cleaner energy. In turn, the marginal cost of achieving a given unit of emissions reduction will be higher than is ideal. The public sector must intervene in order to ensure the efficient level of technological investment. As Adam B. Jaffe, Richard G. Newell, and Robert N. Stavins aptly recapitulate, we should “view technological change relative to the environment as occurring at the nexus of two distinct and important market failures: pollution represents a negative externality, and new technology generates positive externalities. Hence, in the absence of public policy, new technology for pollution reduction is, from an analytical perspective, doubly underprovided by markets.”
The importance of factoring technological change into an analysis of the cost of abating greenhouse gas emissions should not be underestimated. As explained previously, the development of new technologies, the commercialization of viable innovations, and the employment of readily available advancements make up the world’s toolkit for enabling the benefits of ravenous energy consumption not to come at the cost of our planet and our future. The aforementioned authors note that “the single largest source of difference among modelers’ predictions of the cost of climate policy is often differences in assumptions about the future rate and direction of technological change.” Good technology policy should render these assumptions more favorable, thereby lowering the expected cost of emissions abatement.
The first subsection below explicates the failures in the R&D market that justify public support. The second subsection deals specifically with the related, but distinct, set of market failures that impede the private deployment and diffusion of clean energy technology.
Failures in the Market for Research and Development
The primary failure in the R&D market is that technological innovation creates positive externalities in the form of “knowledge spillovers,” so the market produces too little innovation. R&D generates knowledge that has the characteristics of a public good: one individual’s consumption of the good does not reduce the amount of the good available for consumption by others, and no one can effectively be excluded from using the good. Because firms cannot capture all of the benefits of R&D, they have socially suboptimal incentives to engage in it. Some studies of commercial innovation have concluded that, on average, originators only appropriate about half of the gains from R&D. Hence the value of government intervention in the market.
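The appropriability problem can be made concrete with a stylized sketch of my own (the square-root benefit function and its scale are illustrative assumptions, not figures from the studies cited). Suppose R&D spending x yields social benefits of 100√x, but a firm captures only half of those gains, in line with the commercial-innovation studies. Each party invests until its marginal benefit equals the marginal cost of a dollar of R&D:

```python
def optimal_investment(appropriation, scale=100.0):
    """Investment x maximizing appropriation * scale * sqrt(x) - x.
    First-order condition: appropriation * scale / (2 * sqrt(x)) = 1,
    so the optimum is x = (appropriation * scale / 2) ** 2."""
    return (appropriation * scale / 2.0) ** 2

social = optimal_investment(1.0)   # society captures all gains -> 2500.0
private = optimal_investment(0.5)  # firm appropriates ~half -> 625.0
```

Under these assumptions, halving appropriability quarters the investment: the firm spends only a quarter of the socially optimal amount, and that gap is precisely the underprovision that subsidies and intellectual property rights aim to close.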
A second basis for intervention is that most of the benefits of climate change mitigation are so long-term as to be outside the planning horizons of private funding instruments. Private firms are obligated to focus on private costs, benefits, and discount rates in order to satisfy their shareholders. This can result in insufficient emphasis on returns, however valuable, that will not materialize until far into the future, when many shareholders have reached the Keynesian long-run. Moreover, these issues are compounded by the considerable uncertainty – and perceived uncertainty – about climate change, which renders long-term returns impossible to precisely quantify. Several studies have found that the “implicit discount rates” that firms use when making decisions about investment in long-term climate change mitigation are frequently much higher than market interest rates due to various market barriers and failures, such as inadequate information. Firms are generally suboptimally aware of energy conservation opportunities and often lack the expertise necessary to implement them – another instance of the underprovision of the public good of knowledge.
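The bite of inflated implicit discount rates shows up immediately in a net-present-value calculation. The project below is hypothetical (the costs, payoffs, and both rates are illustrative assumptions, not numbers from the studies cited): an abatement investment whose benefits arrive only decades out looks profitable at a plausible market interest rate but worthless at the kind of high implicit rate the studies report.

```python
def npv(cashflows, rate):
    """Present value of (year, amount) cash flows discounted at `rate`."""
    return sum(amount / (1 + rate) ** year for year, amount in cashflows)

# Hypothetical project: spend 5 (say, $5M) today; abatement benefits of 1
# per year arrive only in years 20 through 49.
flows = [(0, -5.0)] + [(t, 1.0) for t in range(20, 50)]

at_market_rate = npv(flows, 0.04)    # positive: worthwhile at a 4% market rate
at_implicit_rate = npv(flows, 0.20)  # negative: rejected at a 20% implicit rate
```

The cash flows are identical in both evaluations; the project flips from acceptance to rejection solely because of the discount rate applied to its distant returns.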
A final impediment to R&D funding is the asymmetry of information between innovators and potential investors about the prospects of new technologies. Innovators tend to be in a better position to assess the potential of their work, so favorable assessments are usually met with skepticism and demands for greater risk premiums. This intensifies the knowledge spillover problem because subsequent producers of a successful technology will be able to obtain financing on better terms. All of the above difficulties are surely magnified in today’s credit-constrained market, preventing even more of the up-front funding that R&D requires.
One means of addressing the mismatch of private and social returns is the enforcement of intellectual property rights. For example, patents grant innovators temporary monopolies on their innovations, which they can use to charge monopoly prices and thereby possibly recoup their share of the full social value of their innovations. In the absence of such protections, creators who market new technologies will likely soon face competition from others who take advantage of the technologies’ public availability and produce their own versions. Although the original developer may have a head start in marketing her development and may be able to command a greater market share due to her status as originator, these market-based incentives are tenuous, ephemeral, and uncertain. For instance, it is entirely possible that an imitator will be able to produce the good or service more cheaply or at a higher quality. Consumers may also anticipate the entrance of such competitors and therefore refrain from consumption during the innovator’s initial marketing. Thus economic analysis supports the use of intellectual property rights to ensure that inventors will be adequately driven by the profit motive. Patents are certainly a key component of any pro-innovation policy scheme.
However, intellectual property rights are neither a sufficient nor always desirable response to the failures of the R&D market. To begin with, much of the value of a given R&D project may consist of knowledge that is, for good reason, outside the scope of the intellectual property rights regime. The Stern Review offers the example of “tacit knowledge,” which is vague and does not satisfy the requirements of patentability. Another concern is that due to the inherent uncertainty of legal regulation, patents and the like do not always preclude all of the competition that prevents innovators from reaping their full rewards. Additionally, the prospect of monopoly pricing may not be a sufficient incentive to encourage risky, large-scale basic research. Analogously, pharmaceutical companies are known not to develop treatments for diseases that affect a sufficiently small segment of the population. As for the undesirability of robust intellectual property rights protection in the R&D context, one downside is that it can hinder or even cripple progress by preventing firms from building on each other’s successes and learning from each other’s failures. The early stages of innovation are often characterized by a high degree of uncertainty due to the lack of a well-defined path to progress. When there are multiple R&D avenues worth exploring, it pays to have multiple firms collaborating. Industry-wide collaboration is also vital for achieving big breakthroughs in basic science, such as those necessary to commercialize hydrogen fuel cell automobiles, which a single company is ill-equipped to deliver. This cooperation is unlikely to materialize if individual firms are given the incentive to keep their efforts under wraps while hoping to free-ride on the work of others.
In light of these concerns, many economists advocate direct government subsidization of technological innovation, especially at the level of R&D. This funding can take several forms, such as government-performed research, government contracts, grants, tax breaks, technology prizes, and incentives for students to study science and engineering. Of course, the government and private organizations can tailor these options to meet their needs in a given situation. Economists also promote government efforts to remedy the excessive myopia and uncertainty with which private decisions about long-term investment in climate change mitigation are fraught. These programs take a variety of forms, including educational workshops and training programs for professionals, advertising, product labeling, and energy audits of manufacturing plants.
Failures in the Market for Deployment and Diffusion
Although economists more strongly and consistently back government support for R&D, many also call for government responses to imperfections in the markets for technology deployment and diffusion. Successful R&D does not guarantee that the resulting innovation will immediately be deployed; market forces also govern firms’ decisions about technological implementation.
Economists have identified that firms in an industry may face a collective action problem when deciding whether to adopt new technologies that exhibit “dynamic increasing returns.” This phenomenon exists when the value of a technology to one user depends on how many other users have adopted the technology; in other words, the more users there are, the better off they will be. There are three types of positive externalities that generate dynamic increasing returns: “learning-by-using,” “learning-by-doing,” and “network externalities.” As in the R&D context, these externalities may give rise to a collective action problem because each firm can partake of the public fruits produced by the first adopters of a new technology; in turn, each firm is disinclined to be at the vanguard of technological adoption, and the deployment of worthwhile technology is unproductively delayed. Learning-by-using refers to the fact that the initial users of an innovation generate valuable public information about it, such as its existence, characteristics, and performance. Learning-by-doing, the “supply-side counterpart,” refers to the fact that production costs fall as firms gain experience, which they cannot fully keep to themselves. For one, a product itself usually provides insight into its production. Lastly, network externalities exist when the value of an innovation increases as others adopt a compatible product. Telephone and computer networks are obvious examples.
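Learning-by-doing is commonly modeled with an experience curve, under which unit costs fall by a fixed fraction with each doubling of cumulative output. A minimal sketch (the 20% learning rate and 100-unit first cost are illustrative assumptions, not parameters from the text):

```python
import math

def unit_cost(cumulative_output, first_unit_cost, learning_rate):
    """Experience-curve ("Wright's law") cost: unit cost falls by
    `learning_rate` (e.g. 0.2 = 20%) with every doubling of cumulative
    output, via the progress exponent b = -log2(1 - learning_rate)."""
    b = -math.log2(1.0 - learning_rate)
    return first_unit_cost * cumulative_output ** -b

c1 = unit_cost(1, 100.0, 0.2)  # first unit: 100.0
c2 = unit_cost(2, 100.0, 0.2)  # one doubling: ~80.0
c4 = unit_cost(4, 100.0, 0.2)  # two doublings: ~64.0
```

Because the cost declines accrue to everyone in the industry, no single early adopter captures the full value of driving down this curve, so each firm rationally waits for the others to move first.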
In addition to these externalities, certain characteristics of the power generation sector further deter and postpone the deployment of technologies that are expensive and commercially unproven. First, the sector is subject to a high degree of regulation and tends to be quite risk averse. Second, technologies that do not easily fit into existing infrastructures such as power grids and gas stations are unlikely to enter the market until demand rises and/or costs fall enough for the industry to act en masse. For example, national grids are usually designed with central, as opposed to distributed, power plants in mind, and CCS will require the construction of new pipelines. Third, there are market distortions such as the aforementioned fossil fuel subsidies, which make it even harder for new technologies to compete. Finally, energy markets tend not to be particularly competitive. The oil market is a well-known oligopoly, dominated by a multinational cartel, and electricity generation is a natural monopoly, in that a single firm can provide power at the lowest social cost due to economies of scale.
These problems can be mitigated or eliminated if there exists a niche market that is willing to pay a high price for early access to an innovation, as was the case with the first mobile phones. Niche markets can enable the initial producer of an advanced technology to profit despite subsequently facing competition from other firms that have the benefit of following in its footsteps and taking advantage of the externalities described above. Otherwise, originators must hope that they can eventually turn a profit – for instance by initially selling at a loss and then maintaining a dominant market share as costs fall and the market expands, perhaps by virtue of customer goodwill. Neither of these scenarios is likely in the energy market due to the homogeneous nature of the end product (e.g., electricity), which makes it difficult for innovators to distinguish themselves. Although niche markets for carbon-free electricity exist, they are too small to make the costly implementation of advanced technology worthwhile. Consequently, established technologies can become locked-in, progressing only incrementally. At worst, as the Stern Review notes, “energy generation technologies can fall into a ‘valley of death’, where despite a concept being shown to work and have long-term profit potential they fail to find a market.”
There are several desirable public policy responses that can be tailored to different problems in different markets. As in the R&D context, various types of subsidies and information programs can counteract market failures. The government can also use energy efficiency standards, such as emissions quotas, to force firms in an industry to implement environmentally friendly technologies.
July 23, 2010
When Is It Reasonable to Be Angry With, or to Dislike, Someone?
Yesterday, out of the blue, Grobstein asked me whether I'd consider it reasonable to be angry with, or to dislike, someone just because he's causing me pain, however justified. Specifically, Grob asked whether I'd necessarily feel anger or hatred towards my torturer if I were being tortured, pursuant to a legitimate warrant, because I was suspected of knowing the location of a ticking time bomb.
I responded that I wouldn't, essentially because I consider it reasonable to dislike people based on their characteristics, not on their actions alone. For example, if someone got into a car accident with me, and it wasn't my fault, I wouldn't necessarily be angry with her; I would reserve judgment pending information regarding her state of mind -- was she, say, reckless, or was she doing her best but handicapped by inexperience? Similarly, for all I know my torturer is a cool guy who's just doing his job.
Upon further reflection, I think there's a noteworthy distinction between the reasonable grounds for disliking someone and the reasonable grounds for being angry with someone. When I'm legitimately angry with someone, I think it's necessarily because I believe she acted badly. I can't think of a situation in which I'd be legitimately angry with someone (as opposed to upset at my circumstances) but not think she should have behaved differently -- in a way that wouldn't have reasonably roused my wrath. Thus, I believe that being legitimately angry with someone implies that I consider her blameworthy.
On the other hand, I think the reasonable grounds for disliking someone are much broader, albeit circumscribed by proscriptions against prejudice, bigotry, and the like. Blameworthiness entails culpability, whereas one can reasonably be considered unlikable for all sorts of non-blameworthy reasons, such as having a bad sense of humor or being prejudiced against nerdy forms of entertainment. In other words, it's okay to dislike someone for having "bad" tastes, even if the existence of these tastes isn't his fault or, indeed, isn't a fault at all. However, we shouldn't be too quick to judge. We owe it to ourselves and others not to -- or at least to try not to -- feel distaste towards someone unless we have some idea of the content of her character.
I'm not sure how to defend my position in the abstract other than by noting that I regard attitudes such as anger and dislike as inescapably directed at dispositions, not actions. Although I typically say I feel angry at someone because of something he did, I think my anger stems from my view of the other person's motivations. Hurting me is not sufficient grounds for me to be angry with you; after all, you may have a good excuse, or even a justification. I may nevertheless be upset, but I wouldn't be upset with you -- at least, I don't think it would be reasonable for me to be, because you haven't exhibited an upsetting disposition. Analogously, I may be angry at losing a competition, but I shouldn't be angry with the winner if she was a good sport.
Ultimately, my position comes down to my view of anger and dislike as necessarily entailing judgments of character. When I say I'm angry with someone, I basically mean I think he's being an asshole. When I say I dislike someone, I basically mean that I find her unpleasant to deal with on the whole.
I'll add that I feel that reserving anger and dislike in these ways is a worthwhile form of self-mastery and facilitates good judgment. Consider the fundamental attribution error, which counsels against, for example, assuming that someone else who runs a red light must be a jerk (a dispositional explanation), while claiming that it was an emergency when we engaged in the same behavior (a situational explanation). We ought to scrutinize the bases of our anger and dislike lest we fall into such psychological traps -- lest we become jerks ourselves.
Grob disagrees with me -- he'd be angry with his torturer. Here's his view, as expressed to me in correspondence:
"It blows my mind that (you claim) you do not hate the torturer. You are not friends with your friends because they are the kind of people who are your friends. You are friends with them because of history and contingent circumstance. Given a different history and (especially) more social mastery, you would be friends with different people – indeed, different kinds of people. You give your kindness and loyalty to your friends (so I hope) even though they are not at bottom the most deserving. How could they be, given the happenstance that has led to your connections? Similarly, I hate the person who is my enemy, who is hurting or trying to destroy me, even if they have a good excuse, and even if in a counterfactual world we could have been friends. It’s bizarre to me that you (claim to) weigh the procedural safeguards before you decide how you feel about your tormentor – well, if there are torture warrants, and Posner signed off on mine, then shit. Perhaps those things are relevant. I do not think they are determinative. It seems crazy that you (claim to) believe that how someone is treating you is actually irrelevant to your relationship with that person.
"There may be some ideal sense in which hate is never an appropriate emotion, and we should all strive to be more Christ-like. Or perhaps we should learn to somehow accept but not condone hate in ourselves, so we do not dwell on it, or whatever. I try not to dwell on it. But I have not been searching for or describing ideal attitudes – just personal ones that I think are “reasonable.” It makes sense to point out here the scarier implications of Christ-like social ethics. According to Luke, J.C. says, “If anyone comes to Me and does not hate his father and mother, wife and children, brothers and sisters, yes, and his own life also, he cannot be My disciple.” In other words, these are ultimately contingent attachments or granfalloons – you just happen to be your parents’ children; why should you have any special feeling towards them? But attachments are probably psychologically impossible without loyalty, and loyalty means ignoring the merits and privileging contingent history. To be this way – as I think we must – means accepting to some extent that our feelings must be ruled by immediate circumstance, however arbitrary. To overcome this is a “self-mastery” that destroys something valuable."
What do you think?
June 25, 2010
Shameless Plug - Towards a Theory of Hybrid Speech
November 11, 2009
October 16, 2008
Exchange of the Day
Me: For whose sake have you forsaken me, for fuck's sake?
Grizzled Man: I'd forsake you for any man's sake. I like sake.
October 10, 2008
Casey Needs to Take a Mulligan
In the black corner, wearing nothing but a smile, "An Economy You Can Bank On." In the red corner, wearing an 800-pound-gorilla suit, "Rescuing Our Jobs And Savings: What G7/8 Leaders Can Do To Solve The Global Credit Crisis."
October 9, 2008
It's Over 9,000!
"What?! Nine thousand!" Either standards have gone the way of the economy, or Paul Krugman's blog editor is one refined and culturally literate individual.
October 8, 2008
A Defense of a Paulson-esque Plan that I Hadn't Come Across
A Wall Street friend of mine made the following point: one advantage of a government purchase of toxic mortgage-backed securities over an equity infusion (which apparently most economists support) is that the government can hold these assets to maturity without having to worry about being margin called. This matters because banks aren't the only entities holding lots of crumbling paper; hedge funds and other non-bailed-out institutions are likely to make forced sales of MBS, further driving down their value. Consequently, banks would require more and more capital from the government in order to stay afloat -- probably more than the government would initially overpay if it bought MBS and then received an equity "true-up." My friend argues that the absence of this idea from mainstream discourse stems from economists' underappreciation of the interconnectedness of the many players and games that caused the financial crisis. Contagion is a serious concern; perhaps an equity infusion is a mere treatment, whereas a Paulson-esque plan constitutes a quarantine.
UPDATE: this is not to suggest that my friend or I believe that a Paulson-esque plan is overall better than an equity infusion, just that the above reasoning may be part of the method to the madness.
October 7, 2008
Thinking in Tongues
"Barracuda" (on Sarah Palin's lifelong if-you-can't-be-them-beat-them response to intellectual elites)
October 6, 2008
October 5, 2008
October 3, 2008
October 2, 2008
September 30, 2008
How I Want to Be Remembered
"He had a clear, honest face. I found my fondness for him difficult to reconcile with what I knew of his enthusiasm for killing people and making small children cry." -- Rory Stewart, on Abdul Haq
September 28, 2008
September 26, 2008
September 25, 2008
September 10, 2008
The Second-Smartest Guys in the Room
"Wide-Ranging Ethics Scandal Emerges at Interior Dept." ("Modeled on a private-sector energy company," indeed. "[S]exual relationships with prohibited sources cannot, by definition, be arms-length.” On you, maybe.)
September 8, 2008
September 5, 2008
September 3, 2008
LOL
The Times reports: Ms. Palin, who served for years as the mayor of Wasilla, received words of support from her successor, Dianne Keller on Tuesday. The Politico’s Kenneth P. Vogel reports that Mayor Keller suggested to reporters who have descended on the small town “that Palin’s six years at the helm of Wasilla, population 7,000, combined with her 20 months as governor of Alaska leave her better equipped to handle the executive branch than her GOP running mate, John McCain, or his Democratic competitors Barack Obama and Joe Biden, all of whom are U.S. senators.”
September 1, 2008
August 31, 2008