December 29, 2010

The Life Akratic With Gideon Rosen

I'm posting this discussion here because my blog seems to be a forum conveniens.

DG
Do you think people are inculpable for self-serving, mistaken moral judgments?

Me
I think it depends on why one made the mistake. Do you mean an honest (i.e., trying to do the right thing) mistaken moral judgment that happens to be self-serving? If so, the question seems to be whether the honest mistake was also reasonable -- not negligent or the like.

That said, as you know, I don’t think there’s a satisfactory “model” for moral blame. I’m down with the Gideon Rosen stuff (which has a lot in common with the “basic argument” that Strawson outlined in The Stone but does not rely on determinism), which concludes that we should be skeptics about moral responsibility because all allegedly culpable acts presumably stem ultimately from nonculpable ignorance of some sort. Then there are alternatives such as your person-based position, which I haven’t gotten around to really thinking about (I still need to reread that long email you sent a while back and read a handful of articles before I feel comfortable making any claims about it).

Speaking of reading things, I did very well on this paper I wrote for a philosophy seminar. I remember feeling confused upon rereading it, thinking I got mixed up somewhere, maybe got tautological. But it has at least some merit and is relevant to your question. I’d appreciate your thoughts.

DG
I suppose I think the existence of a self-serving tendency in mistaken judgments undermines the idea of an “honest” mistake; don’t you?

I think the tendency of someone’s mistakes to be convenient is a counterexample to the putative principle that mistaken judgments are non-culpable (which I take to be a premise of Rosen’s view, though I am not that well-versed in his views).

Me
On the traditional notion of moral responsibility, if someone reasonably tried to be honest (non-self-serving) in making a moral judgment but ended up getting it wrong in a way that was self-serving, he’s not culpable. Rosen would argue that one isn’t culpable for the self-serving tendency because it ultimately stems from something non-culpable. It’s a straightforward, neat argument.

DG
That’s really more like the “basic argument” from the NYT. Rosen’s position depends on the claim that, to be morally blameworthy, you must correctly judge an action to be wrong and nonetheless undertake it. That loses its force if you can be culpable for adopting a “convenient” wrong moral judgment. Now Rosen would retreat to his weaker notion of skepticism -- he is not saying that moral responsibility is impossible, just that it’s very hard to confidently identify. But I think the “convenience” of a moral judgment can appear so powerfully that this retreat is unavailing.

But yeah, you’re conflating Rosen’s position with the blunter form of skepticism sketched in the “basic argument,” probably because you find that argument very persuasive.

Me
I do find it persuasive, and I’m not sure about the subtlety you’re trying to get at. I take it you’re imagining someone who either (i) somehow intentionally convinces himself of a certain moral position because it’s self-serving (culpable on the traditional account), (ii) negligently comes to believe it (culpable, albeit less so, on the traditional account), or (iii) reasonably comes to believe it in good faith, though perhaps unconsciously disposed to believe it due to its self-serving nature (not culpable). Are you simply saying that (i) and (ii) are often easy to identify? If so, so what? This just pushes the problem back a step -- is the person really culpable for intentionally or negligently coming to believe the position?

I guess an underlying issue is how much metacognition we should expect of people when making moral judgments. This is, of course, a question that must be resolved by intuition to avoid infinite regress. But we can come up with an answer (e.g., a “reasonable” amount) and practically judge people.

I imagine most people whose moral judgments are both really off base and really self-serving don’t really believe the judgments or are really good at something like self-deception (for which they may or may not be culpable on the traditional account).

All this stuff makes me curious about all these neuroscience-based “models” of cognition that people are working on that really mess with concepts such as belief and intention. A lot of this stuff seems to come down to these seemingly irresolvable (at least with our current tools!) age-old debates, such as whether akrasia is possible, etc. I’m not sure we can get anywhere worthwhile if we start by assuming one side of these questions.

JB
This is a rather thorny problem; I remember discussing it in college. I think Rosen is persuasive and sophisticated on this issue and basically lays it out in the right way, although I think I come down on the other side of the result.

None of the forwarded emails seems to explain what level of subtlety DG thought you were missing. But what seems missing to me is the virtue-ethical (character-trait) component of the problem, which you ignore in this response but mention later when you discuss the tendency to self-serving self-deception.

So, assume culpable akrasia. I think it’s much easier to then apply blame to a pervasive, self-serving character trait developed and maintained over time.

To any individual bad act or bad belief, you can apply the honest-mistake argument. But if these are read as blameworthy merely because they supervene on bad character, well, I don’t think there are very many reasons why you could have self-serving self-deceptive bad character over the long term other than akrasia. Which is, by hypothesis, culpable.

I think about this in essentially statistical terms. For an individual bad act there’s only one trial: the probability of doing the bad act as an honest mistake, even without akrasia, is high compared to the probability of doing it as a result of akrasia. For the self-serving character trait there are many trials, and the probability of maintaining the character trait without culpable akrasia falls to zero as the number of trials increases. Thus the self-serving character trait is culpable, even if you are very willing to give people the benefit of the doubt as to honest mistakes (except maybe in children, who’ve had less chance to take responsibility for their character traits over time -- fewer trials, thus less justification for attributing their bad acts to culpable akrasia).
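To spell out the toy model (a sketch only -- it assumes independent trials and a constant per-trial probability, which real psychology surely violates):

```latex
% Toy model of the many-trials argument (illustrative assumptions only).
% Let p < 1 be the probability that any one convenient bad act is an
% honest mistake rather than the product of akrasia.
\[
  P(\text{a single act is honest}) = p, \quad \text{which may well be high,}
\]
\[
  P(\text{all of } n \text{ acts are honest}) = p^{n} \longrightarrow 0
  \quad \text{as } n \to \infty.
\]
% Even a generous p = 0.9 gives p^{20} of roughly 0.12, so sustained
% convenience is hard to explain without at least one culpable lapse.
```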

I thought this point of view was right in college; since then I’ve changed my mind about some pretty foundational issues without necessarily following through all of the implications, but it still seems reasonable to me.

Me
I don’t think DG really spelled it out, but it sounds like the position you’re advocating.

You say: “I don’t think there are very many reasons why you could have self-serving self-deceptive bad character over the long term other than akrasia.” Hmm. I was assuming that once the initial self-serving self-deception “kicks in,” the agent doesn’t reflectively maintain or reinforce it with every judgment; rather, it changes the basis of his specific moral deliberations such that in adhering to it he thinks he’s doing the right thing. In other words, I was assuming at most one “instance” of akrasia -- the self-deceived agent is already deceived and therefore no longer akratic. Is this psychologically realistic? I’m not sure, but I think it’s close. I imagine successful self-deception doesn’t involve much metacognition about the fact! The upshot is that it would be misguided to blame someone for “maintaining” a self-serving moral system that he’s simply come to believe (i.e., I’m not sure how realistic your statistical “model” is).

There remains, of course, the question of how likely it is that akrasia is the basis for an agent coming to believe a moral system that happens to be self-serving. To what extent are people really capable of intentionally sculpting their honest moral beliefs so that they will end up serving themselves whenever they subsequently make good faith moral judgments? I guess this is where Rosen raises an eyebrow. I forget what he says about the psychology of akrasia, but I guess he doubts that people have so much intentional control. This is where I start to wonder about the whole model of straightforward intentional decision-making. How do we really form “foundational” moral beliefs/systems? To what extent do we reflect on them when “applying” them? Can neuroscience help us?

Anyway, I think you raise some provocative points, but I’m inclined to side with Rosen, perhaps simply because of the basic difficulties with conceptualizing akrasia (even though I know a discussion of the traditional account of moral blame must assume its possibility!).

JB
Look, I agree with you, DG, and Rosen that it all comes down to your concept of akrasia. I tend to think, for Kantian reasons, that however we construct our concept of free will, it must be constructed in such a way that it validates moral responsibility -- i.e., validating moral responsibility is a basic criterion that an account of free will has to satisfy. I understand that it’s hard to reconcile culpable akrasia with determinism or other core concepts (this is what the Critique of Pure Reason is about, right? Inasmuch as I understood it). So we could talk about what free will would look like such that it validates akrasia; I think it’s hard not to end up with something that looks rather like religion, but that’s another story.

The other side of the issue is the model of character. Basically the question is your model of “bad character”: is a “bad character” strongly persistent, such that once it kicks in it requires no further culpable actions, or is it not? I think this is actually two issues. First, can you get by without ever considering whether you are a bad guy and need to think things through? Second, given that you’ve considered the issue, is it realistic to expect you to try to resculpt your character so that you’re not such a bad guy anymore?

I think issue one is pretty easy, and you sort of brush over it in the opposite direction in a really implausible way. It’s not like you can just, you know, develop a trait of consistent self-deception and then never consider it again, unless you live in total isolation. Culture prompts you quite a lot to question your values and think about integrity. I think this is actually basically what a lot of popular culture is for, and why people are interested in it. Hey, sometimes individuals prompt you, although I’ve come to learn that this is considered rude.

Issue two is harder, but look: I don’t think you really believe that it’s impossible for people to resculpt their character. I think at most you really believe that it’s possible but sort of too expensive for us to reasonably expect more than a small number of people to do it. Note that this isn’t really a logical or metaethical objection: there’s nothing about the concepts of right and wrong or blaming which says that it has to be easy to do the right thing. But many people have the intuition that if a moral system is so harsh that it cannot be satisfied without extraordinary pain or difficulty or effort, then it is not a “moral” system at all (I don’t have this intuition). I don’t know what this issue is called; I think of it as the problem of moral gravity (in the sense of weight -- I think I’m thinking of Giles Corey).

Look, neuroscience could tell us that it’s not really possible for people to resculpt their character, but I don’t really expect it to do so.

Another dimension which I haven’t addressed arises from moral uncertainty: “right action” or “right character” aren’t known and might not really be knowable for an individual, which might make us suspicious of the idea of restructuring our character, or suspicious of people who restructure theirs.

PS, I think I don’t believe that people have foundational moral beliefs or systems. Or rather, I don’t think that people really rely on them very much when making private moral decisions -- I think the function moral systems serve is more a way of organizing public moral decision-making in society.

Me
Interesting ideas in your first paragraph; seem promising to me. I also more or less agree with your last four paragraphs. But I want to distinguish two things that you seem to mention together in paragraph 4: the requirements of morality and the requirements of blameworthiness. I share your intuition that a correct moral obligation need not be “sufficiently easy” to follow, only possible. But the fact that it’s likely possible (i.e., with the right education, training, effort, etc.) for most sane people with bad characters to restructure their characters doesn’t mean that we are necessarily entitled to blame them for having bad characters; justified blame requires a judgment that someone reasonably should have acted differently. Now that I think about it, this is simply another way of saying that blame depends on how much control and understanding we could reasonably expect of someone (e.g., we'd blame the choleric teenager less than, I don’t know, Lord Russell for the same offense), but the right thing to do doesn’t operate on the same sliding scale.

As for issue one (“can you [realistically] get by without considering the issue of whether you are a bad guy”), I think it comes down to the level at which one’s self-reflection about his moral judgments occurs. For a simplified example, imagine someone who thinks at the levels of (i) what moral “system” (basically, consequentialist or deontological algorithms and heuristics) should I adopt, and (ii) am I reasonably and honestly applying it to a given situation? (I know this isn’t super realistic, but I think anyone who tries to be morally “consistent” necessarily bifurcates his reasoning like this, even if his moral system is incomplete, semi-conscious, and somewhat shifty.) As I touched on in my previous emails, I doubt that much straightforward intentional reasoning is involved in (i); I think it’s largely shaped by temperament and intuition (which, in turn, determine which moral arguments -- essentially intuition pumps -- appeal to a person and thereby shape his moral system). Moreover, people don’t seem to reflect on (i) all that much. So it’s hard for me to see where akrasia is likely to enter the process at the level of (i). Sure, I think it’s inevitable that some self-interest seeps in, and likely with it self-deception, but I doubt whether akrasia is involved in the very act of self-deceiving. Effective self-deception seems to rely on internal mental opacity. And, of course, once someone has truly deceived himself such that he thinks he’s honestly arrived at an acceptable moral system, he is no longer akratic in unreflectively maintaining it.

I see more room for akrasia at the level of (ii). I mean, I can’t help but consider myself akratic in some cases (occasionally eating factory-farm-produced meat, downloading pirated music): I believe I do certain morally bad things because I derive enjoyment from them and am not sufficiently troubled by their badness. But I wonder about other people. And I wonder what neuroscience could tell us about this. I want to reread what Rosen says, because I think this is where the rubber hits the road. I vaguely remember him saying that he doesn’t really perceive himself as akratic, which surprises me. Then again, I’m under the impression -- and I confess to deriving a feeling of superiority from this! -- that most people are much less honestly self-reflective, and more self-deceptive, than I. (Or maybe they’re just better people, though they probably have worse moral systems!)

August 26, 2010

On "Chimp Brain"

A friend of mine believes that the desire for recognition and admiration is generally something to be overcome, not acted on. He maintains that this desire is a detrimental vestige of our simian ancestry, a maladaptive tendency in a world in which generalized status-seeking is only worthwhile for aspiring politicians, celebrities, and the like. In other words, people like me should stop thinking with their "chimp brains" and should instead focus on attaining more substantive returns such as knowledge about an interesting subject, better financial discipline, or the esteem of a few close friends. (Or we should become aspiring politicians, celebrities, and the like.) For instance, I shouldn't care if someone is wrong on the internet, except insofar as it shapes my position on an issue worth taking a position on.

As a blogger (someone is right on the internet!), simiophile, and all-around highly competitive person, I found my feathers ruffled by this view. I wondered whether I'm indeed unduly concerned with what an unduly broad group of people think of me -- a group that surely includes some people who, taking after Howard Roark, don't think of me. After all, I was basically serious when, in my first post (on why I'm blogging), I wrote: "I want to show off. (It’s okay now that I admit it, right?) I want you to think I’m even more insightful, funny, interesting, reasonable, and infallible."

On reflection, I agree with my friend that I would be better off if my chimp brain were less active. Although I believe that most activities and interactions are inevitably competitive and relevant to one's status (think of, say, any conversation in which you were striving to be funny, smart, and/or sociable, even if you weren't consciously trying to outperform your friends), I would like to approach them in a less competitive and status-seeking manner. I would also like to devote more time and energy towards activities that provide me with non-status-based rewards (e.g., reading up on issues instead of blogging about them, assuming blogging even advances my status). But these things are easier said than done, and it's not clear to me what the optimal balance is -- competitiveness and status-seeking are not inherently bad things.

That said, I want to endeavor to act more in accordance with the higher parts of my brain. For one, I want to pick my intellectual battles more wisely. I've always been reluctant to end an argument by "agreeing to disagree," because I believe that the vast majority of disagreements between reasonable people are not the result of differences in values, of which true impasses are made. Rather, I think that given enough effort and patience, reasonable people can pin down and work out the empirical and/or logical differences that underlie their disagreements. But putting in -- and demanding -- such effort and patience is not always worth it; it depends on the importance of the issue in question and the characteristics of the parties, and it risks breeding animosity. Accordingly, I want to keep in mind that agreeing to disagree does not necessarily entail writing off one's interlocutor as unreasonable, irrational, or both (except on an internet forum) -- it can simply be the result of the mature recognition that the truth is not worth pursuing at all costs.

A second practical example of the more elevated thinking to which I aspire is, frankly, having more reasonable expectations about the amount of attention I can get by demanding it. To quote my initial post again, I wrote that "I'm always happy to devote some time to the works of friends; there's something markedly more interesting about the products of minds with which I am familiar." (Naturally, I made this statement in the context of blegging for readers.) Perhaps this is a common sentiment, but I feel it's particularly strong in me. For example, I would be eager to look at a friend's paintings or listen to a friend's music, even if I didn't expect them to be dripping with artistic merit (feel free to call me on this). Indeed, I feel compelled to read my friends' blogs (and, until a recent bout of sensibility, Google Reader feeds) in their entirety, even if not every post is my cup of tea. On the other hand, most people I know are much more selective in their attentions. They're willing to give my creations and recommendations some precedence, but they're more willing to just pursue their interests. Ultimately, I shouldn't expect others to share my interests so closely. People, no matter how compatible, are inescapably separated by myriad differences in genes and environment. And we're all full of foibles. Healthy relationships of all kinds thus involve tolerance, humility, and sacrifices. This, too, I will keep in mind.

In light of the above, this will probably be my last post. Thanks for reading.

August 24, 2010

In the Backseat

You may have come across this Pulitzer Prize-winning article about caring parents who carelessly leave their babies to die in their hot cars. The article rekindled my anger at the moralizing masses (likely the same people who make it impossible for state legislatures and prison wardens to end the counterproductive, torturous, and widespread practice of long-term solitary confinement) and sparked the following rants, culled from a couple of emails I wrote.

Most people's reactions to these cases ("frothing vitriol" in the author's words) -- like most people and their reactions to most bad things -- are unreasonable and disgusting. People need to be taught to reason about emotional issues. Why don't schools teach subjects such as personal finance and practical psychology (which, of course, has implications for personal finance)? I've long believed that understanding one's limitations is a significant step in freeing oneself from them. For example, I've fortunately always been disinclined to make the fundamental attribution error, but learning about it (in high school) -- about how demonstrably flawed most people's judgments are -- really hammered the point home. Unfortunately, my psychology teacher didn't emphasize that the experiments we studied reveal how we are inclined to think and act in the real world. Although this observation is obvious to us -- it's the whole idea of experimental psychology -- that doesn't mean it shouldn't be underscored in the classroom. A little preaching can be a good thing.

Also, we should have trained, vetted, professional jurors.

***

Basically, it's essential to think about one's own thinking -- to metacogitate -- and not just react like so many people do. I think the main reason we prosecute 60% of the parents who unintentionally leave their babies to die in their cars is that people think they could never do something like that, that it's something only a monster or a reckless person could do. That's not true, and prosecuting the parents is counterproductive -- it costs society resources that could be used to prosecute real criminals; it further ruins the lives of these parents and the rest of their families (including any other kids they have to care for); and it encourages a moralistic, as opposed to a practical, justice system. People often talk about being willing to leave certain matters "in God's hands." Well, this is precisely the kind of situation where the human justice system should lay off.

August 12, 2010

The Economic Justifications for Government Support of Technological Advancement

The following is culled from a paper I wrote (footnotes omitted):

The development and deployment of technologies for combating climate change should not be left to the private sector alone, even if governments were to take the essential step of pricing the externality of greenhouse gas emissions. Economics demonstrates that implementing a carbon tax or an emissions permit trading system is the most cost-effective method of achieving the indispensable goal of inducing private actors to factor the social cost of emissions into their decisions. Instituting such a policy is the single most significant step that governments can take to mitigate climate change. But it is a necessary step, not a sufficient one. For economics also demonstrates that the technology sector is plagued by its own set of market failures, which entail that emissions pricing alone will not give firms the optimal incentive to develop and deploy technologies for producing cleaner energy. In turn, the marginal cost of achieving a given unit of emissions reduction will be higher than is ideal. The public sector must intervene in order to ensure the efficient level of technological investment. As Adam B. Jaffe, Richard G. Newell, and Robert N. Stavins aptly recapitulate, we should
view technological change relative to the environment as occurring at the nexus of two distinct and important market failures: pollution represents a negative externality, and new technology generates positive externalities. Hence, in the absence of public policy, new technology for pollution reduction is, from an analytical perspective, doubly underprovided by markets.
The importance of factoring technological change into an analysis of the cost of abating greenhouse gas emissions should not be underestimated. As explained previously, the development of new technologies, the commercialization of viable innovations, and the employment of readily available advancements make up the world’s toolkit for ensuring that the benefits of ravenous energy consumption do not come at the cost of our planet and our future. The aforementioned authors note that “the single largest source of difference among modelers’ predictions of the cost of climate policy is often differences in assumptions about the future rate and direction of technological change.” Good technology policy should render these assumptions more favorable, thereby lowering the expected cost of emissions abatement.

The first subsection below explicates the failures in the R&D market that justify public support. The second subsection deals specifically with the related, but distinct, set of market failures that impede the private deployment and diffusion of clean energy technology.

Failures in the Market for Research and Development

The primary failure in the R&D market is that technological innovation creates positive externalities in the form of “knowledge spillovers,” so the market produces too little innovation. R&D generates knowledge that has the characteristics of a public good: one individual’s consumption of the good does not reduce the amount of the good available for consumption by others, and no one can effectively be excluded from using the good. Because firms cannot capture all of the benefits of R&D, they have socially suboptimal incentives to engage in it. Some studies of commercial innovation have concluded that, on average, originators only appropriate about half of the gains from R&D. Hence the value of government intervention in the market.
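To make the spillover problem concrete, here is a stylized sketch (my own illustration, with made-up numbers; the appropriation fraction echoes the studies just cited):

```latex
% Stylized underinvestment example (illustrative numbers only).
% An R&D project costs C and yields social value V, of which the
% originator appropriates the fraction \alpha (roughly 0.5 on average,
% per the studies cited above).
\[
  \text{The firm invests iff } \alpha V > C;
  \qquad \text{society gains iff } V > C.
\]
% With \alpha = 0.5, every project satisfying 0.5\,V < C < V is socially
% worthwhile yet privately unprofitable -- the underprovision at issue.
```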

A second basis for intervention is that most of the benefits of climate change mitigation are so long-term as to be outside the planning horizons of private funding instruments. Private firms are obligated to focus on private costs, benefits, and discount rates in order to satisfy their shareholders. This can result in insufficient emphasis on returns, however valuable, that will not materialize until far into the future, when many shareholders have reached the Keynesian long-run. Moreover, these issues are compounded by the considerable uncertainty – and perceived uncertainty – about climate change, which renders long-term returns impossible to precisely quantify. Several studies have found that the “implicit discount rates” that firms use when making decisions about investment in long-term climate change mitigation are frequently much higher than market interest rates due to various market barriers and failures, such as inadequate information. Firms are generally suboptimally aware of energy conservation opportunities and often lack the expertise necessary to implement them – another instance of the underprovision of the public good of knowledge.
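A back-of-the-envelope discounting calculation (the rates and figures here are mine, chosen purely for illustration) shows why inflated implicit discount rates are so corrosive to long-term mitigation investment:

```latex
% Present value of a benefit B received t years from now at discount rate r:
\[
  PV = \frac{B}{(1+r)^{t}}.
\]
% Illustration: a $1,000,000 mitigation benefit arriving in 50 years is
% worth about $228,000 at a 3% social discount rate, but only about $920
% at a 15% implicit rate -- effectively zero, so the firm never invests.
```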

A final impediment to R&D funding is the asymmetry of information between innovators and potential investors about the prospects of new technologies. Innovators tend to be in a better position to assess the potential of their work, so favorable assessments are usually met with skepticism and demands for greater risk premiums. This intensifies the knowledge spillover problem because subsequent producers of a successful technology will be able to obtain financing on better terms. All of the above difficulties are surely magnified in today’s credit-constrained market, preventing even more of the up-front funding that R&D requires.

One means of addressing the mismatch of private and social returns is the enforcement of intellectual property rights. For example, patents grant innovators temporary monopolies on their innovations, which they can use to charge monopoly prices and thereby possibly recoup their share of the full social value of their innovations. In the absence of such protections, creators who market new technologies will likely soon face competition from others who take advantage of the technologies’ public availability and produce their own versions. Although the original developer may have a head start in marketing her development and may be able to command a greater market share due to her status as originator, these market-based incentives are tenuous, ephemeral, and uncertain. For instance, it is equally likely in theory that an imitator will be able to produce the good or service more cheaply or at a higher quality. It is also likely that consumers will anticipate the entrance of such competitors and therefore refrain from consumption during the innovator’s initial marketing. Thus economic analysis supports the use of intellectual property rights to ensure that inventors will be adequately driven by the profit motive. Patents are certainly a key component of any pro-innovation policy scheme.

However, intellectual property rights are neither a sufficient nor always desirable response to the failures of the R&D market. To begin with, much of the value of a given R&D project may consist of knowledge that is, for good reason, outside the scope of the intellectual property rights regime. The Stern Review offers the example of “tacit knowledge,” which is vague and does not satisfy the requirements of patentability. Another concern is that due to the inherent uncertainty of legal regulation, patents and the like do not always preclude all of the competition that prevents innovators from reaping their full rewards. Additionally, the prospect of monopoly pricing may not be a sufficient incentive to encourage risky, large-scale basic research. Analogously, pharmaceutical companies are known not to develop treatments for diseases that affect a sufficiently small segment of the population. As for the undesirability of robust intellectual property rights protection in the R&D context, one downside is that it can hinder or even cripple progress by preventing firms from building on each other’s successes and learning from each other’s failures. The early stages of innovation are often characterized by a high degree of uncertainty due to the lack of a well-defined path to progress. When there are multiple R&D avenues worth exploring, it pays to have multiple firms collaborating. Industry-wide collaboration is also vital for achieving big breakthroughs in basic science, such as those necessary to commercialize hydrogen fuel cell automobiles, which a single company is ill-equipped to deliver. This cooperation is unlikely to materialize if individual firms are given the incentive to keep their efforts under wraps while hoping to free-ride on the work of others.

In light of these concerns, many economists advocate direct government subsidization of technological innovation, especially at the level of R&D. This funding can take several forms, such as government-performed research, government contracts, grants, tax breaks, technology prizes, and incentives for students to study science and engineering. Of course, the government and private organizations can tailor these options to meet their needs in a given situation. Economists also promote government efforts to remedy the excessive myopia and uncertainty with which private decisions about long-term investment in climate change mitigation are fraught. These programs take a variety of forms, including educational workshops and training programs for professionals, advertising, product labeling, and energy audits of manufacturing plants.

Failures in the Market for Deployment and Diffusion

Although economists more strongly and consistently back government support for R&D, many also call for government responses to imperfections in the markets for technology deployment and diffusion. Successful R&D does not guarantee that the resulting innovation will immediately be deployed; market forces also govern firms’ decisions about technological implementation.

Economists have identified that firms in an industry may face a collective action problem when deciding whether to adopt new technologies that exhibit “dynamic increasing returns.” This phenomenon exists when the value of a technology to one user depends on how many other users have adopted the technology; in other words, the more users there are, the better off they will be. There are three types of positive externalities that generate dynamic increasing returns: “learning-by-using,” “learning-by-doing,” and “network externalities.” As in the R&D context, these externalities may give rise to a collective action problem because each firm can partake of the public fruits produced by the first adopters of a new technology; in turn, each firm is disinclined to be at the vanguard of technological adoption, and the deployment of worthwhile technology is unproductively delayed. Learning-by-using refers to the fact that the initial users of an innovation generate valuable public information about it, such as its existence, characteristics, and performance. Learning-by-doing, the “supply-side counterpart,” refers to the fact that production costs fall as firms gain experience, which they cannot fully keep to themselves. For one, a product itself usually provides insight into its production. Lastly, network externalities exist when the value of an innovation increases as others adopt a compatible product. Telephone and computer networks are obvious examples.
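Learning-by-doing is often formalized with the experience curve (a standard device in this literature, though not one the sources above commit to), under which unit cost falls by a constant fraction with each doubling of cumulative output:

```latex
% Experience curve: unit cost as a function of cumulative output Q,
% where C_1 is the cost of the first unit and b sets the learning rate.
\[
  C(Q) = C_{1}\, Q^{-b},
  \qquad \text{so each doubling of } Q \text{ scales cost by } 2^{-b}.
\]
% For example, b of about 0.32 gives 2^{-b} of about 0.8 -- a 20% cost
% reduction per doubling, a figure often cited for photovoltaics. Because
% rivals observe much of this learning for free, early movers cannot
% appropriate it all: the learning-by-doing externality.
```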

In addition to these externalities, certain characteristics of the power generation sector further deter and postpone the deployment of technologies that are expensive and commercially unproven. First, the sector is subject to a high degree of regulation and tends to be quite risk averse. Second, technologies that do not easily fit into existing infrastructures such as power grids and gas stations are unlikely to enter the market until demand rises and/or costs fall enough for the industry to act en masse. For example, national grids are usually designed with central, as opposed to distributed, power plants in mind, and CCS will require the construction of new pipelines. Third, there are market distortions such as the aforementioned fossil fuel subsidies, which make it even harder for new technologies to compete. Finally, energy markets tend not to be particularly competitive. The oil market is a well-known oligopoly, dominated by a multinational cartel, and electricity generation is a natural monopoly, in that a single firm can provide power at the lowest social cost due to economies of scale.

These problems can be mitigated or eliminated if there exists a niche market that is willing to pay a high price for early access to an innovation, as was the case with the first mobile phones. Niche markets can enable the initial producer of an advanced technology to profit despite subsequently facing competition from other firms that have the benefit of following in its footsteps and taking advantage of the externalities described above. Otherwise, originators must hope that they can eventually turn a profit – for instance by initially selling at a loss and then maintaining a dominant market share as costs fall and the market expands, perhaps by virtue of customer goodwill. Neither of these scenarios is likely in the energy market due to the homogeneous nature of the end product (e.g., electricity), which makes it difficult for innovators to distinguish themselves. Although niche markets for carbon-free electricity exist, they are too small to make the costly implementation of advanced technology worthwhile. Consequently, established technologies can become locked in, progressing only incrementally. At worst, as the Stern Review notes, “energy generation technologies can fall into a ‘valley of death’, where despite a concept being shown to work and have long-term profit potential they fail to find a market.”

There are several desirable public policy responses that can be tailored to different problems in different markets. As in the R&D context, various types of subsidies and information programs can counteract market failures. The government can also use energy efficiency standards, such as emissions quotas, to force firms in an industry to implement environmentally friendly technologies.

July 23, 2010

When Is It Reasonable to Be Angry With, or to Dislike, Someone?

Yesterday, out of the blue, Grobstein asked me whether I'd consider it reasonable to be angry with, or to dislike, someone just because he's causing me pain, however justified he may be. Specifically, Grob asked whether I'd necessarily feel anger or hatred towards my torturer if I were being tortured, pursuant to a legitimate warrant, because I was suspected of knowing the location of a ticking time bomb.

I responded that I wouldn't, essentially because I consider it reasonable to dislike people based on their characteristics, not on their actions alone. For example, if someone got into a car accident with me, and it wasn't my fault, I wouldn't necessarily be angry with her; I would reserve judgment pending information regarding her state of mind -- was she, say, reckless, or was she doing her best but handicapped by inexperience? Similarly, for all I know my torturer is a cool guy who's just doing his job.

Upon further reflection, I think there's a noteworthy distinction between the reasonable grounds for disliking someone and the reasonable grounds for being angry with someone. When I'm legitimately angry with someone, I think it's necessarily because I believe she acted badly. I can't think of a situation in which I'd be legitimately angry with someone (as opposed to upset at my circumstances) but not think she should have behaved differently -- in a way that wouldn't have reasonably roused my wrath. Thus, I believe that being legitimately angry with someone implies that I consider her blameworthy.

On the other hand, I think the reasonable grounds for disliking someone are much broader, albeit circumscribed by proscriptions against prejudice, bigotry, and the like. Blameworthiness entails culpability, whereas one can reasonably be considered unlikable for all sorts of non-blameworthy reasons, such as having a bad sense of humor or being prejudiced against nerdy forms of entertainment. In other words, it's okay to dislike someone for having "bad" tastes, even if the existence of these tastes isn't his fault or, indeed, isn't a fault at all. However, we shouldn't be too quick to judge. We owe it to ourselves and others not to -- or at least to try not to -- feel distaste towards someone unless we have some idea of the content of her character.

I'm not sure how to defend my position in the abstract other than by noting that I regard attitudes such as anger and dislike as inescapably directed at dispositions, not actions. Although I typically say I feel angry at someone because of something he did, I think my anger stems from my view of the other person's motivations. Hurting me is not sufficient grounds for me to be angry with you; after all, you may have a good excuse, or even a justification. I may nevertheless be upset, but I wouldn't be upset with you -- at least, I don't think it would be reasonable for me to be, because you haven't exhibited an upsetting disposition. Analogously, I may be angry at losing a competition, but I shouldn't be angry with the winner if she was a good sport.

Ultimately, my position comes down to my view of anger and dislike as necessarily entailing judgments of character. When I say I'm angry with someone, I basically mean I think he's being an asshole. When I say I dislike someone, I basically mean that I find her unpleasant to deal with on the whole.

I'll add that I feel that reserving anger and dislike in these ways is a worthwhile form of self-mastery and facilitates good judgment. Consider the fundamental attribution error -- our tendency to assume, for example, that someone else who runs a red light must be a jerk (a dispositional explanation), while claiming that it was an emergency when we engaged in the same behavior (a situational explanation). We ought to scrutinize the bases of our anger and dislike lest we fall into such psychological traps -- lest we become jerks ourselves.

Grob disagrees with me -- he'd be angry with his torturer. Here's his view, as expressed to me in correspondence:

"It blows my mind that (you claim) you do not hate the torturer. You are not friends with your friends because they are the kind of people who are your friends. You are friends with them because of history and contingent circumstance. Given a different history and (especially) more social mastery, you would be friends with different people – indeed, different kinds of people. You give your kindness and loyalty to your friends (so I hope) even though they are not at bottom the most deserving. How could they be, given the happenstance that has led to your connections? Similarly, I hate the person who is my enemy, who is hurting or trying to destroy me, even if they have a good excuse, and even if in a counterfactual world we could have been friends. It’s bizarre to me that you (claim to) weigh the procedural safeguards before you decide how you feel about your tormentor – well, if there are torture warrants, and Posner signed off on mine, then shit. Perhaps those things are relevant. I do not think they are determinative. It seems crazy that you (claim to) believe that how someone is treating you is actually irrelevant to your relationship with that person.

"There may be some ideal sense in which hate is never an appropriate emotion, and we should all strive to be more Christ-like. Or perhaps we should learn to somehow accept but not condone hate in ourselves, so we do not dwell on it, or whatever. I try not to dwell on it. But I have not been searching for or describing ideal attitudes – just personal ones that I think are “reasonable.” It makes sense to point out here the scarier implications of Christ-like social ethics. According to Luke, J.C. says, “If anyone comes to Me and does not hate his father and mother, wife and children, brothers and sisters, yes, and his own life also, he cannot be My disciple.” In other words, these are ultimately contingent attachments or granfalloons – you just happen to be your parents’ children; why should you have any special feeling towards them? But attachments are probably psychologically impossible without loyalty, and loyalty means ignoring the merits and privileging contingent history. To be this way – as I think we must – means accepting to some extent that our feelings must be ruled by immediate circumstance, however arbitrary. To overcome this is a “self-mastery” that destroys something valuable."

What do you think?