I finally read Corey Robin’s book, The Reactionary Mind: Conservatism From Edmund Burke to Sarah Palin, last week. I know I’m about a year late to this party, but I’ve been waiting to read it until the time came for me to shift gears from ancient to contemporary political theory.
It’s a very quick read and, as a series of short essays, it’s well worth your time if you’re interested in the combination of political theory and contemporary politics that I try for at this blog and that Robin succeeds at doing on his blog.
As a book, though, it doesn’t quite accomplish what I was hoping Robin was setting out to do. Robin’s goal is to demonstrate the connection between disparate thinkers, politicians, and jurists on the Right. And that connection, mostly, is that they all subscribe to the notion that elites should have power and the common people should not. Robin alleges that conservatism in all its forms equates with reaction against the democratizing forces of the Left that have, over time, attempted to propel more and more people into the public sphere to actively participate in their own governance.
I ended my first bit of navel-gazing with my decision to go to Duke for graduate school. And a bunch of people asked me to do a Part 2, about grad school and, perhaps, my first job. So … here goes:
All the things I suspected I’d like about Duke turned out generally to be true. The faculty was great … and it got better when Duke hired Peter Euben, who became my dissertation advisor (or, I suppose, co-advisor, with Elizabeth Kiss). The campus was an amazing place to study. The weather was delightful (apart from the disgustingly humid summers and the occasional hurricane). And I fell in with a really special group of friends (who were so impossibly nerdy that we even came up with a nickname for ourselves). My experience as a first year grad student at Duke was so good, in fact, that I even talked one of my best friends from Michigan State into joining me there when he graduated the following year.
As a political theory student, I was advised to take every political theory course that was offered and maybe even some philosophy courses as well. Looking back on it, I took every course offered by nearly every political theorist on the faculty and then I also took a couple of philosophy courses (including Alasdair MacIntyre’s last seminar there). I took seminars on Locke, Nietzsche, Heidegger, Foucault, Arendt and Habermas, and probably a few others I’m forgetting. The notes I took in those courses formed the backbone of the lectures I’d write when I started teaching my own theory classes. In short, the best preparation for teaching classes of my own was the classes I took (both as an undergrad, since I also still use those notes, and as a grad student).
I also took a series of international relations seminars (my second field) from some of the scholars whose books and articles I’d read as an undergrad. This was both a little bit thrilling and a whole lot intimidating, if I’m being honest. Still, the notes I took in those classes prepared me really well for my comprehensive exams (more below).
I got some experience as a teaching assistant for an amazing human rights course (on which I based the human rights course I’ve now taught on and off for a decade) and for an introductory IR course. It was when I was TAing for that human rights course that I first got involved in service learning and, as a result, spent a whole lot of time visiting men on North Carolina’s death row. Seeing the value of service learning to a human rights education first-hand convinced me to make it a part of the human rights program that I now direct here at Nebraska. It was also during that first stint as a TA that I really felt like teaching was something I might be good at doing. I can still remember the discussion section I led and the excitement of coming in every week to talk with a small group of really gifted upper-level undergrads about the problems and promise of the idea of human rights.
I started studying for my comprehensive exams in the fall of my third year, if memory serves me. I planned to take the two day-long tests in the spring. Graduate school in general can be a bit isolating … even if you have a great group of friends. And studying for exams can be really, really isolating. It’s possible to put a group together but, at least for political theory, a whole lot of the preparatory work boils down to reading, writing up notes, and putting those notes in an order that will be helpful at the moment you open the email message that contains the exam questions. I still have all of the notes I took to prepare for comps and I still use them when I prep a new class.
I went to the state fair with a friend of mine, bought a jade plant, and sat on my couch, with my plant on the coffee table, and I read books and articles for months. When I took the exams, the plant moved to my desk upstairs and watched me take them. That little plant went with me when I moved to Virginia for my first job and then it moved out to Nebraska. It’s probably also worth noting that a friend and I decided not to cut our hair while we were studying. Then, after we’d written the exams (but before our oral defense of them, if I remember correctly), we went together and got our hair cut at the campus barber shop (which only cut hair one way: “Regular boy’s haircut”).
After passing my comps (and thereby earning my Master’s), I could teach my own classes. And I taught a lot of them. My first was a writing course with a public apology and restorative justice theme. After that, I taught an upper-level ethnic conflict course (twice) and then got hired to teach a couple of classes at Wake Forest University (a seminar on Marx and a seminar on human rights). All told, I taught five of my own courses as a grad student; it was a lot of teaching to do while I was writing my dissertation but it was incredible preparation. I had three different courses prepped and tested out before I got my first job, which meant that I had less prep work to do. I ended up teaching some variant of all three of those courses in my first two years of my first job, which meant I had more time to work on publishing articles and turning my dissertation into a book.
Speaking of the dissertation, I had a pretty good sense of my topic before I left Michigan State; I knew I wanted to write about human rights and, as a result of my first TA gig, I read a book that set out the problem I thought I could address, namely whether the idea of human rights could be understood apart from the religious foundation on which it seems to rest. I spent months reading and kicking the idea around with my advisor, and then I started writing the prospectus (which would eventually become part of the introduction). All told, the dissertation took me just under two years and I moved through it as quickly as I did because my advisor was saintly enough to meet with me every other week, either to discuss something I’d drafted or to kick the tires on my idea for whatever chapter I was working on at that time. If you think preparing for exams is solitary, dissertation writing is a whole different world of loneliness. There was a point in time where my schedule, for months, was this:
Wake up, make breakfast and drink coffee, go upstairs and write for four hours, come downstairs to make and eat lunch, go upstairs and write for four hours, come downstairs to make and eat dinner, find someone who would come over, go out, or talk on the phone, sleep.
I only had the dissertation committee I had, but I know this: Picking the right dissertation committee matters a lot. I knew it mattered when I picked them, but the amount you think it matters should probably at least be doubled. These are really the people who shepherd you through the process, who teach you about completing a project of this magnitude, who keep you on track, who push you to make it as good as it can be, who help to ensure you finish it, who explain the weird academic publishing game, and who go to bat for you on the job market. I won’t ever forget spending all those hours in Elizabeth Kiss’ office or hanging out at Peter Euben’s house (or teaching him how to use email, which is another story entirely). It’s been almost ten years since I left Duke and half of my committee isn’t even on the faculty there today … but I know they’d read and comment on something I wrote or write a letter of support on my behalf if I dropped them a line out of the blue. They’re first-rate scholars and terrific people.
Thanks to my committee, and a weird stroke of luck, I actually got a tenure-track job before I finished my dissertation. I planned to do a limited job search in my fifth year, applying only for jobs that were either too good to pass up or that seemed to call for someone who did exactly what I did. I applied for three or four jobs, I think, and one of them called me for a phone interview. The phone interview went well and they invited me to campus. This was James Madison University, which was looking to hire someone with a theory background who was interested in social justice topics (like, for example, human rights) to help start a new department called Justice Studies; the chair of the search committee turned out to be a political theorist trained at Duke. I drove from Durham to Harrisonburg, met the political science faculty, the search committee, and one of the deans, taught a political theory class, and ate a couple of meals. There was no “job talk.”
I have no earthly idea why they hired me, apart from the fact that I studied precisely what they said they were looking for someone to do. My only publication was a co-authored encyclopedia article on Thucydides and I suspect my letters of recommendation spoke highly of my future prospects. I think I had decent answers to their questions about how I thought the Justice Studies program should be put together and I think I did a good job with the class they asked me to teach. Beyond that, you’d have to talk to the good people on the search committee.
I spent three years teaching at James Madison and they were a very good three years. Maybe I’ll write a little about JMU, publishing, finishing my first book, and my decision to leave for Nebraska, if anyone would be interested in that.
An anonymous reader sent in the following comment in response to my most recent post about the importance of not calling just anyone or anything heroic:
Heroism is relative. Each subculture is going to have its own definition and heroes. Don’t tell different groups of people who and what to look up to. Sometimes a parent raising a kid is superhuman in that environment. How is what you are saying different from the concept of beauty?
This example of a parent raising a child in a difficult environment is a good one with which to begin my reply, though we’ll get away from it before too long.
We can imagine all sorts of circumstances in which it’s very likely that the child will say that the parent is his or her hero. But does that make it the case that the parent actually is heroic?
I submit that it doesn’t … at least not necessarily.
The distinction that I want to draw – and that I think is probably going to be helpful for thinking about heroism going forward – is between someone who is a hero and someone who is your hero.
My friend Scott Allison, who has co-authored a couple of books and regularly blogs on the topic of heroism, recently replied to a critical post of mine from a little over a year ago.
In my original post, I took Allison to task for the way in which his studies merely report what other people say about their heroes, rather than pushing the conversation forward by challenging some of these (to my mind, at least) not-very-heroic heroes. More than that, I argued that Allison’s most serious problem is his willingness to conclude that heroism is like a good meal, that it’s in the eye of the beholder.
Allison’s response is that it’s the job of a good social scientist to be impartial:
There isn’t as much consensus about what defines a hero as one would think. Most people agree that heroes perform great actions, but one observer’s idea of a great action may be very different from that of another observer. Just as evil-doers dismiss the idea that they are evil-doers, heroes themselves often dismiss the idea that they are heroes. As such, my co-author George Goethals and I have adopted a view of heroism that is identical to that of Baumeister’s definition of evil: It’s in the eye of the beholder.
This definition is very unsatisfying to people who claim to know the objective definition of heroism. Goethals and I have asked hundreds of people to list their heroes and our position is that it’s not our place, as social scientists, to judge people as “wrong”. If tennis players report that tennis great Roger Federer is their hero, we are not going to tell them they are mistaken. If aspiring actresses list Meryl Streep as their hero, we will report it without condemning their judgment. Our goal is to try to understand their reasoning behind their choices.
Now I want to see if I can push on Allison’s central claim a bit more, in the hopes of clarifying the argument against relativism that I’ve been making.
I teach and write about human rights, a topic that can be quite divisive. Given Allison’s comparison with studying good and evil, I think a comparison to studying human rights and human rights abuses is apt. One of the central critiques of human rights is that they are Western in origin and thus that non-Western cultures will adhere to a different set than Western cultures do (or they won’t think of rights in the way Westerners do at all).
To study human rights as a social scientist, then, Allison would argue that I ought to impartially report that some people view human rights as a stopgap against governmental abuse while others view them as nothing more than a useless construct that hamstrings governments from making decisions they might need (or want) to make. On my reading, one of these views is incorrect and often results in acceptance of the worst sorts of abuses people can perpetrate against other people. And, having studied human rights for years, I have a great many reasons to support this conclusion of mine (that they exist and that “culture” isn’t much of a reason to condone abuses). I might, in that case, report people’s critique of human rights and then make an argument that attempts to refute it and thereby buttress the contemporary human rights regime (or at least continue the debate).
But Allison’s suggestion seems to be that an impartial social scientist ought simply to report on people’s thoughts about human rights without weighing in. As he says, “our position is that it’s not our place, as social scientists, to judge people as ‘wrong.’” Human rights, then, would simply be in the eye of the beholder. Some people would claim they exist, some people would claim that they don’t … and the good social scientist would simply say, “Here are some interesting claims people make about human rights, which leads to the conclusion that they’re in the eye of the beholder.”
But the conclusion that human rights are in the eye of the beholder — existing for some but not for others — is actually supportive of the conclusion that they don’t really exist. Why? Because it accepts (certainly implicitly but maybe explicitly as well) the claim of those who reject human rights (and thus might embrace human rights abuse).
That’s why, I want to argue, any claim of simply reporting — of saying, “we heard a lot of different opinions and there’s just no way to make a claim that some are right and others are not” — always endorses the lowest bar or least powerful claim about the topic under scrutiny. If someone claims that Stalin is a hero or a cactus is a hero and a researcher says, “well, heroism is in the eye of the beholder,” then the researcher’s implicit argument is that absolutely anyone or anything is a hero because there’s no way to distinguish one person’s claim from another’s without taking sides, being partial.
On my way of thinking, these researchers are simply wrong when they claim that evil or human rights or heroism is in the eye of the beholder. It’s one thing to report on what they’ve said and then to explain why someone whose hero is Stalin or a cactus is thinking wrongly about heroism. It’s quite another to say, “Here’s what these people said about the heroism of Stalin and cacti … and maybe they’re just as right as someone who named Holocaust rescuers as heroes because, after all, heroism is in the eye of the beholder.” Allison’s research might suggest that people have a lot of different ideas about heroism, but it might also suggest that people simply aren’t thinking very critically or carefully about heroism.
The same is true of evil; evil-doers might certainly claim that their actions aren’t evil … but that doesn’t mean that evil is in the eye of the beholder; it might simply mean that people don’t want to think of themselves as doing something evil. As a researcher, I might say, “Person X, who participated in a genocide, claimed that his behavior wasn’t evil. This demonstrates not that he’s right or that genocide is right for some people and wrong for others, but simply that people are quick to look past their own shortcomings, to find ways to justify their own behavior, or to avoid making harsh judgments about their own decisions or preferences.”
There’s no reason reporting that a human rights abuser doesn’t believe in human rights yields the conclusion that he might be just as right about his belief as someone who works to defend human rights. Good social science doesn’t need to embrace relativism.
And Allison seems to recognize this at the end of his reply, even as he still pulls back from embracing the notion that social science can report on the results of a survey while still making a broader point about the topic at hand:
Goethals and I have found that as people get older, they become more discriminating in their choice of heroes. People tend to outgrow celebrity and sports heroes who only show signs of competence but not much morality …. As a social scientist who should remain objective about my reporting of heroes, I shouldn’t express my opinion about the natural maturation process leading people to place greater weight on morality than on competence when choosing heroes. But I can’t resist saying I’m glad to hear it.
Allison’s mistake — which seems to inform his reply and his work more generally — is that he’s convinced that anything he might write about the shift from LeBron James to Martin Luther King, Jr. would simply be his opinion. But what Allison calls a “natural maturation process” is absolutely begging to be studied by social scientists!
Why do people turn to moral heroes as they get older, rejecting the celebrities they idolized in their youth? What makes this a maturation — a word that implies something positive is taking place — rather than simply a change? Are there ways for people to make more mature decisions about their heroes earlier in life? And if, as he says, he’s glad about this natural maturation process, why is Allison so unwilling to make an argument about why it’s a sign of immaturity about heroism to idolize LeBron?
As I’ve argued — on this blog and in my forthcoming book — we can learn a lot about ourselves by carefully considering our choice of heroes. But this careful consideration requires more than simply listing them and then saying, “Everyone has a different hero and so there’s no way to say one person is a hero and another person isn’t.” If King is the choice of someone whose thinking about heroism has matured, a great place to start our careful consideration is to reject the notion that LeBron and King are somehow equivalent and to consider what makes King a more worthy hero than LeBron.
I bet we could come up with a lot of interesting reasons that could then be debated and tested using good social science methods.
The problem with Peter Singer’s account is not only that a lot of people would consider it to be monstrous but also that it’s based on what I take to be an unsupportable distinction.
At what point, one might justifiably wonder, does a fetus gain a right to life: conception, viability, birth, or some other time? Famously, Peter Singer has argued “that since no fetus is a person no fetus has the same claim to life as a person” (Writings on an Ethical Life, 160). On this point, he and I are in agreement: fetuses are not self-conscious, cannot engage in self-creation, and are not bearers of dignity.
But Singer goes much farther: “Now it must be admitted that these arguments apply to the newborn baby as much as to the fetus. A week-old baby is not a rational and self-conscious being, and there are many nonhuman animals whose rationality, self-consciousness, awareness, capacity to feel, and so on, exceed that of a human baby a week or a month old. If the fetus does not have the same claim to life as a person, it appears that the newborn baby does not either” (Ibid.). The reason, on my reading, that Singer goes too far with his suggestion about the permissibility of infanticide is that he puts too much weight on the psychological aspect of the human mind and not enough on the biological.
It might well be the case that we who are persons do not have strong psychological connections to the infants we were, but – as yet – we aren’t certain. We know, however, that healthy infants’ brains display organized cortical brain activity (OCBA) and, David Boonin argues, we can measure both the beginning and ending of this “electrical activity in the cerebral cortex of the sort that produces recognizable EEG readings” (A Defense of Abortion, 115). Given that, Boonin’s argument for using OCBA as the standard by which to judge whether a fetus is a person makes a good deal of sense. If OCBA is not present, we would be hard pressed to make a case for the self-creative feature of the human mind about which I’ve already said so much. For the cerebral cortex must be working in an organized manner before anyone can claim that the brain has created the sense of self that is the key feature of personhood.
If we are drawing lines – and with questions of birth and death it often appears that we must – then the line should be drawn at the earliest stage possible. With regard to self-consciousness and dignity, it seems to me that Boonin’s line allows much less room for error than Singer’s. Although it might very well be the case that selfhood (as we understand it) begins in infancy – and with it, dignity and personhood – Boonin suggests that we draw the line at the 25th week of pregnancy; the reason is that there is “ample evidence to suggest that [OCBA begins] to occur sometime between the 25th and 32nd week” (Ibid.).
We might push the line back a bit, however, and adopt an even more conservative estimate about OCBA by drawing the line at 20 weeks; as Boonin concedes, “Burgess and Tawia identify 20 weeks of gestation as ‘the most conservative location we could plausibly advocate’ as the beginning of what they call ‘cortical birth,’ because it is at this point that ‘the first “puddle” of cortical electrical activity’ of an ‘extremely rudimentary nature’ begins to appear in brief spurts” (128). Adopting this position – rather than Singer’s – would be to argue for a fetal right to life at the 20th week of pregnancy (the earliest time at which it is possible for OCBA to occur) and, of course, to prohibit things like infanticide.
This is, of course, a somewhat radical position, as it suggests that the ruling in Roe v. Wade – already controversial enough – needs to be reconsidered in favor of limiting some abortions. While many would argue that redrawing this line is wildly problematic, those who would most feel the effect of doing so are those who suggest that fetuses are persons with rights from the moment of conception, for Boonin notes that “even if we push back the gray area from 25 weeks to 20 weeks, it will still turn out that 99 percent of abortions take place before the fetus acquires a right to life” (Ibid.). In the end, tying the permissibility of abortion to the absence of organized cortical brain activity seems to have a limited effect on public policy and squares a difficult issue with the nonreligious understanding of personhood I advance in my book.
This does, however, affect the notion – drawn from the ruling in Planned Parenthood of Southeastern Pennsylvania v. Casey – that viability is an important moment to consider in the life of a fetus. As William Cooney suggests – in “The Fallacy of All Person-Denying Arguments for Abortion,” 8 Journal of Applied Philosophy 2 (1991) – it is not: “Does a 5-month-old fetus then become a person when that stage of technology exists? Can personhood be a condition relative to and dependent on technology?” (161). If technology were to allow for earlier viability, this would not change the facts about personhood because a viable pre-OCBA fetus lacks a sense of self and, consequently, dignity and rights.
Several thoughtful commenters have asked me to say more about human personhood and human dignity after yesterday’s post on Rand Paul’s argument against abortion on the grounds that human life begins at conception.
As I argued there, the fact that human life begins at conception doesn’t actually do any heavy lifting with regard to questions about human personhood or rights. Being a person means more than simply being alive. Think, for example, of the patient in the hospital whose cerebrum is fundamentally injured. The continued existence of the patient is not open to question: so long as she is breathing and her heart is pumping — functions that are regulated by the brainstem rather than the cerebrum — she is living.
At issue, though, is that the person who existed before the traumatic brain injury is now no longer in existence. All the things that made the patient who she was have left the body of the patient. These things are far more integral to our conception of personhood — and of life itself — than the mere animal functioning of brainstem, heart, and lungs (which can be duplicated by machine). What cannot be duplicated or replaced is the sense of self, the “I” that I argue makes us persons and from which human dignity, the source of our human rights, is derived.
I don’t want to suggest that we achieve dignity through rational thought or action, i.e., that we earn our dignity in the way that Kant suggests; instead, my argument is that dignity arises from our higher brain function. In particular, dignity is a function of our self-consciousness, our ability to talk and think about ourselves.
The Greek δόξα, an ancient forerunner of our concept of dignity, is defined as “the opinion which others have of one, estimation, repute.” While this ancient concept was thought to rely on the way we were perceived by others, I want to argue that of far greater importance is the opinion we have of ourselves and, in particular, the stories we tell about ourselves. My dignity is bound up with my answer to the most fundamental identity question, “Who am I? [which] will normally address what is most salient in one’s sense of self.” This narrative identity, David DeGrazia notes, “involves our self-conceptions, our sense of what is most important to who we are.” Bound up with my narrative identity is the sense that I can make something of myself; it is the ability to posit a future that I have a hand in shaping (which can be traced back at least as far as Nietzsche and has been updated by contemporary theorists like Ronald Dworkin and Richard Rorty). DeGrazia puts this especially cogently: “Much of what matters (to most of us, anyway) is our continuing existence as persons—beings with the capacity for complex forms of consciousness—with unfolding self-narratives and, if possible, success in self-creation.”
Ultimately, then, I argue that personhood and dignity are bound up together, that one cannot be a human person without the ability — derived from organized cortical brain activity — to feel as though there is an “I” in the center of one’s brain, pulling levers and adjusting dials (even though we know that, in fact, this is simply an evolutionary strategy developed by our genes to make our brains better, more clever ones). This “I” amounts to a feeling of selfhood that, finally, accounts for our having dignity and being persons. As I conclude in my book, “It is, in my estimation, the feature that separates human persons from human animals and, so far as we know, from all other animals.”
Though the patient with the traumatic brain injury and the person she was before the injury are the same biological animal, the person died when her cerebral cortex, the self-creating part of her brain, stopped functioning. The patient with the traumatic brain injury is no longer a rights-bearing person because the patient does not possess the equipment necessary for personhood and dignity. The same is obviously true of the blastocyst, insofar as it’s simply a ball of cells and has no brain whatsoever.
In the end, I think human life alone is not enough to provide us with rights, that a heartbeat — which can be accomplished entirely by machines — doesn’t require governmental action on my behalf. Indeed, in the cases at issue here, the idea of “my” in “my behalf” doesn’t really have any meaning, as without higher brain function, I cannot conceive of myself at all. That’s why I argue that our rights hinge not simply on our bodily functions but on our dignity. Certain fetuses, on my reading, cannot properly be understood to be bearers of dignity and are thus not the bearers of rights.
While I have no doubt that some people will want to suggest problems with this argument — and I look forward to hearing them! — I think it’s a much stronger position than the one put forward by people like Rand Paul, Paul Ryan, or my thoughtful commenters. First of all, it contains an explanation about why human persons have special rights that require governmental protection while other living animals do not. Secondly, it provides us with the measuring tool of higher brain function — which ensoulment clearly does not provide — for making decisions that would potentially infringe on the rights of women. And, finally, it keeps religious belief away from a heated public policy debate, ensuring that people who believe that blastocysts are the beloved children of God are entitled to that belief but are not entitled to enforce it on anyone else.
 Henry George Liddell and Robert Scott, A Greek-English Lexicon, revised and augmented by Sir Henry Stuart Jones with the assistance of Roderick McKenzie (Oxford: Oxford University Press, 1969), 444.
 David DeGrazia, “Identity, Killing, and the Boundaries of Our Existence,” Philosophy and Public Affairs 31(4) (Fall 2003), 423.
 Ibid., 424.
I’ve seen a lot of criticism of Senator Rob Portman over the past twenty-four hours, from both the Right and the Left.
The former I suppose I understand, though the principled position behind it doesn’t resonate with me in any way and strikes me as a terrible, terrible mistake. The criticism from the Left, however, really needs some examination.
The suggestion behind this criticism is that Portman is just one more privileged white guy who only came around on the issue of same-sex marriage because it personally affected him. But of course he is.
Many people on the Left reacted cynically to Portman’s announcement because their position is that the people should embrace same-sex marriage because it’s morally right and because all human beings are fundamentally the same, not because individuals personally know and like someone who is gay and who therefore suffers from discrimination.
But that’s not really a critique of Portman or his change of heart on the question of same-sex marriage.
As Richard Rorty argues in Truth and Progress:
To get whites to be nicer to blacks, males to females, Serbs to Muslims, or straights to gays … it is of no use whatever to say, with Kant: notice that what you have in common, your humanity, is more important than these trivial differences. For the people we are trying to convince … are offended by the suggestion that they treat people whom they do not think of as human as if they were human (178).
This sounds pretty awful, to be sure. And that’s why it might feel good to criticize Portman’s announcement that his personal experience has led to a change of heart: He should have come to this realization sooner and without needing inequality to affect him personally.
As Rorty notes, “We resent the idea that we shall have to wait for the strong to turn their piggy little eyes to the suffering of the weak, slowly open their dried-up little hearts” (182). But this, Rorty tells us, is the best we can hope for and, he argues, might achieve its end more quickly than we anticipate: “These two centuries are most easily understood…as a period…in which there occurred an astonishingly rapid progress of sentiments” (185).
How has the progress of sentiments occurred and what can we do to extend its reach? On this, it will be helpful to quote Rorty at some length, from Contingency, Irony, and Solidarity:
The right way to take the slogan ‘We have obligations to human beings simply as such’ is as a means of reminding ourselves to keep trying to expand our sense of ‘us’ as far as we can. That slogan urges us to extrapolate further in the direction set by certain events in the past – the inclusion among ‘us’ of the family in the next cave, then of the tribe across the river, then of the tribal confederation beyond the mountains, then of the unbelievers beyond the seas (and, perhaps last of all, of the menials who, all this time, have been doing our dirty work). This is a process which we should try to keep going. We should stay on the lookout for marginalized people – people who we still instinctively think of as ‘they’ rather than ‘us.’ We should try to notice our similarities with them. The right way to construe the slogan is as urging us to create a more expansive sense of solidarity than we presently have (196).
The way to accomplish this progress of sentiments, this expanding of our sense of solidarity, is by telling “the sort of long, sad, sentimental story that begins, ‘Because this is what it is like to be in her situation – to be far from home, among strangers,’ or ‘Because she might become your daughter-in-law,’ or ‘Because her mother would grieve for her’” (Truth, 185). Telling these sorts of stories, he argues, is the most practical method for increasing our sense of solidarity with those we once considered ‘others.’
In other words, the best way to convince the powerful that their way of thinking about others needs to evolve is to show them the ways in which individuals they consider to be ‘Other’ are, in fact, much more closely akin to them than they ever realized. It is, in short, to create a greater solidarity between the powerful and the weak based on personal identification.
Rob Portman’s change of heart is a good example of the way in which we ultimately achieve a progress of sentiments that leads to the equal treatment of more and more people. Viewed in this way, it’s really not something people on the Left ought to be criticizing; it’s something we should be working to encourage for those without the sort of immediate personal connection that Portman fortunately had.
In his piece at the New Yorker, Teju Cole laments that something terrible seems to have happened to President Obama, our country’s “reader-in-chief”:
The recently leaked Department of Justice white paper indicating guidelines for the President’s assassination of his fellow Americans has shone a spotlight on these “dirty wars” (as the journalist Jeremy Scahill rightly calls them in his documentary film and book of the same title). The plain fact is that our leaders have been killing at will.
How on earth did this happen to the reader in chief? What became of literature’s vaunted power to inspire empathy? Why was the candidate Obama, in word and in deed, so radically different from the President he became? In Andrei Tarkovsky’s eerie 1979 masterpiece, “Stalker,” the landscape called the Zona has the power to grant people’s deepest wishes, but it can also derange those who traverse it. I wonder if the Presidency is like that: a psychoactive landscape that can madden whomever walks into it, be he inarticulate and incurious, or literary and cosmopolitan.
The idea that the reading of literature is somehow intrinsically ennobling is something I have been fighting against for a long time, but people always find this strange, and invariably, when I have popped off on this subject, someone says “Well, why are you a literature professor, then?”
I could simply say that I find literature immensely interesting both because of its aesthetic qualities and because of the insights it yields into the cultures from which it arises. And that would be enough. But in fact I do believe that literature can have a significant role in a person’s moral and even spiritual development: it just is highly unlikely to have a leading role. It has an ancillary role in character formation: what readers can get from literature largely depends on other, more powerful forces.
For my own part, I find myself agreeing with Cole — and through him with some of the excellent authors he cites — about the transformative potential of literature. As someone who teaches human rights and great works of literature for a living, I have a vested interest in the argument; I very much want it to be the case that literature can be transformative for most of us … even if the upper echelons of power somehow manage to undo much of the great work that reading can do.
Note, though, that I used the word “potential” in the above paragraph. It isn’t necessarily the case that reading great works of literature will expand one’s moral imagination or that, once expanded, one’s moral imagination will rule the day. In this sense, Jacobs has a point. One could read literature and be inspired to care about others … but only to a point.
This is where my reading of the philosopher Richard Rorty comes in. In Contingency, Irony, and Solidarity, Rorty writes:
Fiction like that of Dickens, Olive Schreiner, or Richard Wright gives us the details about kinds of suffering being endured by people to whom we had previously not attended. Fiction like that of Choderlos de Laclos, Henry James, or Nabokov gives us the details about what sorts of cruelty we ourselves are capable of, and thereby lets us redescribe ourselves.
Rorty here is describing his ideal type, the liberal ironist, who benefits from reading great and challenging works of literature because it enables her to gather as much information as possible about the suffering of others and about the language in which they express their beliefs, fears, and highest hopes. The liberal ironist is an ideal because she not only “faces up to the contingency of … her own most central beliefs and desires [but also] include[s] among these ungroundable desires [her] own hope that suffering will be diminished, that the humiliation of human beings by other human beings may cease” (xv).
The trouble for President Obama, for Cole, and for me is that we are liberals, insofar as we care about minimizing the suffering of others, but we are not ironists, at least not publicly. Indeed, Rorty’s ideal of liberal irony is fundamentally a private one rather than a public one; he writes, “I cannot go on to claim that there could or ought to be a culture whose public rhetoric is ironist. I cannot imagine a culture which socialized its youth in such a way as to make them continually dubious about their own process of socialization” (87).
As Cole notes, “Any President’s gravest responsibilities are defending the Constitution and keeping the country safe.” He goes on to ask, “What makes certain Somali, Pakistani, Yemeni, and American people of so little account that even after killing them, the United States disavows all knowledge of their deaths?” The answer, of course, is that these people are perceived as threats to American citizens and to the United States, as standing directly in the path of “keeping the country safe.” This is the point at which our private ironism, which allows us to see the problems with this way of thinking, runs headlong into the necessity of being publicly unironic about things like security and thus of thinking of ourselves as different from those who might harm us.
Of course, we don’t know whether or not President Obama is at war with himself about these drone strikes, but it’s certainly important for me to imagine that he is, that he is deeply disturbed by them or at the very least that he doesn’t undertake them lightly. This allows me to cling to the image of Obama as our deeply conflicted reader-in-chief, someone who cares about the suffering of others because he has read “the sort of long, sad, sentimental story that begins, ‘Because this is what it is like to be in her situation – to be far from home, among strangers,’ or ‘Because she might become your daughter-in-law,’ or ‘Because her mother would grieve for her’” (185).
Whether or not Obama is conflicted ultimately doesn’t much matter for the people whose deaths he has ordered or for those who were merely nearby. But I think it does matter a great deal for us. This isn’t, after all, really a debate about the transformative potential of literature; it’s a debate about our public beliefs and opinions with regard to the suffering of those who are different from us and who might (but also might not) threaten us in some way. We must ask ourselves how we will treat those people and how our thoughts on the matter reflect our understanding of ourselves as political liberals.
Since when do people think they have an inalienable human right to be vigilantes?
I understand that people want to feel safe and believe that having a gun in the home will enable them to defend themselves. And I understand that acting in one’s self-defense is a legitimate legal defense. But using the language of self-defense to defend oneself in the (rare) case of shooting an assailant is not the same thing as asserting a human right to defend oneself.
To be sure, if we read a foundational text like John Locke’s Second Treatise of Government, we find a natural right to punish anyone who would harm us in our life, liberty, health, or possessions. In the state of nature, Locke tells us, each person is effectively judge, jury, and executioner unto herself. And, of course, it’s precisely the problem of a lack of independent judgment in the state of nature that leads people to join together to form a political community.
But for people to establish a political community, Locke asserts that people must give up to the government their natural right to punish criminal behavior and agree to have the government settle grievances. This is why we have standing laws that are meant to be applied equally by independent officers of the law and by the courts.
So, again, where is all of this talk of self-defense and vigilantism coming from?
The ridiculous and offensive meme that’s been making its way around Facebook and Twitter for the past few days, linking the struggle for civil rights in the 1960s and gun ownership today, is being grounded in the language of human rights.
Insofar as they are both expressions of fundamental human rights, yes, they are the same thing.
The civil rights leaders of the Jim-Crow-era South knew that having a gun was the only thing that would protect you from the KKK, because the local police wouldn’t. Even MLK himself had guns; one of his advisors described his home as “an arsenal”.
Gun Rights Are Civil Rights.
The amazing thing about a response like this — and there are lots of these sorts of posts all over the internet right now — is just how thoroughly wrong it is, both in terms of the way we think about rights and in terms of historical accuracy.
Let’s start with human rights:
First of all, there is no human right to gun ownership. The Second Amendment to the U.S. Constitution states:
A well regulated militia, being necessary to the security of a free state, the right of the people to keep and bear arms, shall not be infringed.
Leaving aside the intentions of the Founders, which I’ve already written about extensively, we can clearly see that there is a right to keep and bear arms afforded by the American Bill of Rights. But this right, like all of the rights enumerated in the Bill of Rights, has never been understood to mean that citizens have the right to own any weapons they want or that there should be no barriers at all to gun ownership. We already regulate a citizen’s ability to procure weapons without any constitutional problem, and we already restrict the types of weapons that are available for sale to the public; it would be difficult to argue that further regulation or restriction would somehow be ruled unconstitutional.
Further, no such right to gun ownership appears in the Universal Declaration of Human Rights. The closest we get is Article 12:
No one shall be subjected to arbitrary interference with his privacy, family, home or correspondence, nor to attacks upon his honour and reputation. Everyone has the right to the protection of the law against such interference or attacks.
This language seems to refer both to governmental and private interference: The government cannot arbitrarily interfere with a citizen in these ways and, further, the UDHR notes quite clearly that the government and its laws exist to protect citizens equally against private attacks. Nowhere does the UDHR suggest that citizens have the right to arm themselves to protect themselves against criminals or that armed insurrection is the way to deal with the threat of a government that might become repressive.
If we go all the way back to John Locke’s argument about natural rights and resisting tyranny in his Second Treatise of Government, we find that in Chapters 18 and 19 he recommends a system in which the people can change their leaders with relative ease, thus allowing them to withdraw their support from a potentially repressive government and to replace it with a better one rather than resorting to rebellion and armed insurrection (which would bring with it a whole host of evils).
Now, a few words about American history:
Did Martin Luther King, Jr. own guns? You bet he did.
William Worthy, a journalist who covered the Southern Christian Leadership Conference, reported that once, during a visit to King’s parsonage, he went to sit down on an armchair in the living room and, to his surprise, almost sat on a loaded gun. Glenn Smiley, an adviser to King, described King’s home as “an arsenal.”
This was in the mid-1950s in Alabama, when King and his family were the targets of constant, credible, specific death threats; after his home was bombed, King even applied for a concealed carry permit (and was rejected).
There’s not a lot of discussion of the reason that King owned guns. Gun advocates would like us to believe that King owned them solely because he had the right to do so or because he liked having them around, not because he lived under the sort of constant threat that exists today only in the minds of wingnuts who think our tyrannical government is out to get us. What’s more, here’s the most important part of the MLK story that gun advocates aren’t quite so interested in discussing:
Eventually, King gave up any hope of armed self-defense and embraced nonviolence more completely.
So, gun advocates, if you want to emulate Dr. King, then you’d better start working to highlight for us the constant, credible, and specific threats against you that require you to own guns for your personal protection. And then, of course, I’ll remind you that you’re not really emulating Dr. King, since he later gave up on the gun and embraced non-violence. Are you ready to do that? Why not? Worried that someone is going to bomb your house or assassinate you?
Of course, I’m not actually in favor of taking all of the wingnuts’ guns away from them. As I noted above, I think the Second Amendment pretty clearly affords Americans the right to keep and bear arms, and I don’t think we’ll be doing away with the Second Amendment any time soon. I am, though, in favor of further regulations and of further limiting the kinds of weapons that citizens can own. And that’s why the civil rights meme going around is so foolish: It attempts to establish some sort of right to an assault rifle based either on a human or civil right to own every possible kind of gun (which is obviously false) or on a right to defend oneself. But even if citizens had a human right to self-defense, which they don’t, such a right wouldn’t establish the right to own an assault rifle, since assault rifles are terrible weapons with which to defend oneself.