Ricky Gervais and the Wrong Way to Grieve

After Life, created by Ricky Gervais, seems to be a quest to show just what it would mean to grieve in the wrong way. While grief counselors and well-meaning supporters will often assure us there is “no right or wrong way to grieve,” the central character, Tony, is destined to be an exemplar of how badly things can go when someone takes that advice to heart.

Tony recently lost his wife along with his will to live. Even without a will to live, though, he keeps living in spite of himself, partly because the dog needs to be fed. Maybe he really does feel an obligation to the dog, or he really wants to live, or he is just afraid to die. Perhaps it doesn’t really matter why he keeps living, but several characters note that he does find a reason to go on each day, even if he can’t say what it is.

So he goes on without wanting to live, which he feels gives him the freedom to do things he never would have done before. Of course, he always had the same freedom, but his suicidal ideation has now made him aware of it. The fact that suicide is on his mind tells him that if something he does causes things to get even more unpleasant for him, he will simply end it all.

This is, of course, a central tenet of existentialism, especially as articulated by Jean-Paul Sartre. Humans have radical freedom to choose their actions because they can annihilate themselves at any time. This annihilation can come in the form of suicide or simply choosing to become a different person. Sure, you can’t actually become a different person, but you can choose radically different actions, and we are defined by what we do.

Suicide is also the central question for another existentialist, Albert Camus, of course, but for Camus the question of suicide should challenge us to find meaning for our lives each and every day. If I’ve chosen not to kill myself today, I must have a reason. I should be aware of what it is I am living for. If it is just to feed the dog, then so be it.

But Tony isn’t so far along his journey yet. He’s engaged in a little game theory such as that discussed by Robert Nozick and other philosophers. He’s decided that being a decent person isn’t a good bet in the game of life. While it would be better if everyone were nice, that is not the case. Consequently, nice people consistently lose ground to the selfish people around them. Tony reasons it is better to be a rotten person benefiting from the kindness of a few naïve but altruistic people than to be a nice person expending energy on people and getting nothing in return.

So Tony is pretty awful to everyone around him. I don’t think there is any need for a spoiler alert here, as this is all laid out in the first minutes of the first episode. Tony does some awful things that have awful consequences for the people he encounters. Brief flashes of remorse or regret let us know an empathetic individual still lurks in there somewhere, but people risk real harm by coming into contact with Tony.

In the Parable of the Mustard Seed, Buddha tells the grieving Kisa Gotami to go to all her fellow villagers and collect a mustard seed from everyone not touched by grief. She returns empty handed, of course, as everyone is touched by grief. Like Kisa Gotami, Tony slowly learns this lesson, and it changes him.

In the end, though, I think existentialism drives the series more than Buddhism, but it is Simone de Beauvoir who gets the final say. Beauvoir believed, as did the other existentialists, that to be human is to be free if we constantly practice freedom as an act of will, as Tony has decided to do. However, as we will ourselves to be free, we must also recognize the freedom of others and will them to be free as well.

We must all suffer, but our suffering is shared by all those around us as both Kisa Gotami and Tony learn. Recognizing that means we will move forward with compassion and kindness, and that is the greatest freedom there is.

Doing Philosophy for Fun or Profit

I was recently invited to think about answering two questions: 1. What is philosophy? 2. How is philosophy done? Teaching first-year community college students for 17 years gave me ready answers, of course. Philosophy is a love of wisdom inspired by a sense of wonder about the world. Philosophy is an activity, not a study. It is a way of engaging with the world critically, not accepting things simply as they appear to be, and it is expanding the imagination to ask broader and deeper questions about reality.

These answers aren’t too bad for first-year students hearing of philosophy for the first time, but they seem fairly shallow for older adults who have already lived examined lives and have also read the works of some of history’s most famous philosophers. A second approach might be to think of the work of professional philosophers working in the field at the moment, some engaging in work so arcane and distant from everyday life that I wouldn’t even begin to know how to describe it.

Still, we do have public intellectuals who engage with social issues and try to help us navigate how to live just and meaningful lives. Kwame Anthony Appiah and Martha Nussbaum come to mind. Another group of philosophers are trying to answer basic questions about both consciousness and morality through experimentation—Joshua Greene, for example. And some philosophers are doing their best to use an expansive and critical approach to the science of the mind to develop a coherent philosophy of mind to explain what it means to be conscious at all (Patricia Churchland, for example).

But none of this answers either of the questions that sent me down this path. Most first-year philosophy students in the United States learn that Socrates is considered the father of philosophy—despite the fact that philosophers certainly existed before him. Nonetheless, Socrates is credited with establishing the foundations of philosophy by developing the practice of refutation. In this method, possible conjectures about the truth are offered, though not by Socrates, and then examined for possible flaws. Socrates, it would seem, was good at finding the flaws and refuting the conjectures of others, which made him quite unpopular in some circles.

It is worth noting that coming up with those conjectures in the first place might be an important function of philosophy, but refutation became cemented in our minds as a sort of negative function of philosophy. It doesn’t really give us answers to what our own existence is, but does tell us what it is not. The process of refutation invites a competition that can be demoralising to the person whose theories are being refuted. Some female philosophers have opined that this negative approach to philosophy is exactly the thing men would come up with. Women, they say, would use more collaborative approaches, which may be true—at least for some women. Many female philosophers have shown both the willingness and capability to engage in refutation with fervour. Christine de Pisan was refuting hither and yon in the 14th century.

Regardless of the importance of refutation, philosophy does seem to involve an ongoing conversation. Though philosophers often claim to lock themselves into a state of solitude (just look at Descartes, for example), they rely just as much on interaction with other philosophers (see Descartes’ objections and replies). So, the proper method of philosophy must involve engagement, whether collaborative and constructive or competitive and destructive. Philosophy, then, is a kind of conversation with testing, challenging, and, one hopes, some degree of support—and maybe a little experimentation with fMRIs and things of that nature.

And to what end do philosophers engage in this conversation? Is it to generate questions, generate answers, or to live a good life? Socrates must have believed that the practice of philosophy would help develop a good life, or he would not have declared so forcefully that the unexamined life is not worth living. Of course, not everyone agrees that the examined life is worth living, either, but maybe that is the kind of question philosophy can help answer.

Bertrand Russell offered a pretty convincing argument that philosophical questions can’t be answered because the ones that can be answered are scientific questions. From time to time technology and scientific experimentation move some questions from the realm of philosophy to the realm of science. In such cases, philosophers might offer a hand in interpreting the answers to such questions, which doesn’t seem like the grandest aspiration for philosophy—helping to interpret scientific findings.

I also don’t know that generating questions should be the ultimate goal of philosophy, either, but it is one I enjoy. I always used to promise my students that while other subjects would answer their questions about the given subject, philosophy would make them question the answers they already had and open up a slew of new questions. I once had a student challenge me and say that he was pretty sure everyone in the class knew what a human was and he couldn’t believe anyone would waste time asking about it. After I asked a few questions about the stage of mental deterioration at which one loses the rights of a functioning human, he agreed that the question did have important implications and could be difficult to answer. As promised, I failed to give him any clear answers to the question of what a human is, but I did give him more questions than he had expected.

I do think my life is better for the time I’ve spent engaging with philosophy and philosophers. If nothing else, philosophy has made me less sure of myself, and I think the world would be better if more people were less sure of themselves. Unfortunately, telling people they don’t know the answers to questions that pop up in everyday life is not always met with gratitude or praise. Socrates would agree.

Hagiography

In the Halls of Knowledge

The Great Men shared their wisdom

With emperors, kings, monarchs, and generals.

Great women shared their insights and guidance, too,

But their words are stored in different wings of the Great Hall.

It was the Great Men who laid the foundations

For civilisation, for democracy, for tyranny,

Architectural planning, sewage, and war.

It was the Great Men who failed to save humanity

From the thirst for destruction men can never quench.

Some warned against aggression and greed,

Others advised on the proper path to power,

But the Final Solution was always one fault away.

These hoary gentlemen appear to watch over us,

But their stony eyes have no more sight,

Than the once active brains that planned

A future of deprivation and conflict.

They’ve let us down for three-thousand years, now,

But we keep returning to the font for another drink.

Surely this time Confucius will save us,

Or perhaps Seneca’s sagacity won’t be ignored.

Maybe Erasmus can calm the passions of the commoners.

 

I will smash the stone feet of those assumed sophic.

Their dead eyes, long blind, offer me no vision.

Their petty squabbles resolve no crisis.

Let them rot and roil the dead with their mendacity.

Let them be forgotten for giving us false hope

That we might see a brighter future.

Let their names be trampled underfoot

As we race to our annihilation.

They should have seen it would be the only resolution.

Review: John Gluck’s Voracious Science and Vulnerable Animals

The entire medical research enterprise is built on a foundation of intense and immense animal suffering. Most of the effective treatments we have now were previously tested on non-human animals before they were ever used on humans. On the other hand, most non-human animal research does not lead to an effective treatment or even publishable results.

In Voracious Science and Vulnerable Animals: A Primate Scientist’s Ethical Journey, John Gluck describes his glacially slow transition from primate researcher to animal welfare advocate. Early in his career, Gluck worked on the infamous monkey social-isolation experiments that provided the earth-shattering news that separating infants from their mothers and rearing them in isolation harms their emotional and intellectual development. Thanks to this ground-breaking research, mothers have learned not to raise their babies in small wire cages or to occasionally perform painful surgeries on them.

In approximately the same amount of time it took for humans to evolve from other species, Gluck began to realize the great harm he was causing to his beloved monkeys. Gluck apprehended the harm he was doing after personally observing the excruciating suffering of the animals he was studying; after seeing the shock in the eyes of non-scientists when he described his work and realizing that he could comfortably describe it only to fellow scientists; after having a student present him with Peter Singer’s accurate description of his work; after having his lab broken into by animal rights activists; and, finally, after talking with philosophers about the rights of animals.

The brilliance of his account is that he illustrates why it was so difficult for him to acknowledge the pain he was causing and why it is next to impossible to engage animal researchers in a debate over the welfare of research animals. Typically, animal researchers say they turn to non-human animals when it would be unethical to test on humans. When pressed, they will agree that animals should be used only when their use benefits the pursuit of scientific knowledge, should be given clean living quarters, should be fed appropriately, and should be given medical treatment when needed. Unless, of course, the scientist is studying the effects of food deprivation, lack of medical treatment, and so on.

The research is further justified by the claim that non-human animals have biological and neurological structures similar enough to ours that results in non-human animals can be replicated in human animals. Anyone who doubts the similarity is scoffed at as scientifically illiterate. Paradoxically, anyone suggesting that non-human animals, so similar to humans in other ways, are also similar to humans in their capacity for suffering or their moral importance is accused of anthropomorphism. The argument is either that animals are not capable of suffering in any meaningful way or that their suffering is of no moral significance.

Gluck describes these arguments and explains that he himself held such seemingly contradictory views because they are taught and repeated ad nauseam, beginning with undergraduate study, until they become ingrained. Anyone who questions these basic beliefs is either met with laughter or denied entry and participation in research programs. People within the system become so closed off from contrary opinions that they are often surprised when descriptions of their work shock and offend outsiders. The only explanation for the outrage many scientists will consider is that outsiders cannot understand the importance of their work.

One of the more fascinating events that led to Gluck’s change of heart concerned a human patient who was thought to be severely cognitively impaired. Staff in the patient’s room talked about the woman as if she were an object. Gluck was trying to solve a particular problem. At times, staff could feed the woman from a spoon but at other times she could not swallow. It turned out that she could swallow but was refusing to because she did not appreciate the way certain staff treated her. It was the only form of protest she had at her disposal. When Gluck realized how robust the conscious life of this patient was despite the appearance of minimal cognitive activity, he realized also that he could not say with certainty what thoughts, beliefs, or emotions non-human animals might experience.

Gluck eventually decided to get out of animal research and began teaching courses on research ethics that covered a variety of topics but included discussions of animal welfare. (If you care about the suffering of the animals in his lab, you will be disappointed by what happened to them.) Gluck’s educational programs on research ethics were successful in the sense that they attracted students from a myriad of disciplines and engaged both students and faculty in interesting and enlightening debate on the use of both human and non-human animals in research. Looking back, he is proud of his accomplishment to begin these discussions but admits that animal researchers were the one group that never engaged in the discussions.

Ethicists can attempt to change practices from inside or outside of institutions. Outsider ethicists have more freedom to make bold declarations of misconduct, express outrage, and threaten established practices. Insider ethicists have greater access and opportunity to speak directly with the people who have the power to change practices. Both kinds of ethicists are needed. Gluck is an insider whose thoughts and arguments were enhanced and supported by outsider ethicists. He says he was unable to effect a great deal of change inside research labs, but he was able to speak to researchers as an equal to engage in an ethical discussion. Sadly, insider ethicists who raise ethical alarms are often forced outside. It takes a great deal of courage to risk losing a privileged position inside the castle, and it also takes a great deal of courage to storm the castle gates.

If you are looking for a book with a detailed and comprehensive review of philosophical theory related to animals, you will be disappointed in Voracious Science and Vulnerable Animals; however, if you are looking for an insider’s perspective on the views and outlook of animal researchers, you will find Gluck’s insights and introspection fascinating, even if depressing. The book shows that it is possible for researchers to be moved and to gain compassion and understanding of the harm they are doing, but it also shows that such progress is slow and infrequent.


Review: Martha Nussbaum on Anger, Apologies, and Forgiveness

Over the years, I’ve spent a considerable amount of time discussing anger, apologies, and forgiveness with therapists and survivors of child abuse and other traumas. Survivors and therapists alike are often passionate in their belief that forgiveness is the only way to move forward from traumatic abuse. Without forgiveness, they feel, healing is impossible.

Having a typically transactional view of forgiveness, I always held that it makes no sense to forgive when there is no acknowledgment of wrongdoing on the part of the abuser. Asking a survivor to forgive unilaterally and unconditionally is bereft of meaning at best and morally repugnant at worst. Only if the abuser were to apologize and make at least some effort at amends could I see the survivor then extending forgiveness, and I would consider this a charitable act on the part of the survivor.

Others have hastened to tell me that such an exchange is not necessary. They insist that unconditional forgiveness, freely given, is more meaningful and more liberating to survivors than the transactional form of forgiveness. Besides, they say, forgiveness is cleansing and is, indeed, the only way for survivors to rid themselves of the burden of intense and destructive anger.

I have always countered that it is possible to put anger aside without offering forgiveness to someone undeserving and unrepentant. Choosing a somewhat less emotional and inflammatory example, I can point out that I once had a moderately expensive lawnmower stolen from me. It wasn’t the end of the world, but it certainly made me angry. The thief was not caught and, I assume, never suffered any pangs of guilt for the crime. Over time, I was able to get on with my life, though I still remember it 30 years later. I decided to stop dwelling on it and get over it, so I tried to stop thinking about it and focus on things that could improve my life.

My interlocutors quickly countered that losing a lawnmower is nothing like the pain of having your innocence robbed (some described it as theft of a child’s “soul”). I am quick to agree, but I see it as a difference in degree, not kind, and I still cannot see how offering forgiveness to a remorseless abuser can aid healing.

My view was bolstered by the work and words of Alice Miller, the famed psychoanalyst and child advocate who died in 2010. In her 1991 book, Breaking Down the Wall of Silence, Miller writes, “Forgiving has negative consequences, not only for the individual, but for society at large, because it means disguising destructive opinions and attitudes, and involves drawing a curtain across reality so that we cannot see what is taking place behind it.” Instead, she tells us, “Survivors of mistreatment need to discover their own truth if they are to free themselves of its consequences. The effort spent on the work of forgiveness leads them away from this truth.”

Martha Nussbaum’s new book, Anger and Forgiveness: Resentment, Generosity, Justice, offers a third way of viewing anger and forgiveness. Nussbaum agrees that therapists should not force forgiveness, but she offers a more nuanced and philosophically grounded way of viewing the work of anger and the way forward from even extreme wrongs and injustices.

While many philosophers have ignored or dismissed the moral relevance of the emotions, others such as Aristotle have noted the importance of anger to a good life. While anger is a negative emotion, it has benefits for people seeking to flourish in life. Namely, anger is said to enable us to recognize injustice when it occurs and then motivate us to action to correct the wrongs inflicted on innocent parties. For Aristotle, anger occurs when someone’s status is lowered without good cause. Indeed, an attack on one’s character or social rank is likely to provoke anger and, in many cases, a wish for revenge. Nussbaum notes that revenge has few or no practical or moral benefits. Other than a temporary sense of satisfaction, payback accomplishes nothing of importance for us.

But if payback isn’t a useful result of anger, then perhaps contrition, apology, and forgiveness are necessary components of a moral and flourishing life. Most of us have grown up in a culture that stresses the importance of apologies and forgiveness for wrongs. Nussbaum traces ancient Jewish and Christian (primarily) texts dealing with the role of forgiveness. The most familiar form is transactional—if someone reduces the status of someone else, the perpetrator shows remorse and asks forgiveness. When the wronged party bestows forgiveness, the proper ranking of the parties is restored, and justice, it seems, is served.

Of course, contrition and apologies are not always forthcoming. Sometimes the perpetrator is simply stubborn, and sometimes the perpetrator is no longer alive. This is often the case for survivors of child abuse. In the absence of an apology, many therapists, as noted above, advise survivors to offer unconditional forgiveness. This kind of forgiveness is said to release the victim from the shackles of anger and make a flourishing life possible. Of course, contrarians such as Alice Miller claim this type of forgiveness traps survivors in a life-long lie that destroys them emotionally.

Nussbaum recognizes these challenges and takes a different approach. She offers several examples of people who move forward without offering forgiveness but in a more positive way than Alice Miller would likely think possible. In the example of the Prodigal Son, the son returns to his father to be greeted with open arms. Although the son has behaved quite badly, his father thinks only of the future with his son and not the past (his other son is not quite so ready to embrace his wayward brother). It is the focus on the future that makes all the difference for Nussbaum.

In an even more painful and poignant example, she describes a father from Philip Roth’s American Pastoral, whose daughter becomes an addict and kills several people. The father finds his daughter and realizes he is helpless to change what she has done or her future prospects. He does all that he can do. He loves her and stays with her. Nussbaum says, “There is no apology, and there’s really no question of forgiveness on the agenda, whether conditional or unconditional. There’s just painful unconditional love.”

When anger is useful, Nussbaum says it is useful as a transition from a wrong to a focus on a better future. In the transition, someone would say in anger, “That’s outrageous! Something must be done to prevent this in the future!” Nussbaum applies this model in three realms: the intimate, the middle (public), and the political (social) realm. Simply because of my interest and background, I found her discussion of the intimate realm the most interesting and compelling.

In the middle, or public, realm, I think most of us realize our anger at strangers is rarely helpful. Minor wrongs (e.g., someone cutting in line at the grocery store) are best forgotten as quickly as possible. More serious wrongs are a matter for law enforcement and the court system. Being consumed with anger is only a form of self-torture.

In the political realm, though, anger is said to be a great motivator toward justice, and surely anger has propelled many social movements to success. Again, though, Nussbaum warns that it is easy to get caught up in concern for revenge or payback rather than creating a better world. After great harms, we need to focus on truth and reconciliation, not punishment. Indeed, the most successful social movements have focused on the future and not redressing wrongs.

Nussbaum sees Nelson Mandela as an exemplary role model for looking to the future rather than the past in response to injustice. She says, “Mandela frames the entire question in forward-looking pragmatic terms, as a question of getting the other party to do what you want. He then shows that this task is much more feasible if you can get the other party to work with you rather than against you. Progress is impeded by the other party’s defensiveness and self-protection.”

For years, I have had difficulty clearly delineating exactly what I found problematic with our accepted model of anger and forgiveness. Nussbaum has provided a welcome bit of clarity for a universal yet surprisingly complex human problem. Realistically, we will not be able to let go of useless anger and focus only on transitional anger, but at least we have a better target. When we do succeed it will be because we rely on another human emotion—love.

Diogenes Versus Plato: Who will set you free?

No one can question Plato’s writing and rhetorical abilities. He was a superstar of the ancient world, and the fact that his dialogues have endured for millennia attests to the beauty of his writing. Of course, Bertrand Russell found it ludicrous to praise Plato’s ideas based on the quality of his writing, saying, “That Plato’s Republic should have been admired, on its political side, by decent people, is perhaps the most astonishing example of literary snobbery in all history.” Other famous thinkers of the ancient world weren’t as lucky as Plato; although their reputations survive somewhat through the words of others, we often have no copies of their original works or just a few remaining fragments. It may be that Plato was simply such a great writer that his works were preserved while the works of others were not, or perhaps other factors played a role in which works were saved and which were lost.

According to the biographer of philosophers, Diogenes Laertius, the Cynic philosopher Diogenes of Sinope (no relation to the biographer) also wrote a number of books.* If he actually did, none survives today. The Cynic is infamous for masturbating in public, going naked, eating in the market, and carrying a lamp around in the middle of the day. As we don’t have the original works of Diogenes, we can’t be sure which of these stories might be true and which are apocryphal, as they reflect how others saw him, not necessarily how he presented himself. The lack of surviving texts may be down to Diogenes himself, at least partly. When Hegesias asked to read some of his writing, he reportedly replied, “You are a simpleton, Hegesias; you do not choose painted figs, but real ones; and yet you pass over the true training and would apply yourself to written rules.”

So, it seems that Diogenes, like Socrates before him, valued face-to-face interaction over the more passive learning that comes from reading. It is worth noting that Diogenes was a student of Antisthenes, who was in turn a student of Socrates. Although Antisthenes was reluctant to accept Diogenes as a student, Diogenes considered Antisthenes, not Plato, to be the true successor to Socrates.

According to Bertrand Russell’s History of Western Philosophy, Antisthenes enjoyed a comfortable and aristocratic life until the death of Socrates. After that, “He would have nothing but simple goodness. He associated with working men, and dressed as one of them. He took to open-air preaching, in a style that the uneducated could understand. All refined philosophy he held to be worthless; what could be known, could be known by the plain man.” Also, “There was to be no government, no private property, no marriage, no established religion.” Diogenes, it would seem, followed the lessons of his teacher to their logical extremes, which led Plato to describe Diogenes as “Socrates gone mad.”

When studying the history of philosophy, we generally follow the lineage from Socrates to Plato to Aristotle. We could just as easily follow it from Socrates to Antisthenes to Diogenes. With the former approach, we find justification for authoritarian rule over the ignorant unwashed masses constantly threatening the fabric of society. With the latter approach, we find a rejection not only of authority but of all the values that drive the totality of social regulation and empty social status.

It should be no surprise, then, which works were preserved. We know Socrates primarily through the works of Plato, which painted Socrates as a victim of ignorant Athenian leaders who rose to positions of power through a democratic process and not on their own merit. Threatened by the wisdom of Socrates, the thoughtless and insecure leaders sentenced Socrates to death. In response, Plato promised order could be secured under the direction of educated and dispassionate leaders who would tame the rabble, leading from their own realm outside the cave of illusion and delusion. The Cynics, on the other hand, would cause disruption, encouraging the working people to believe that they could take control over their own lives even without the aid of book learning and academic discipline. The Cynics valued reason, but not the well-heeled reason of aristocrats such as Plato and Aristotle.

Further, the Cynics encouraged citizens to question the value of everything that is supposed to motivate the working class. For Plato, workers driven by their appetitive elements would produce more goods in order to receive rewards to satisfy their hungers and desires. Diogenes rejected the value of expensive clothing, food, shelter or anything else, and often lived off what he could get through begging. Having almost no possessions and no desires for any more, how could anyone take control over him or threaten him with anything? When Perdiccas threatened Diogenes with death if he didn’t appear before him, Diogenes reportedly replied, “That is nothing strange, for a scorpion or a tarantula could do as much: you had better threaten me that, if I kept away, you should be very happy.” As Todd Snider said in his song, “Looking for a Job,” “Watch what you say to someone with nothing. It’s almost like having it all.”

Imagine if the working class (note: if you work for money, you are working class) now began to question the value of cars, wide-screen TVs, sports, clothing, and “good” neighborhoods. And if the poor of the world adopted Diogenes’s views on citizenship, who would fight our wars? Diogenes gets credit for coining the word “cosmopolitan,” which is usually taken to mean citizen of the world. People who travel the world, speak more than one language, eat varied cuisine, and are not, to put it simply, provincial, consider themselves cosmopolitan, but this is not what Diogenes meant. Diogenes considered himself a citizen of the universe with no political allegiance and without political rights. He was banished from his home for defacing currency or something, and he was what would now be described as a “man without a country.” Imagine everyone being that way (John Lennon thought it should be easy, if you try).

Examined rationally, as the Cynics would have us do, virtually nothing we hold dear has any intrinsic value. We spend our lives working for trifles while ignoring anything that makes us genuinely happy. When Diogenes was told it is a bad thing to live, he said, “Not to live, but to live badly.” We can live well, but we may be thought mad.

* Diogenes Laertius says, “The following books are attributed to [Diogenes of Sinope]. The dialogues entitled the Cephalion; the Icthyas; the Jackdaw; the Leopard; the People of the Athenians; the Republic; one called Moral Art; one on Wealth; one on Love; the Theodorus; the Hypsias; the Aristarchus; one on Death; a volume of Letters; seven Tragedies, the Helen, the Thyestes, the Hercules, the Achilles, the Medea, the Chrysippus, and the Oedippus.”

Sparkle, Autonomy, and the Right to Die

Recently a woman in the UK known only as C won the right to effectively end her life by refusing dialysis treatment. Owen Bowcott, writing for The Guardian, described it as a “highly unusual judgment,” but, in making the decision, the judge said, “This position reflects the value that society places on personal autonomy in matters of medical treatment and the very long established right of the patient to choose to accept or refuse medical treatment from his or her doctor.”

The judge is correct; the right to refuse treatment is one of the bedrock principles of medical ethics. In most medical decisions, autonomy trumps all other considerations, including efficacy of possible treatment. In other words, you are not obligated to accept treatment simply because it will prolong your life. This is the way things work in the world of medicine, but there could be other approaches.

Given the facts of this case, it seems a suicidal person sort of “lucks out” when an unrelated medical issue arises. Unlike C, not everyone seeking death is able to find a legal way out. Those who are so physically incapacitated that they cannot possibly end their lives without help often find too many roadblocks to death to ever carry it out. Even when healthy people try to commit suicide, the rest of us are obligated to prevent it when possible. If we find someone who has taken a drug overdose, for example, we try to save him or her. If someone is trying to jump off a bridge, we try to prevent it. And if someone asks for drugs to commit suicide, only a few places in the world allow them to be prescribed.

It is clear that we do not always respect the autonomy of suicidal individuals. Even in the case of C, the judge said, “My decision that C has capacity to decide whether or not to accept dialysis does not, and should not prevent her treating doctors from continuing to seek to engage with C in an effort to persuade her of the benefits of receiving life-saving treatment in accordance with their duty to C as their patient.” The judge seems to feel that the doctors ought to continue trying to save C, even while recognizing that she has the right to refuse treatment.

Clearly, the law in this case is built around autonomy, but perhaps it shouldn’t be. Autonomy assumes a rational and unimpaired person making a fully informed decision. The judge notes that C is fully functional and has no cognitive impairments. At the same time, though, C is facing a diagnosis of breast cancer and a severely damaged self-image. She might yet modify her view with a little time and, perhaps, psychotherapy.

If her mental health is impaired, she may not be fully autonomous in the first place. If she isn’t, then perhaps she needs care more than freedom. An Ethics of Care might guide us to respect her wishes as well as her needs. A little more time may be needed to assess whether her decision, which is not reversible, is truly the decision she wants to make. With a little time and support, she may come to believe that sparkle is still possible for her.

I also think a focus on capabilities might be relevant. An ethics focused on capabilities would try to enable her to have a fulfilling life by maximizing the abilities she still has. Care and capabilities both emerged as feminist approaches to ethics and justice. On the surface, this may not seem to be a feminist issue, but the judge also said, “It is clear that during her life C has placed a significant premium on youth and beauty and on living a life that, in C’s words, ‘sparkles’.”

It is clear that C has operated under rather sexist values for most of her life. That is her choice, to be sure, but it might be possible to find new values. Many who have experienced crippling injuries have sought suicide only to later find their lives are valuable and meaningful even without the activities and relationships they once held dear.

Reid Ewing and the Failure of Autonomy in Bioethics

Reid Ewing of Modern Family fame recently wrote publicly about his struggle with body dysmorphia in a personal essay on the Huffington Post. Ewing revealed that his dysmorphia led him to seek and receive several surgeries. He feels his surgeons should have recognized his mental illness and refused to perform surgery. He wrote, “Of the four doctors who worked on me, not one had mental health screenings in place for their patients, except for asking if I had a history of depression.”

The principle of autonomy is by far the most discussed principle of bioethics. Discussions typically focus on the rights of patients to refuse treatments, not to seek them. On either side, the issues can be thorny. If a depressed and suicidal patient refuses life-prolonging treatment, is it ethical to respect the patient’s autonomy or should mental health services be provided first? As in Ewing’s case, the ethical problem arises from the claim that the decision is driven by mental illness and not reason. If someone is mentally ill, they are not fully autonomous agents as they are not fully rational.

This is a problem with autonomy in general. Our ideas of autonomy come largely from Immanuel Kant, who claimed that all rational beings, operating under full autonomy, would choose the same universal moral laws. If someone thinks it is okay to kill or lie, the person is either not rational or lacks a good will. How do we determine whether someone is rational? Usually, most of us assume people who agree with our decisions are rational and those who do not are not rational. If they are not rational, they are not autonomous, so it is ethical to intervene to care for and protect them.

Earlier this year, a woman named Jewel Shuping claimed a psychologist helped her blind herself. She says she has always suffered from Body Integrity Identity Disorder (although able-bodied, she identified as a person with a disability). Most doctors, understandably, refuse to help people damage their healthy bodies to become disabled, which can lead clients to desperate measures to destroy limbs or other body parts, sometimes endangering others.

Jewel Shuping never named the psychologist who may have helped her, so it is impossible to check the story. It is possible to imagine, however, that some doctors would help someone with BIID in the hopes of preventing further damage to themselves or others. Shuping says she feels she should be living as a blind person, and she appreciates the help she received to become blind. In contrast, Ewing feels he should have undergone a mental health screening before he was able to obtain his surgery and that his wishes should not have been respected.

Plastic surgeons are often vilified as greedy and unscrupulous doctors who will destroy clients’ self-esteem only to profit from their self-loathing. On the other hand, these same plastic surgeons are hailed as heroes when they are able to restore beauty to someone who has been disfigured in an accident or by disease. Unfortunately, we do not have bright lines to separate needless surgery to enhance someone’s self-image from restorative surgery to spare someone from a life of social isolation and shame. Some would argue the decision should not be up to the doctors in the first place but should be left in the autonomous hands of clients.

Many have similarly argued that doctors should refuse gender confirmation surgery to transgender men and women. As with BIID, many assume that transgender individuals are mentally ill and should see a mental health professional, not a surgeon. Transgender activists (and I) argue that transgender individuals need empowerment to live as the gender that best fits what they actually are. If surgery helps them along that path, they should have access.

All this leaves us with the question of when to respect autonomy and when to take the role of caregiver, which may involve a degree of paternalism (or maternalism for that matter). Is it more important for doctors to ensure patients’ rights to seek whatever treatment they see fit, or is it more important to provide a caring and guiding hand to resolve underlying mental health issues before offering any treatment at all?

One of Ewing’s complaints is that he was offered plastic surgery on demand with no screening at all. The process for people seeking gender confirmation surgery, by contrast, is arduous. Before surgery, transgender people go through counseling and live as their true gender for an extended period of time. At the far end of the spectrum, people with BIID rarely find doctors willing to help them destroy parts of their bodies and resort to self-harm. These three cases are not the same, but make similar demands on the distinctions between respect for autonomy and a commitment to compassionate care.

It seems reasonable to accept Ewing’s claim that mental health screenings should be a part of body modification surgery, especially when someone has no obvious flaws that need to be repaired. In all these cases (dysmorphia, gender identity, and BIID), mental health support is necessary. In each case, patients describe depression, emotional turmoil, and, too often, thoughts or attempts of suicide. Mental health care does not require a violation of autonomy, but it may help a person’s autonomous decisions to form more clearly from deliberation and not desperation.

Suffragette, Slavery, and the Appropriation of Suffering

Controversy erupted recently over a photo shoot in which the stars of the movie Suffragette wore t-shirts that said, “I’d rather be a rebel than a slave.” A group of white women wearing shirts with a message comparing themselves to slaves was a problem to begin with, and for readers who knew that southern defenders of slavery in the US are known as Rebels, the message was even worse.

Defenders of the movie, the photo shoot, and the quote said the outrage was based on a misunderstanding of the quote, which comes from a speech by the British suffragette, Emmeline Pankhurst, rallying women to free themselves from the oppression of patriarchy. In the United States, abolitionists and suffragettes were sometimes, though not nearly always, the same people. The comparison of slavery to women’s oppression was noted by many, including former slave Frederick Douglass, who wrote, “In respect to political rights, we hold woman to be justly entitled to all we claim for man. We go farther, and express our conviction that all political rights which it is expedient for man to exercise, it is equally so for women.”

In the UK, people are less sensitive to comments about slavery and rebels. Some have suggested that the UK did not have slaves and that the quote is therefore not offensive. Time Out London, which published the photos, said in a statement: “Time Out published the original feature online and in print in the UK a week ago. The context of the photoshoot and the feature were absolutely clear to readers who read the piece. It has been read by at least half a million people in the UK and we have received no complaints.”

The UK does have a history with slavery, though. Unlike the US, Britain did not have a large workforce of slaves, but that doesn’t mean the UK had no involvement in slavery. Slavery was abolished in the UK in 1833 by the Slavery Abolition Act, which ended slavery throughout the British Empire with the exception of territories under control of the East India Company, Ceylon, and the island of Saint Helena. The exceptions were eliminated in 1843. In the US, President Lincoln issued the Emancipation Proclamation in 1863.

Having been neither a woman nor a slave, I hesitate to comment on the controversy of the use of the Emmeline Pankhurst quote, but it turns out that philosopher Elizabeth Spelman made an insightful and relevant commentary on the issue in her 1997 book, Fruits of Sorrow: Framing Our Attention to Suffering. In the first place, she points out that phrases such as “women and minorities” exclude and ignore the existence of minority women. Comparisons to slavery are a case in point. She says, “Consider the talk about women being treated like slaves. Whenever we talk that way we are not only making clear that the ‘women’ we’re referring to aren’t themselves slaves; we’re making it impossible to talk about how the women who weren’t slaves treated those who were.” When a white woman suffragette declared her preference for rebellion over slavery, was she honoring the suffering of slave women or, indeed, setting herself apart from them?

Drawing on the work of Jean Fagan Yellin, Spelman continues, “The female slave is made to disappear from view. Although presumably it was the female slave’s experience that originally was the focus of concern, the other women’s experiences were made the focus.” Somehow, white women made use of the suffering of slaves without experiencing the actual realities of slavery. Even if the oppression of white women was intolerable, it was not an experience shared with actual slave women.

When this relationship between white suffragettes and slaves is exposed and analyzed, of course white women will want to deny their privilege and insist that they were only honoring their sisters. They can say this with great honesty, because they are not aware of their privileged status. Further, Spelman says, “The deeper privilege goes, the less self-conscious people are of the extent to which their being who they are, in their own eyes as well as the eyes of others, is dependent upon the exploitation or degradation or disadvantage of others.”

When privilege is pointed out, it makes us uncomfortable. As a result, our reaction is motivated by shame. Self-awareness is necessary to effect change, but it is also painful. Spelman says, “Seeing oneself as deeply disfigured by privilege, and desiring to do something about it, may be impossible without feeling shame.” The shame provokes a defensive reaction, but it can also help to facilitate healing and solidarity, in some cases, anyway.

With the Emmeline Pankhurst quote used by the magazine, we can see the defensive reaction. Many people defended the quote as taken out of context, as somehow separate from slavery because it was British, or as a victim of PC culture gone mad. In the end, though, the outrage at the use of the quote helped spark a conversation about the suffragette movement, Britain’s role in slavery, and sensitivity to women whose experiences lie outside the realm of so-called “white feminism.”

Ethics of Grief: Profiting from the Pain of Others

Imagine you and a friend go to see a documentary (or even fictional film) about the plight of victims of famine, war, disease, or oppression, and you bawl uncontrollably throughout the film as your friend sits next to you unmoved and indifferent to everything happening on the screen. You think anyone who isn’t moved by the extreme suffering you’ve just seen must be some kind of monster (or a sociopath at the least). You feel, in short, that crying is more moral than just sitting there.

You will admit, of course, that your crying through the movie didn’t help the victims any and your friend’s indifference didn’t really hurt anyone. Still, it seems that a moral person should have feelings for those who are suffering, even if you can’t find any real benefit in these strong feelings for strangers, who get nothing from your tears, heartfelt as they are.

In fact, your friend might point out that you are getting all worked up for no reason, and it might be better to keep your emotions in check. Your wailing for these strangers won’t change anything for them, but it might impair your ability to attend to problems you can change. What good are you to your children, for example, if your mind is on the poor souls in some far corner of the world? You should get your head together, friend, and get on with the business of life.

But, you counter, if you learn to be indifferent and unmoved by the pain of strangers, you may become indifferent to the pain of others, including friends and, yes, your own children. You don’t want to become the kind of monster you now suspect your friend of being. You want to be the kind of person who is moved by the suffering of others. You may not be able to help in every situation, but you do not want to become callous and cold. You want to be a caring individual. It isn’t about what you can do but about what you are.

And now your friend points out that not only did you cry during the movie, but you seemed, in some sense, to enjoy it. In fact, you apparently went to the movie with the prior intention of being moved to tears. You chose the movie because it was described as “moving” and “emotionally riveting.” Will you be happy when your children fall ill because it will satisfy your need to “let it all out”? Perhaps you are the monster, after all?

You didn’t enjoy the pain, you object, but you enjoyed the high quality of the film and its ability to elicit the pain. It was beautiful in its ability to enlarge compassion and trigger a caring response. The film will help, if nothing else, audiences develop a greater sense of concern for others, even if it doesn’t affect everyone (with a sly and disapproving nod to your friend).

And your friend now points out that people had to suffer in order to expand compassion and develop a greater caring response, so the suffering of others is used as a means to your own ends. You are actually acting selfishly after all, and the filmmakers are also exploiting the suffering of these people in order to teach a moral lesson and even to make a profit and perhaps sit in the spotlight after receiving coveted awards. You can just imagine the director’s teary expressions of gratitude and exhortations for more acts of compassion at the ceremony.

In 2012, comedian Anthony Griffith told the story of his daughter’s cancer in a moving performance for The Moth. The video quickly went viral.

The video on YouTube now has more than 1.8 million views. It is almost impossible to watch the video without sobbing, and people shared it by promising that anyone watching should have some tissues on hand. For reasons that aren’t entirely clear, we enjoy experiencing his grief with him. It might be objected that we are emotional voyeurs watching a sort of grief porn. By watching, we are not helping his daughter, we are not preventing future cancer deaths, we are not improving medical care, and it isn’t clear how we might be improving ourselves.

Paradoxically, we simultaneously want to avoid our own pain but glom onto the pain of others. Watching the story enables us to experience the pain without having to actually experience the loss of a child. Doing this while watching a fictional account of loss seems justifiable in many ways, but to seek out a chance to cry and experience this kind of pseudo-grief that is provided by the actual grief of another person certainly raises an ethical concern.

We might say that Anthony Griffith needed to talk about his loss, and we are providing him with an audience. We are doing him a great favor by listening. We are honoring his loss. And he may agree with us. In this case, he is using us to help him along his healing journey, but this doesn’t seem to be what is going on. We want to see and hear his story. We want to be part of his grief story without having to do any heavy lifting ourselves. We watch the video, feel emotional excitement, hug our loved ones because one never knows when they will be gone, and then we are done with it.

We might say that we want to hear the story because it is well written and well performed. Griffith is extremely talented as a storyteller, and we appreciate his talent and courage to share such a personal story. When we watch the video, we are paying tribute to his writing and his acting. The only problem is that he really doesn’t seem to be acting. He has merely put his pain on view for the world. He is certainly talented, and the story is well-written, but most people will be moved by anyone’s story of a lost child. It is relatively easy to evoke strong emotions with a story of intense pain and grief.

It may be that we want to hear his story so we can prepare ourselves for the times our story might be the main event. Someday we will have to do the heavy lifting. If we can live through Griffith’s pain, maybe we can face our own. By experiencing Griffith’s grief, we see that we can also face it and live through it just as he has done. We finish the video feeling somehow more prepared.

Or we may be drawn to the stories of others because doing so provides an evolutionary advantage. By hearing stories of others, we develop compassion and care. Other than providing an audience, we may not be helping Griffith directly, but we may be better able to empathize with others in the future. We are preparing not only for how to face our own struggles but to help others through theirs. If this is true, then we are actually doing something noble and beneficial by watching such videos.

Or, maybe we are just seeking the thrill of an emotional roller coaster ride.

Comments are welcome below. I appreciate corrections to typos and so forth (randall@ethicsbeyondcompliance.com).